[Binary content: POSIX tar archive of Zuul CI job output. Members:
  var/home/core/zuul-output/            (directory, mode 0755, owner core:core)
  var/home/core/zuul-output/logs/       (directory, mode 0755, owner core:core)
  var/home/core/zuul-output/logs/kubelet.log.gz  (file, mode 0644, owner core:core)
The payload is a gzip-compressed kubelet log; the compressed byte stream is not representable as text and has been omitted.]
]L0)Vjʃ*zƨd͙A˛@wk mt,sW0:]d=Kd!wn'ABӆ .Jo3.5^KL4(2Z38` 0!G z A*$(nK C ң FC/J:&!L⣱ X31EN-yu2hxQWFr!D/ebv% c}a0tV[6TPdh1q"Z2۲2D"0fJ+2N]Ne8ǨxFg}mCƧJEcYzqOeR[ YyFэ ?>ƽKA99-w~>b% h_e6-:/|ϽK7&^>:hJLj!T]n $|-/E{pƑ@3~d솝]RH37+?x8I^f?Q[9X]`,y^ W@_#Ai8{V>T8z7V"k_M8_ݜ/~T Ԝ3Ir˓m=G(O6K>j33Kɜw&gʯw h lqq9Y[ivI^ ?wƚFҬq$PGjF4cqEfLBOדBef =z6/`GMiԦ@TQG4d }z}ʣF(j㜟tC-Ck抨~_zq0;ʩ.YfAg_]^MQ/LQSTqf8)˽п$!)ɯ|˯?ׯ >}GDe<)DP&#_A^Pǯƛ -ڜt״9q<դ$/ f4#c_M>&/"XZ/t{I$ٔKP$##2Y9x-kS/q!EZp+:ކvjqLF80PF*+@\jGl\a^zYl)Q)dwʡQ;>=}޿}GrcҤK YŃIzՌ)N蠕'$VM8 H#XYJc=X"g9ȈHZNϐTXL0[6y[#g{Nwp^)@3#y&@B*L1yUV]B%6b{QM.x$oI0E(%rʁ.hDIn hZ$^>\a/Pд)>p.))sqPe+RtYEn?[R\uqLGļE\P 2GMRZDDiGv .zX (#ʾvwKs5vu6ݡrKw ;"\ \v;zoʅKZiYwyte-8B0&W qUNb1g&pS:U>ZlYx S;^BvekJ !$XB^A9[ LYM :gLQ;A'!:V{w`r^ܡhOw¥ʓv|tvw =yU_NmS"lW$X3o6R[2*AB2qvȓX9nc\eY,Z]0Y:}>cƈ^#ʁ4"gIwX&OTtSԎ5ss9H[lY GиA]GѷBh7 U>r yg:@'@NPngYauni{ ҬxлLx|4ko?. oQ^@͗RkkIFlh7)z6KrEf1Z$2YB}tOMp?Gy$sS)y%=Y;!aA҇1+2jXQJ|pMBmH4/D`:ɍRIĒo7ahFfM۲:XZSz0> $hohG<9Oצ=V>rc?IuGьJ/;Bb_5S 0%&Ԟ!+?o@&+d+HNV^ Wi΄ߒUll0{k %R9#&󁌛lD< {B(D##*f!F0)ȣ`YkBPRhs㹂$R6fk͘tq޹ jB!߇5+<Anzk5~awf*8qD#2ZȽETHTF,sde'l%o:`C-5tଡ଼0Oƾ1r 0fob] ia4G7NӒJi r 8ԩ*$*Xbe* Tsv9kgf^ގi]2Y[Ima"wN"rij&'~ ڨFFbΣq8OSɓ.ͪy2=eb}4PIt_IT熜zOM\9rW?y7~^M ͫl/i>Ota|<#3BtEm0'ڭ3cR ! 
ptNjrpc?᪩ OxU޼{C4 4ò|4 EPy@"~-@@*qf+%DJP&aldN`KT!:eF 8˓6BL$9sYF˸VZ426r;C̺zY~Hc1 dՓL˶J;dz d*2O;Y*-&C.R68*+*U^o3hF3!f@fh3V%:Q2d\h.x#)i^|N7,/}DF< $Ѝu7hHhs=|tcg:B.PέqPl"jgo_1с7{7$?i\o{ôC`="?Uh2;km#7/]G|H@(+ >m5jwZ=,k%YZYΉ9 p}z41J*Y{1EftKvhN;o5zRk/9'߽ڂ-}XLF#ɽ,'*6.^[ybgM3z6D.'MԊ?,GG_at9VlU"W @-AU_bG^=avyi%݃Q-r-r-r-r<n t2QRGJRk+JRk+J/MJRk+JҫTZ(rr\$+Jr\$+yPtRPꅀ>` njDZHPF+S0X08 &ȥ8nh` QmSo-mo:抭Ǣhb6p!Ç8Nz-sx@vU2V  ].|wًzWO6wyop}Z;r_ARhX9L%$2pAEa2(Fa*t!A wWZB3STSבFvEdu*) P+!0'AlqONJqUmf^º {AY=$0"\ ZyaAkQP)"6Q?U\[6KUxP4HNJ1ϐA .8@Tv)3{Ge Rҁ1^QǼAZ°cB bWtf,qߒW#j94.cw=37<^KXrѥrQLxj+]av~scЩYU ) *UG;/EkgɐL2|bX!SLk)(] wQA^.gӿ[tiC(XK Ʌ֭*WBك:[ワkV~TJjQw }{9>(4P%0Fvqp[!?!\ͦ6 }0èI="`NuΤyr1¯'?Uߍo>,lqk~c{}3Y[k$U.`BCi$Gb|HMÐajfYT V0btxT2~$Fm+#jR L040`T+Cg;Im\ L":ՓXUu"knٷaJ ?%`'t6;~`t;--;꟏zTp?&S';;!(ɫO>)姫\a>՗>x/R X M#@AC/=CSŶ9zW9q<.rW*D)SH3>PU&pUk!7q}A틵>GEG:c#I.)8(#EDD)[{ Jk6LKQqjgȍmaǠ2F5*KJ1^- 97349RY$*:u0ɾIOu4 hk7bQvvvN>k(7f--M:%0U98KD!r\N\.'HJr$*ɗ 8jᮏV-n01s@rp*M/ K#*Q1"T8B&=L0p{}GZWv- Ӏ%A`j&j-1'8epq@2ҪP0T+NFTj4vCP )Pīd1ܲ9-F f{Ntk[3APPL:<%2R, p&4ZJdV{Iʏ!XN ;II G@\hlQ?a¤ހ4!QSU ^ֱ+rdAq1.uXERLu~U)b*Rm+8IpJ)[qtSd.#ye$0FYb6; )U΃9UK*CQDPvؓ x$ )[X1c2b Dk45D{j-w|2pxt_jX]%=q3V5\rVWޗp[$b7 >~ ]׻]OP7~yIg[r྅ʙY MBN~S0^кSbz}rC6juլ:,jnݾuy͝Y[r=?5M q;ʜ]s_a<5R{[0yaw Kfk HA_O4j_ulq+1{aԨ*o|oT@XW^ȸ R/1",DV1тFQ4`pH~rEc|}ҦTP}3k}z]ƮnֻߺVeʹEŜ{exw@}c^Ӌ]&I ,Rg2DOٸL^JJ8%Rb%0liCndY@;v@w 1'тÀ(K=I7̧HGeF],rY,+$3 f e%A.,XY(Y(YD7vjcI[& v3ӴlimC%=ԹY˝[rgM[rå+N}g_)M”7&>C8n9g32p^=nx`i*pm{E.3h䮓r i{-]ٴ3kz&gAvVÚ{n|S2&?}כnn:  $+2E bv( ,z$11vqD1VYE 62( F刊`)IR/ԀPELۗEf ^^@Tk36[5{v<+ڦ}l+#80? O ӫH'wY6€j` N .+'5"|sLݾU5F aږ7iiNwvN>% y{MW}`av-Eusƍ3zi"bnaVb!Kd TG.yTpUs80$5AZX,@(vFoӿ|0af9ȍ$S{+%Res5r6hUQ厖oU9Sl[Ka~Y7; H 6cDJQ9fx,l\IIA5(˹6Fm(Ō`hlU3 %ZRR>RL@Gm7 ,3AI[#gf\R []u!nY ~J~vy|< g4&^/] ZkD cQlmX0/]lټ!`z&ݘ ^mecI=,],.IK0@XEs`YbS1aEK n3,+\,)nF@P? 
) ۔ ^C4F `6 !e m89qy(\7x,=smQv``UYiGRcHy!A&Cl &H&}dTddGu]R*ӘEΘ6@PR p \GN02 |>hRZJiWMa{ [v˟W֜vSJ Vz`%̾G#7&k+k+g(Y{a5V[Yc+kle5V˯5V kle5VV VJcjleJijle5rﱕ(,`mUAfPTmUAfPwX!kw]*JR*JR*J,oYJھT, pAk)X@*&`Au-X,Z2nvdLaNji g͔vRҎ!@suy1=l4,h@X ©d8UaSڐ/)=8 |KipGe|tl6RsN"rĸYd@6jQ!VynblRuEsi _o_OF4i04+W1G[>G߽mtٻm|6|wԷu<{X/lp"~+RkP+EmI\ݷ(` RcڛJEc::5O+ͣT P9gJ%-Ir+oIR%UxToCD׫]נg()HSFkdt*gyFh:g ˈ`JZFX'Y4"צew/V\(cZٿGfyެ=?Mo~lZzT=؟\藯N0_0p W'~q? JpCu?+g?m'=fO56 |22S_~Ⱓ=.SP)ڳw7ģP9Cwg&Cʹt`O)$fFJjVk3vů$e>Y!H`97^@CkCN&:]o$_ _+CC2e{fWJGz揟D1I 3䕏ڊP\NK) 1IxܹuLjd轊6g`2sB)TYn BKzNfZd!Mu,;i2N8B)4g d%. aƔ j+>zّZ YMJv!sNoY\yeW&cP^&K h+ZU3 ?'5'aJNLL :|p4^,3JV,zaRUM Wj'U;iIE0bRdm7ITdbHVdK68o%Rs8P*G~z|ORk;yD I\C PREAB&3XҥM`8Y?v}Ǘ^4 Ev $O4T=g=LaxDa9)u8 d9<#2vPmzÐR:ܨ. i!͢O9]U$J" ƽPljB)"NH)ƩJEMxJY\fseVhK!(wh3V )Z dJFNk-ٮ)y8e'B |Ұmg Lo/sy:U8Ve#.%{?m\o߃r* Ӥ%'/k4gU*GU9zY(qsd1JF%YHޣBȭF)VnEv[cR"Y (,)%VNH \AA(] olW'^|oA)_(Eɗq6,M{{|:TU3Sɀ3!sE[ni*y$\u"c'sde%^wfh3Xœ[s8kZ욕s~[o@J#_Qf4 Bג]U3 Z03+pCN)R#~J2z)_V|0KjMׅ'+)y&Fq<=n^ j0ZSYlEא ߨvfeBQ^RD FR9M0RqPrHfx0!u2@ m$ݬq7b+ Ӧ mݪ?NXtrOh=3*Ռ3^r}2ۦjڃzY:&[B!7@!NA'GU,_/:~0-,fSzȇL2j S,9\X BEF8a) >YzX/_/b}K&_  i^\-m"\ ;^dvnC'~3z~o[%^VA޲RYd{%B9vLeJzX8^crRYHFlꆨ8 P=t/s_fy"Bv !{9c53428rv ]L`WUZ>}=VZ-D5$:,xnbhQz?q҃o;naV$$zcj LdJY0eɚ?Vƣ1G8>9UWt.-K A/\]__ Gm!;fX n{ /x4GB$78uCw2@8MᬙwASh./Fξ$d@r . 
815 /.y*"ao4aVfo?',hDRc& W2x%g 0tyk3tzrŒ(Xκƃ&zƨdy"ˢX8zY^(}ZJؖʄW{ݩM{8VG&㺛I:K!eMFN &VN0ZT~jB2 w)q!X`@zC^JtLBYC-wW߾1}T_w܅w㵬7&FȂcF[A@9yΣ7IAf9:s*;cj ͑JB:}Є[̲mD$Q20m$ , QDrDIKU(9J-B➤q)ٛ_b Zx$Cʅ\XQp>LG+#KƔ/0\iBb*(Y"Dʉ$HhImB˜)"0fJ+RJHc|X cT<>{6!r"ѱ,DNFM}Bn8¼ gūQpuzBw,IprNc:/xqKbz_e:.˼S7&`-7 ڒ.WO˳ֶY6p-iGo߽;:XԜ3IRdFђn p.ӿINHoԨ7`mOу_[W?gO~n:aU1m-q ONgg+ٮ8';JfE] +w-#1ع*|$i\iL򠰍c~ٵ}YvV%#jߏi(a eNC%d' *H<ʀhry>s0_wvV5ao-dzǃ08%"31?}ʿ~>~??\@  ?>;K//Ž4XZ"4ﲮsJLe!(>钏t5[{nv7 Bumaqo` #ɦD:')HC3.btfDמ]Ko[;+]}nHV,ЫAoz1^΢8NMߧ(َH#Ӗ #Y@_\If'饣 eZ}8\k=O 4li -[֧8V6\ L؜3zBΐCJ[%=z2ѿ}#w'߭UsN=FNL  ( <ɾk]r[q)~<$VW/,HMfCċKf]dm5 BlZ*d!: 9u$l.ִR+Q"u9ŐHWC0gCalkYj-"IƀQ=cJzS% <%eA=s,GRtxU\*d] IK(dԢMQUK*^~~1Iƹ Őؘ[rpF_ʭjc<&#Dyչ 6KA,TJ V\b3jr-3y]v;z ᣂ+Mtu0l*rdJ! gfJa'fJa}RةfRN)T, ?\u%7 Ao3H/ESҦ;[1# WMm3AǬHdwms* EChW0٠J&Uc6oYR*P17 %yTBm֥W4ߎ뭎*IYGDq==:JEȯGGw߭Ђʺ_?Nߟ}>]i_0 xl C*:_-osokv7c1!MvroQsǰSҾ;J1 C8]-*Q3ߖ/?]CS0)֞T%HSR,h/GZ Qh$K$X$ŹbץR bgX4]?',A/ˊO濟}Y|s?|:Q1qY ,z/ybr9]:?工|M/UY_~k>V.M.} /+wG-I;WeeUo}Z!#>YP߭xOP![{t8ͻ*o(ё=cNarD0$,n2M`#[GGIn l=:iqO]u&1rY|kJ*T-'CqhgƦwxΐӼbJ mhd+6QnN$_Z6)0r6B{sh;_H>Ռ4>q66j#8G-vPhpZ!:н gqN smr6B5)je8(3 zսKo~i{Z-i+Rl/qֵO\boڔCk#X0% E@ 2am`IJ%*Q.4Rp2AT"Vbn& Fydp>eS Nz2"WНğ.j-D|ѽx ~09Yt<)eramqkTX+|sdHȦZ)EE5SJ͂l'/! #&Ho^/= =?٢mk-Y$vPa9(ꋥ^P緢6P~JemV'\&:@Ի"1ʃal|qȥ&wʋߏO d kz8O}Es5eKXt{>U\~յ |qګ?lC97Y ƨ -.EMS?VZK `T0K)Ռb+&DuzjL>ZCMh8GzDmu UPԟS_YM"&ŗ|uz7L|rr+حhT Q HMIJS.uYj>2)^7M#h?.΀&J]*)RW1k39dQlrvZn9qǶZ n3}p:+C,XI"CEgQuM&WIdJzduk٬N}uԮE#m5" ֈ0kY#PV1!#WP"UɄ5`#TY\8Z^Et~Y^ZžA(nWŗa6F,n3ǸDs"َɼMJ'J'7Ci3J掛?ccr׻.4b9SRvqJ25ށ4&={WI{TI* _,0k,{ie`ġQNYH#D}4n"@̨Ӎ>""ښK`04 TjJIEW1Yţ VFGG0ա4~P)komxć=*W|NrM^@7?6 &'1V',sЊLC"@s Gf&I,gorEFY emrjCz5j¦WX6bӸw!\-aOj5tQ15OhQB l9\}}bv=YF2t,eꮒibMcPjXpU\0xg)Rԏ;0V{[RMp1)%s4bmM`mkQJ 'Tx3'~ N{J3Kl!D,j Bzțtʵh0eXZ$92Fq\O%'oOlw7DŽކ""єz5X2vcK&JStִ0K۸')"&|`kSZ+fLPpT&SE&_fp4݂jAmFro )F]br+&h{ӏn]cw؈ B.cmLJH.Tb#zpfXK%-ކN<=^(:}5>Dg.`Ír6cEuw7xiLs}̐3eb%Ȋ}R^J3f"W84WOp֤ܾeC$wV/AwvovߨOpPѝ2KSԢ^ݩt0WFсߞEx/n1Iq01sWTOˢVCVt8@f´g;[̭c!JҹjףoDRzZ~Quztu0Y5x_[yMmroq隄NVug? 
ƵysjuPreF}܍hޓ}6H^énhyAl}ޖL[> cۄ٤I$X7)t+͘u`uj#0D5υ} LPLVTT֬0q<`'7L_ETB?a? Lś_''lj؜i0q"P Ndmyv;9/yW-_|5Z>-4 b }9"X&XR4b+*uۥ k1{ [@ٙl&"͉Dbal_S =Я$ v,xnr]}aQ1{-n {WR='Qg0r7)R>6BuoM?ۺאHUEd D`X'IFwn{uSHNYֺ2[SI/>].a$3,#MMZ=7cDŴ~r.UV- ?%`'JH<?Jh8-2zyUSP!TXYL\boP>O$/?}/|?.?\&:_;髩bۼ5j÷ylr5{9sLw`!|I|G"A u>O){ >7To( 3gkh3&,(FQq&zMEe4MѰs "I[l3Kͱ#L!Ֆ[{S'8d(#E !I%eIMLc1> iIJ* YED"60$Xt,(xJd`ث` kB5Q4Ik 6Pks,NR|= q\hlQ 2aRk7@M$ rJ"` : }U8:q'gUD* )d4&jci ʝc:<&dHŁ2XJ4HHz v[K-c1U Z;!2f^ͭh[3/ΰz$w@ϨQX)p]zZz#/GR SynUd߽&MnL2JMWEYpew1=zQvimm ֠]`XYEa1 5y2w}x-d=KϷp,1Q̙JnO=M9B-m Z E%:z t`t4:+Gsot6fC]ń02YOn{<y- #x8\3|p}W7LM-+-66?\CU`Psm#5-'.[uxom>Cފ;$K}rkm#(vN]i M6ѹmZԩm ھƠ`6_2|u|tBA(D@Qt+)͘<3һ̂ .-g:_cqGzppn"yn^'j\?(hklˀ5|/K@PKxr;m'OL /'|xˆ^FcD2Y+냉KMP< #(H8H?9_/]97.Ɵf5 wh2ދ0 i 8QKܰoWM% ftI ,Rg2DOILґ^JJ8%Tb%0lino d@;zt:9az>0]M;DZ0bst|^fQ5|҅1yKCR#fK#˺ZxfxbAgI./ Vz5!v(ԓDjjsnp = I&x\j)ArSr+.Ʒ:pYle~Kh~`lLXx}`*ސ۷K4x-~S<wUu վvվػ<\ցb8xr4}ty]dT0**4 Q vm@nL*$"N%bΥ!"ǘԧDϑ.1EOLcS7 qYyIZҝ͠&o-SL z k~AO qܛAOӠn' ͠]3zA&QǏo:ewo@w 3+Bﻅq&\/{e5;x;>^cC)+^Uև.Z] 9q&Kq!`Fw[Z_bF8v?Ŭ77d 2krA0Tipn<zP){pTa ԟ zkHSu/&ӭ} /r"7(MTFo:L#mX\ﲞ$J@|EDk) F8b0ᆫ!ie{#ya`"ZA۠FH"I`t 2#$ :tM<ȰFȀ|E&acN?ƈq!/( ŧ5rEE 㭈QAPHY˽!1*XQ({Bi)1]s) FSa4IGAŚ > Dp4f[ A!`RQHdD#+TT!H%hgK:+@f5Ov c P[SQ(EdV9nIJzF= „reoIIRb[#?5"=$tyQpUSc %M7z!5筪՘&NjglOA(B[+iŦ8$CMH"01U،wqReNe˞3v8oր0&wTOyR%a$Ae(PtHba֜G`,2AB|ȩ !R<GĕxD*Aa0΁@pՒGs1SfON:Ð]6L5:A$Xw7!Y1dvwJ¥х"r m&Lh~Lh̩`ᓱ 2,sӒ)E 0[n>fմĥ0at]:6ٳ=k 9֩ suy@K):ǒtfWOЙK06tRZIL: :o}(w.:o&|nRw6 j}Ē]٥ٝ-NcʠZ;eyu_K:$*8ʝ7\ ;1qV{tHbaJ[oOۯ^XY/ssxߋ րgKOmB·O$gL v6HRܓ<dVS@ 6 cy"Ñ0Ud߽&{ԙNѽ>=XYiDbR˄O.N~ܟ"P%.+t4'woIGQJmȴ1K&?{OƑ_!e6TwY' quZ\S(٤(MRbQ$6b+dxzgLӛTuoě}lf˜؃/ 'S e\2P'#q^Y (CFK&I;>0K?9bPҎ0GK+K_rj^oJy RRqIR j*O5sN_&<\(͙ ȡG+a5ؓ*Cw]|ҎG\fS+PD9cw7:)tTyKxk0lO 5Y}+ZnX&;8M\#{17Cz6rХ035/n YBm ~ SA$ m UpxiSkKXl{T~s!gg=d܅#ϿoM[M*Hv9NS5f?A[Gmkl$F9=Woηn}uUaf4.69nlDk{?XQ}늁p緃əj.9 Imur Gm̟QU[љѽÏ'?j،2-a?]MalDhm?oAHJ9GwJm˰e4.,hCjPR|4=yXݿ]w8h##Wmն  N?h$/,}6!w^fF@U"*~Q=-~a?]P.^^2k?FN _!Fid~}i*NN| /$/ݏ?~{?^RE9|/m LaO/w\כ-Al(y۬Keݿ-+~tj숙H~Pgm6:ͽޏv)CO- 
_L90`Q@r'|^%o(r4FA<ޗ(,Ũa%鵭 3R1YfȻBKҨ_9@lP(h.m "⸓"1FIAD;NQu._IW藸vNҡs| =Lx ua[_NCް'S!oWLYۅGZ!RˮOUj&M|agv6HZQygXM8Н'B W4*ڋĩHxr6)& tr1OU."x(o`@r3k#9G(;Ej"(QpMbgg }ٱF1&58O!Y=89`: 0M` T[gdktEi+qNes$"QPHe*F"! ZeQ/"^%t4vD4slTOqZ{orж>ưyv`R,7~.+l ݦiWju}W k5_ٶJ u}845O碰4iZ\b{_(ץ -CҜތGHen_P\^8USaR8 +TsA.sePP~UN*!׉"& )ʐPD%dqfxUG# ?5c}ü wћGP<0]MxDS X4zZ<,Z/Glň~t4?|sEaeX-\c͂k\j}3 ]%@IC@_'YY|fBn/(E>oyZ#c[M< @ C2FZOr!'RG ɞokL5zw/xÚo!^?L)Vnsd˭Fͱ,M &w w:J]h^ @̧S˜ `dQ Go]& lDޓ@ e9k6!۰]ynR7p΃-i$H"WJ6> m1L 8?'(w3THY"!(jk&g F:SC[mgK7ow QNש-n8T\hkzG_v޾dVCW7Z[荬FnO=t AaMzFLJhfn3oaλI2 u)S+^89jΎ?mdiqLx̫6!v5!z&D wȎś[g?a'"7$w)B 8ټ[z)y mٷ"+iI y#x=$ $)n+YtOVg}fo ;źerzӧ,j;+,dKHPϪ*.$3#O|QL0ǃRL DL &ႩDQ -H! Xyvbw}-wSx9wyip }}bmLs;rk7H[NZgCc*oU,~Tʪׯ`xYG7%P-W>c"72CMК3ę!8"YZJM,yY=/v=pճդ#6w]k5r*]ew2;pt(LXQ2QBi Vz;YnԳDji6op˴& I<.t3 1q%x^Y2kZu޿ gO*;@6Ǖ e{t oTmu" u'"/oH{Kک'+_BxdLDAFDd"0Aq7WulV.%eK빱1$IUI$,3⩣rs24ᩴǣv6_1/in2mnj)ܧ+MЃ}S* kZQo+@ERRfv'wjeGQDV(KpAAsɤW)qy :Ud ATTC[ٌo:1[rEQ$8l꾫 jJl-[^Jt]G8H P*ŹΈ5x{MQ8ܠ%#v٨6oL5*Xj#ނ5Ex)Z5[$-LŶWY_A_\_-? 
mtDnS%>8ceNrGMRʹWeHRJ EoDFrT!Q$#%ez#h%T9|ol&i\;PrcH*'U`.{WxR\yY:E!xCL 6PT;rК#k'.%c9#af.Brd#3ezA:$X1@JUR*MȤJsbk)0㥼慴{4s{-Ӈ[O|Y yei{NP )3oTH :;ٻFr+"|$?ЛMnzANm93eMY3zxQv(o Z(MC=P0Bj 2D_ WU.%vo&\oڦ6FJ(qтJJ(J9* jIJPX]4(6}lu>Q $A$.bUa p%$KGHP!&v8 D1 eK/xG==4ȣVؖ RS Ȕ {:XR$5l zdlt~ҎN`[$35cq[u]Q6Ɲm' TH@DɐI4qC82 a\ND35WȪp1D:S!B4iOk٣2Zά_R$?86'<1\UlAZ{SHCg]ͨ=UEBZ*ʢ.`w3vMmꐏ;(vyb9z|;ce#,ӇFwpw{];9N.9հda!CA4q49HNWwOu'49{ST m`nzX,ExQEV:j30%H^!Y*P*zMʘ_I]ʌfo1S&+5z}Ne\['m|9;{my3?'}޿N&7&k@yfnC>0צكBl_(>RRRwxB;\MؒuTvW2;\Erӛk^FѢc@j B@WfG:@Qq.G]290!*]oźBQy Rls$ U;1JcXΰp}q)|g?umo1 ֈٮOK`xlA+&2EbPO4:0Lދެ:,C$9"<ԄʪGoc_7Ѣ#Eh7Ro* Z%\ʔ^y1%99j9[`ŤhmCB>fZ1G10DCv!`5?w;l7qvzDE o򦢍L/U$_,)ۛdͼUy4,:65MCtf<[*Ea]*TC`(Jz*Q$ʲP'᎖rA5wmYhgD6z!hZ2 B詝ṝ;\IiwIdGEY#4L rQ@):z+}tnhg3LQo.6)\[biVQլ]vE^6fj2 K:[ x);ލ&~~_8_ lB;XyJRK *``yj](Ec)Eh)ѹjuZz8xH(ĎB1$N"6tUm M\UPK!8c۪cvc!o"{~?|N^kҺX73e}#o/*hU}dGk~zE Ku796<(YU+rXtdp+^G`#](/ZRr8G?xcGuE7 yua$i[XK^Ou&Cِax?Wfh(˂@&VW<0 Es,s=ZJmH AXGR`퇶׋1=(HQkPiϩ`TcZlv'x)/Wk~0ӳ՝-]Ca>v~}G$㈎к'$*r`{h46iA{hҍMch]P=|~ɵ˷"6Fl퐢i s'B#}tI1. ͫPG%:uI/`9j]r>`Nq R3z_ E 8¾9mHQlLL:P=+FuDRM;g|P&`V;#D C0Q6]Eӥ{5{ZH1QoT @yH:y5S,$Q\,;:J\툴Т_1?t|qdvS@"Ab< +XDޘ;D)a6bʺXg-]E$F"g[9+2&[o 3H`NU xAfʠmr(Ts6 󭟳s"8y38"XRx:ze=x6gÏevԋUx(FѦ7gmj@hm+~1R[_$=6z<o|Qy+#7͆W7?.:3?nϿebJfCv\rmkZT=Нh;!95}bshN.PFmO.{:uMZ{ǣ7)ǺOSw|2!@T x8hs|p!X XVBuC T,1xuz=oonfgK#jn'{SPSq :T#\@uH.B!ǾL=X,`Bcj5fT-hM&ᄍj/ S*vNvgǔ]Ne%?g'ȭ.2o>z*x{9p{uy.o}^[g C? +Wc; #&ubZAp FQZ9s #`(a/Bwr nʱеX&Z=eҠB4XYʦB"M&D)k5%#vuk55NwE7qv+>vwoBws*x|aMklvW840MWޑe.8qw!;Nb,Ϟ B 5ג x]CTV;ɒek 9=&gGrv~>beZwS ?&=nÃ2~۳rhK1ѝ{|K0QbEZt7,K:!ϚpD7ns˳SW.o͹X6?Tm'Dr8Hs>'K2QyQ^'pEyNNZEy+-G_G^zI=<֚ӛvoPC"CH!q(Ug $HFFn)m!ĬQd!ȃ1 %8ֱ'6. 
G2+EEyw8;<M2:;]Yǁ?]>r~ky2c!ÔѠ*ISkE[TZScYcRRܶVF#D~|E僡\ ~f(LY1~<1cI@D ZNbko_> ^k3j c1.Ox2h0@1GJQ,/WU1"--)ƬΣDNʨ4TJL!b'Vt1 lƄR2Q9NTN !ťb=S,-G 3jmJl@c {<\Ad۱u>>kUj4]ަ|?vu۝<[) y|m٢3_s-MH-m1$dXoQd}\^< dY RӖĂ/%hyt%ˏ|U_.kFe6,~q:>NV3#Şn3XCI3vGn5lV5J/dɔ|'X p&h\SX.x35˸]59ZR=bF]7WDz9ꤺWGOb}$NʨWF'3몇:ˇ0.QX~|w$֠Exeᆱ^KY2sdcI5J?Ϭ凓Q?_ \3U;dzˣxxzL߾Ƿw߽?~wo߽Iכ~{X{#AIv7?o>qgQ^iyc5r-%'~j"gk"th*3"aA Ѥv< v9aF ֿH P. %Zol. (䥉FKW((KADaDWqutr&L$/gBB& DBc׌AB:=9:CNBuh[j B1_(Vl)N|1RM2(HnCSvS&Хf>JzAT&) &=ޤxB6"l2Nb 3icBRR`R ( ^KM* 2KmF ޙs|V);=FB(d_V<·"~^]g''ӲN \^[5jʮ)zb,KN=U 7u*DR ]U9 z1bݪ!/}t>6?JgSsF _IJ<+%0p8ՑS sH/? dY5>dxYQ3Kc=jY,k|Um>g$A ) fElUS`lu-ضlY-9ϾR@DLys*+KJE;LE[]x84eZ 3Eq1HPJ1c`RiET&u mrȹEug|l{O^2kh{#dmpW>X^R~oF!X,HAҀ'\N֪ZqC(y3٦ѐ-)DkdRQ YXT\D90hTk، D)" 팇Bh, a* չMϫ[eMUqbՋ?GG'EKRc** J±J96JW5> ʦoE;nM%"U>IBJVdK0\8љsRc݌%v8+HfR[6r7b=TVMJR2jBz.'X@,"آ1c%FH Y*[5)hd \+*2A&YַiFW}ҫ`ܙѭC%j, x׎Yob$a)F *X ed*Gf{t'deִͺvBg,dQ*gB 0l&-*oغDlF񆿰2.:풇El,q\ip׮G`ePԚi>F=vޱaɽ}|<(yx,\Ou&8ugA>=Fg:I:VB|(o0QtClߊؾzf8=Ɉ֗?-^RDL:-.wP;!QZ"RQi|)A*}TA(Z[s07:J!#X[_Zc*y9㧛R \;;'MxgNUWt^.Շ} `]m }ΧGU}Bh)| ylzΞK:@(c^8ynָb^Gk,0b7=;=*1ee7O{<ǨV#ۊf4E2S_v>|cd 1,;T"uj":;^oyݣ,B&Xݛ,J4EXEY+CדENsm**J썸qU;Sq+Y|󶔽vav/|=0p+/lYl 9'--j UTLVKI>T9惹b ]b'VuNNN0& = 1n߼e_Kyzz%b/d}BFnUGMN>H*v,t6y^Ybr' .K XdhrtN$E*ZSrmۭHr4kwZ}bM;<ܟŭ Ԭ: +Eǔ@Q_` ֕6c移;lO<(w%{k GYDH24|\cXN1ReJ2MWQGnkga2ޙŒͶȮ1쑼\D&@"DI%,uoE2{M{c%۩vCC55A'`h}oe#~= !eyR\yE1G/:McM鎝E<:#yQ2Ig>Pa>. 
:X=伷ΠtèQS\0`*gC.(kmaC ,CQ![S %#()b!`S>Pha9[cg]VQ Qoϓ'.Зf\_|Yp_G>gq8`gvR+GCG^ʫv}whr:Q J΀N yQ)!֖ZP0CdJTZx[_#X)E$HR8$dڢVh/} Bh)*&5s?8bxV+ hjP*γ5vo|Ԙf"ߌѫH*33R9@`2+;ѧTJ*Pm¼s(4JÚ cS)8P5-ggoJ=~Ji[9X9oq|U6곽ǩo鍔-7>44k:t.,]_&|+}>J@yTWj+b97|a;'2}4TdDea :8Xvf0՗nEWM8gV;Q阼Ɛ0K2t{WF/m 82 = c]d+ٙx߯-ɲ-K;qb5EUŧŪ̵2rXSVr3@\OWIw`+X_'PtQ;|2GYHs-8A; LU ixtY]T{T[ARPJ5RJDžG%щHR{mlRsPr5Fe*FRDV!4gn[24tuGC?&ntR̞O}ne |hSuVby~ >:'I!|řppޕ+q h8٭+qW'?^ (KCLAH IyS J`C]FlМW30{)kQrrVէ1O`\w-&A VFgTĎD-ZJ&UU;_UU & I{ɺZ ѷCvwÖwOPYEԜ@2vp z1b6)RLkŢM%H9g9M8Hn"* #r!X)E<ڕRI(dɱND9]>?}:]ZFr#Y$'C fC{a0vT[V9vuԄ|ZN$#ؖy/4ɑJٌi NDRq> zI VB`-OǵN JelMm̊4B}jXmUsj^\~^(%OiO&+]*$%~K2?wgMYDS'뻣u/dc!Y|_RTp C8m?\2qrN>Y#%O&x7'+R'8po05b1GW*h2#s&h\N.}M7nx6IuV8uF}ҝ~ƛޥHݢ~mof-z=`8gvӫ$nՊG4k'i-jʋۏ;s5͋~]u5O^_O?x̖ce ?̃fm]'Qo8|}U2#/ocFj.I#]Fa mOe.>nf =ѿZ79Xu6Uz>ꤜWLF篣q,]_O7ʽ6N e:犨VʏX21r ↿%9i^}Gk4_4U^vQSonT q7Sǽؿ<'!)ɳ۳gopwg޿}O^0 H$8|3 Cx%mVmzیkۜqz\--NjTAdGV*Ou|?g,_k=2?lGl$9D`vH8 jXIp+R*mX${ )3;IOmi09^yFFN'zkdd3\{L*)Gf 2h`PPC*uOo) *wWh[Yett Mӹ|4V/rJ&y&F 5`)Z Q=ۿiB},. [.@6{Nt nrՅh! 
Ҥsf*s0$e)IHJx)dgu2v@26&8C~{g%hk~hԝ^ŔCOxU}MN (2\6߹*|NQ(e@*]FS8~KW|ۼ+خ+r?'ϿsYl;'@rvDq"`&Nkĉj:8R1ʼn84@q"z|U I Kb4AJYE@<'묿AW@kh=G1΀6&)e$PAu[=*,ASP\}+\dK/>'dn5r{.}w]z "+VԺKi{ZdC6Rլ;ܺ}oo|jlB07{Xv }4ǔJtv*%í׶0(I1lpIJQV0K!2Y'0 gW=H䐘 ̵h-}cu9}gଘM+ ˫Ys;=y܏juҦ1_ڷ>/>el6N)pCfsƐN2ڱR)ta';)=(EF9JLh8+f`dT$]RQ@23Э*nU&,fx*1 JdS 5+c%[#gRwaA>ݳU^NLjnPw{Z_MQ\NX3~mq?=5I`mB̂[+H ΔǨ"#CFq!KЭE riv4h|u:Rtir);e]%u26[PsYFs.&!Q'ZzvS𺏃T7tϚ襞 zi dhh 3 Pj up1\rhO* }-w  Dw-~L,g{"[Km("FvA!haeS1bTs^Yó(RH( Fi(mdAxwC҃r_Cp["o8&L7M^ʛa^i*_0b0 c6D'mr0g:cqTԖi4 ŸXRwFm!0I&PfY;8Rhurr4?xn;'߆$r9lB)`u ]Ȗ3s ) YI >{ r6,s{3;lɹ P<] C@%Ig̕j"ʄkm=ǒ!``Q):׊+su7fq׶]\x/s3ʥotWt Tzg8gO NT~@J ^j3,TQL?,.qoj=dG)yڿADe}#-Y:gA#nvȽccB19n3vKsHR(IJf\VA6J(X;Y`ae2 Z#g3ɦB+vS ]cq#ְ< =7#JIs&Af!WcIY5d*v 2_NLk?A#[4C3hjc6WG_uYb6tVێr.خhHtD@1^ .rvR1|R M2ZA!ylU]\El^MaMͽ{j]Je+t7^L:^&<}stJ4L RL%)R.Lk|FTN-ʹ0vq :yQ*t%/2D}eZԋP?or[l8\(94ygG+_]uM>vEۓ#h'K/4~P6f m !]ނJ-+pU%>C,KhUDž+|쯜ΊG<Ê쵲W#53éw3~&DvdQq,F ZB&h2d\VRIv`O}<ׂ!]/jrlX)7ʱ$Mwv>ya!cB[KYd#muQ#A~BWt3Y-y r;` P6b4bF҈UJ#Vi*X4bF,Qn*͡l*X4bF҈UJ#Vi*X5l*X4bF҈UJ#Vi*j7ۈUJ#ViYF҈UJ.6bF҈UJ#Vi*iL2|rb{ XK*.ȧ%Vh5bi{Fr \Uަ2ͤ Ryn1t6rAPYt,3'h/}Yq'W+OVVgXoI?(3Ĝwg&ȭL+Ng 6䝂Ҭ콄tEê>73vIU2f0"x6C3ՆG%Г[ZrW7:_.3|KrHeLW޺Oiif_?%ٖ/}Tnɹ)FDXȍI3EsdR $CU9|&dQKTj}(zipӳJNNƕOpP($(˭PA'EfZd!7}[Xv)e9q0d d%+ aƔ j*<*Yg^Ԭ]J_?.6 vY\yh+1(/RDAF`4Jz TO/Ez; F$LinǢ,aMme&G *ՠV5'5?i]*K1ht cCŢ.eVz獃>3J1IӏWPM&cXd?j"rjʷzD'yʄXUs&A%bA] /J͜Sԣz,IW&!j[ ~5@/I~CAw'@-Zn]qxqlؔet =6tJ]LsTȝuSJdEk!搾%̏Ϭ~u,UPuX R"${Φos2%KN|I $agA ڨFI;d*MLygnhw Pru (MZ9}{G.r+ J mFKZVpliyUBmOO׿+U &pӜ CKV8zZp`9Mg1J%YHޣBȭF)V^݊쌫B dC`RfK(,)%VNx i(]QGdKF6^Xm?zذ472jɿccx#d]qfB\8B*y$Yu"g'sdeS*;Go/ġϦw^#یc98k(vـoT+_M)Met.R]FԩWFQj*k1+7Ūd@iM5YS}`X+iKQ\;pAM?\-|n[qާً6n >`nP4Dx9M尠TH`Fя,4c|ųi| `>ƣT3ug܄iD,Ϝ˩H3oP䊮`oa̓/4^3n/y̩^)9nO\i%Z$&5DvϽZͬR՝%ܕZcp1 Ͽ3ZJ|bea\@BcTNWm|GtCx[`z|܄I]ੋK!eS-ɝL'DŽeݘ>HB]J\%Z8mСR%P֐壱}cw^/ f+1JpKFh<]oqWi-0ֵf+Zu2{,#["ژF? 
S,xtu??Úݺa1 ~&V+N8i,}6m&i0΃6NUZI[V|Q|3Ro~+:g/_SqCWtFe~8-T*nlUchx7D$!&y7?~_}?Nj{^o{Kv }?p>Y׿l;siZoDfiفout]%زGdGַzl­OoVk}IqJMd0Od 9 ERa%2sF*+Վ\!%5yd!9BwTqUKj8BZ"9O6]DWU}DJ |RNԧR@ر(Pj |HvKpp3J#!X Lefv(2L?C<L~t2Uxh=eE\d&yV3ZD@'tJXsDU5l3-jXY)!23$&!"I:j5p6skja^)@3$-5OHHOɒ;Bf -!IoϩSP (`*Ds@ gg%+!舚'4(ahڿBB}jYGEK% 1PL^xĜ"EMv.[*rU_ʝ;<ҰC.b9j2ЂUDrDN;JxpBhDd=ݦipJ/=}u׶UNƠ{Lދb(Sqޖmr*m/ cDZ|;TWoGm;>|HZ%vBq"`''RTDẔlj(D8W|T0KCh  7B(# L H{(z:σTPeL&D)eT2D%G'Ce`ƆTUnRr)i2mJUcp[,Fk2_\JGZ6~ 8[t 6ì?6~8r~x<ճ>~ń+.]7؍ލv΢ή Snܵs^Otlk ׂ6)[Ks޶~2{7䢒vTohtCt8G .l(U6OF64aӮ;Of~y򙲲K#xfn=o$mo-|_0qtvH]vƃ@C4|fZ(_֌;֛ɔ  chזwt92概^Rr֤=iY5xjWh5j;wТ{srO [& -\6iT)S4j13nZ˨5H )K>&e4VЙ 12:lΖ[Bss;㾽^bN,uXf&7wqRna OCẃ,К$f#YiLPhs GB!娲 9kmB&yRia9NI&HO:"#-HgGL!B 1)HEGK^DF=?#"X4QCp(&>K( f!>YI^ }:=ŝt{EU4Xsa1fJJRs'4ESWUcO|sy`^I*Mcs| S2^H!$nnHmT0x`f1֎*xd4&nά2f]v&sI̽XGN_S[RZcm P ۮz\r4}U@&fR}`4ͼIHmL& If(e^r9d$TeUru% U+ig sr%/1W}-p6SJ9RùW_v]:Ŵm^žJ}5ܪ{p WZWrKE:+g|]_cgn7o"O,(xZ^:uȜp,DoJ+  sȥ 0>YQ;ꮪfiH=DنD+I19x u$"pU0 jl&~n6k:Tjxr{m5]N*4*Wls32C<*EjOTHb l,n0n7wa =CYrA' L R#R;E͵t+pTmX ͜q,f ye^{^xr%Bw#lpˮmI9YVt@㮮αs `bTDAآQeäLFK W*+K[(^`{!EaRkFh2f69̂Q̱xcհcW-*sms^]I೒%s2i2BH?EHK6.x$Y$$ 9d2kbbA$0hGD*@Ju2 8aKYx4cW(+sDsĞ# D3hPzQfY]7"gL 8)ŌfD.|GN\Ҥ |89D_̋2_/|q-GHZe>8ùHg2!DzB༲狇GkaǮPU;z> PyvZM#7ZO ~qm4 rsT\J}g;wJ Ir)AWl( }KHH#pbKI0^aƛB#Y,F{Wȃ׹-.b!1TKNZ ~0aBl6.Cj@d%+{,S.`JK|r]iGqɥĽRV{AnGD)R= m*EU#N3cDH2rW^v֞Gh;bUʦ71Zc}#y1b%Sͱ2g!H:I%a8[: "YyN潆N<5۫NZq}C,KtTjUV|{D $J:*!K6yd[yao~Pi* cq={]**KSR3K< KJRόl%e[5rH'NKg`Í.nWJtnsGh|1r:.g >{jl&Nx£Q3^̹Qă'5 ^ dѧsLI&Avۊ.WYMhȊrGSO叹b9en;1`64mߤ?#ں!OeXLUzVdF̊ԩ1* m,rV* ug缎8 ^Q^nn%yXf~77ti#ý6^R[R=!/(Er"\\b"\Z=U4p}.{cY|e(IC-w* w+WixtYw작'4}>`l=,gD'Wxb66Z9}(M2#aFcHМA{\H"J1u(ib.+n5t,,BWXcmG Z#(iꚆ~%񂒏%UsV-}zZbU{\sY! 
SYd:_ ļddBURͺ6* RyWe{`!pǍBBV\Ael "YGIgιD8j iV4O> xT'5Njz祍2 p5|AЯ(t :8HDR.o'WN:h]G,PԶe7wH "[Km("A )_ʦb*ZgQJ$ Fi(mdAxwU=b8zy؎i#1ͱ'8-_zGޟ5aCR"TlXb2>~~i2o8 W?c4\tDåқU޿e88Vxu7/x%G"$O&e&>(cO~YG4G???xz|x:m^$No eb3:xJ?Ej>+̽38<Ήټ^c?N`Mb}{9'a<="Gns4B8lov:ԓ,楛L11YY> <9ח`rPHײĀf4^sa~qE[RkkZN( )lܱw8wYf),Y:mg ?} +|F&O/Zmhx23dƉm F 4wysKK,ea^a=2XSƣ[^-$=R)L_W92^O n{MCߔ;*zel[81bI.I垥 g-Ti*a<2 Ko3;v EK+x4l4ccNW 0.vSw9Im+:-GB;&1i7]Xg 6Y=EjrB++z`=gN цSsikQz}jErL:@L(%-gNR|6smΝlϒuKi)bB P<}, Q9,Jl˃ L{Ϙ3F1egeBɵ# Sz(1W{ulkqW=![ƅ{^pi CT)Z)?OclLVrIf%Nd"NZOft-U̯SymP&/0]}#'=O{ Q3p3¡p SDsͣƤMRy9d-J}^fP U.qHJ!93!u.$N!2(ǂddnF.^'1d6cv>"ʩ\O ,c>AOu@usZgb:LxUhi^ADaŴ-rRjX5ڵb:9fv <P;:+WjV)|EƵR0QHKlc?jJw;{h˽wgvGP; S#+d[IQ:b& ikZ!QYZ]3!R3g-\ JRI(ddɱZvFΎ{|GmrhJnZNi[]^%mwܾDK}rWrs|Sh,zfMN*r)X*%W2yK𑱻mNT˲hf]p$jIAXmEI ˹%@OF"V#{AcPA#\}2a݅F0TAdhA%lri#Cz, $ia&9B`0+:]۟fk)aSǒ"%$p",u{YŚH%lFôP@'H }rTcY<[鮞톓7~;W k\IW:kZF4/cqMfyGBp4=EOrgWkm7͏Qniզ{U@W+d.$vJը*mޚhgц2-1En<‘5Fߢ+=¸X2)^Us'P4&GZ؇QY7ulU10ثF=gk˃V;k3-tϴ|W閇IdL_qck>d(z }61 SF"FфGdQMTS2yb2{1%[T3Rj%=(X,gl4 ١/CE@&c!*OM.ZTH} X-A5A0qzuq~92(2'璐' VSML!Se=jCC%\]ؒ4oGU z̖r@njӲ~)EԳz^_2Pb?o>m} /p_g?[e,h=-wZ.uz̽6bwVV{[Yhm.3ڄVp˼y޺yi:s>7Oq'ޯ0~z?.~maL9ON!D1ZgZ1r3{pܙ.2]2](:/&LFgsf׸\8y$lcsUjZkXkId0,)ECTmPJ ͠iI"RҲOaݰ`b5a2Yl\> o9歭8s2m9m1l⚝-^»_zqGap(rxn۶SuU [g݌gO˳ mڧ˨e7=_#f\Ι|nn0t^n!npgw;waV5on]wJOeubH;C,ens(0D+u+wR~]B2Q3nb_a`B qw3sWxb/Qp T6*!m \zk[/r*!ݒMX'L6kTNJT(뛱[![.3H8gG~$O8&nOԕ{5#gzF+ƹcdOLɢn}Ȟl,R}˵AHNXI"CCʢ3֠TV+p} @VR]Na&v 3Rc<8L?8#gICuJL sųFlĦā9Zgc2-֬fOI!%~lRn\a%mϤ2S՛UU&m)Cq8=:_);X0-/`H_F/sM09cs gr&r"X=?՗./_<8J;n?tpamwU) nN>P6bo?_\wV%8R/':=s'#RO'%iJM|Dsۮ6ߏø=+-w6>Of sCtA5]Um%MKD3Iƨ s)c#㕸y"-dLdܔc͎j%v ׵Ƨ+̋lhs1a mkO{j/39 >^G?]gjS]MB6()/6XB@9މɄdՒQE!#y4&v>Q %<9mMSoο-­s~~~6| \,9XfWNCŤh\MDRJk! 
lCwރ^珫Y\y-yfgZ7+|;4ҝfptqپ>]菽J}8~>#ys)yˋ6 ,ϙl2h7hIyET"Np#yd֑l +O،5K{F_=^>@ a?\YI޾_:z!,(X/|{_*[1`гg/|qZAzc+N@\WǿoW^r|@iUyh8KD-ӧ4Vw6Q8%>@Sm gB>Zz`- SpkF=WB@$TLPjӭG S[I3IZ{tfG w=whdtДcKQvXgr?*83;U]ܷ1Kwsr ):?:Qo{*R[K6%#1J6GeĀdZ%{#e:Qt2};s 0^ Xq3,q9$H?)MZJHlO:썑HBH!{\8^K+br`+EcBrH.R١9#J$?A禒_']SvI~w~U"nsbÄ~h>B~^>~(^hfȑM}|+BE\`3&][F_J2[0˝\wg}̵7-kYͨPD /}ƭNj5R#A 4`F4mdBT/m qO>0qv]|fPOU-b11F*CK%esr(qv¡Ђg8y7H^w0TRېoPszJјְ)QDꅠѹj ZtI3O5ejcJB:elJmbJ1ȋmqWűZ{Z~~= oG;EDzz]amٕl{KSt 2 6."}Sc}4sȱQMmXk/I'q6Y}[:FO貲D1] ) Dn@sö>^/G=٨H,9& ^z/8X~>Z^S6~+w])4 i5FU_뵸׸w8à# GGtri|yf%IJ@'Wxgi{WL;Pzc8cvdrౙ$=Naԫ_^zepKn2YK+! &zbSU`gN..ƷRڥutũ]JsqoXJZkӟ%!իS &DLISҦZ_Z{6/YP}؉YΗ \_$u E#ǰ-EJHz:W_Y/kѤ ^n&U0^huR |gMI"tU3{ka)>Iu.bN?GMgRFѳFy"ynw H:0$O';d GYtf9BULkҹ4)v/w/,.j'نLE܎9$QDŽ p*ZrC#޻4@M<fzrS_N_kVJ,=|kbh 3ZJ7wr3ӛӖs Cl#B/߼}]6w"POa}Q <>E_t`H}ٵu.Q0klNcӽq^#sa2m,vDS< zNSI *C!;%C0!S_j;cƌL"^ˈ&­IV{;n/~yTh+ݣr~Yvi8))E/WJ^.EZ^yÄ+WDKb1S_wL]»O=-9[soat+[A[LP͆⎖ij10b s]?Ϋۧ,wLsw1YN#sغێwO^+;̼1r;>̧-5Og O["SX{ZYq?j6oɹ_aP犜aP! ts*գkBMT>N Zv\!P_agRCvqajXFݣaਛN  7h<jAH=Y0J<)c"R;gGCEey)}Ն>&cb,gfv#&==~ϓ=x~ڕh\FfFbT@##QJ YF.[g1|M!:mvhA`wc=fHjQL5N 7,Rg2DOy%LIY,-5;Jaظ3;g:v 9i3 ;`%JC)!1s+P b<3/|zZYOy8ÌS>Jc1ic\jvE H6Fk<{85+sN8O׍$?|׳9ǣztԂARW~DF)4R*pi@xG x~#Q'ۉhB]I On݁H^ꂉ'H Ul+6~=*c0j[|E"}<ՠ˥Fc=}GH{"qUF2RXed'I@b]Nʯr@qT\>gƳ+b4"tw0^sB/& mA_h@J3SI†5zE[xfY:Ҳ2mR5hl:,%52I:XՏ['"K7Zf+Ak]Bf p˼ 4,筝%dFYEYf5DFI@$H@GN.ǎυg F:qu4D{&,HLHxM= "2aۣjLc纚v.?/9>pB EBF8Vyd qA D-!#=Qm7= |j̹W1oP#1"ȭP; ;Őv,Rd(o˰\%4sxC=?OVVIlyӮ,0L%O2ͫe!Y`Kt@!pf/_Y(bkd}-Reh6 dBp``RP"`x>ϚJ[tiC(@ZX.*.*^/nTQ P,FSXXZNއlfJ&R(_z kxQNQSi,y%p<3>|Ɲ]veOLcizC`v 0B5ZEOϪz}FUnWqouPӒ? VYWOesЋrT1}fmjEfFʜ&@"dӘ%q8V&Ո&;Q9-)g1^xD4d_38U_xz_oBUm٪L⸨gn<;"0w_]ѯzwGo^ {Y@ |n,/\k4rﲮr-~}c;% Q]rĬA~A7UiӲ*IUI! ft )83L̉ ,,!&x-֤W>.yI>šG% QD=JR@F hM3 jr AF.NN:GNpQxvֿC\;Go_{[Cm:."WKDAݡ!L~4= T< ZaW0er}?kX`b 9ʵ9\F (cE`#GSÏ(wSjJ̒ 0AsF`h&8! 
N4H:CQ{0rHRIp:ERS-yA)@qZIDp:jgܭ6֠t2v}XQ0Ssk`F8ŤcAS"#2` kB5Q4 Io.HSP _OA<H%FcR+˘5!O|~"0Ze;zݲ8, :qɄnUD%*!јI&rw`PMW}ᑆ"kE( (pE cK-K'^X{M)4(4΢ܫxEBE0G8Z˾QM=}|I^j$95qJhuKd_c\@UT(u9:'^,VɃ 0q<d*n5{|oJ47AÎs?wVD>^9xf|$1 0=ww tFjOV[_c{lCcdQҲɒx  !baq؄lJO؛E5J Κ?Uu\w{}x?N#a9^~D5^O"^c@'fg S&h$ÔBD̼HL3"iB8ZJ.h('Ήc#cc )s5aHf;&Ag7hHek!lDɮ5f"cj=1upt8Gʽ&{N4>>|,X.eJ [+`њR9S D]am!Bнx ]gj@E/y=[r寪${ɘP9%e簢ʇqRi zm.+ruumSajGp^ ]Wc')̴鹹Y;:(|z ʇ=jDc\5m *U2cJYa bZ&-xUrNC-cjUdzňXNMކ,Ֆ)PmdmjjF[a9 Vcml b JsG;#__ܸg?=`'-vk.1_Q 43f6rO͒i˝Z Ɔo=xf]7>#rMM6d[fJa9l|UP̹XaV n ÆCUTR q6g:V*@9X0"b¨XI"C ʢ)щ8J/'B*(DτsnH}< JW'sc-lbo' $,%Ո%nj 12[KQܨ86q+5&&Pn,|w%k׺ADʤ$OBq?9l~}լvq=mkNavEXb8Bh-ZWg 3As^M|yOagcG0'0aFϵyT0Mu %eс2ڧSj2 )`Ă4q$y()byb(ي`^N( b'Iޠ^jT89C^Dq H~lL-* b,YosVa9MU1QƎ?O1.сJeLcݑ&W'e}8 *ԯg*tBɯH l]̨")]\HJ6܋t)c\ER0zOG!trϿyY&*{GS%Mr [r9QT">#k܀MfιZm-`@rb4Kj3w^S.$@Q6UTF&a.6 0_ MP̑jrn4)}K4 }/Y]MTs¨k PfF(ݒ7DZ#f|c\{(S|c^.>'aoٍBi!B4]]^8Rsla7RLQssS0Ovл'W}^mrw}dU71bq3эWb4IX^Fcm>ۅv݋>= t[ǷwOucYޝw_}(wW:_W oC9(^էՎ.=z=l^_I5.Hu3 7#gPwcG!E N:lޭtT/OtONmsuG !Oa MObMPL8pVz$`v.׋kY$R(ŇR)DTxNf4_11V8k{kw=w&6 W_C~}w}.Ho#v ާW3uqSz-L]ZsfR3L픜gz;SWo'TLZd3xjƛf&OWl;̮0Qaj?0wY;  ͸BM )6,06juDq4C6LT"`Hݯ8x/9KN [8-O]g8GU: I3u9i*.ItO9`wf f{wD$*J_3ǤGf7ۉ2%MGKT9ڏx̡ԸºTZcŢ4wڦZ= ^}^D}st#t M-DOλՅ,WL5p١{Jq- l(0FV1Xφsm7fzaBϡKǨ@.hշ lZ K`'S_-{gH"zâ#U[Y[C}~oFXqbKLHB$L5{k[MRvRCX0UN[w#gT 5T t 0{(I3@.ĉVqS-ޕ1c-W/RnؚxTA/hw;oA(@2$Qkg=kZy).3{Sb5-_mGDe|)Sϭ=9Öܔ4Eңl XlQ%DVN]Q}4D`lV}jOt~^2:d nž`H=wƍG俌s.&u~y!]N!MX>ߡ92l`By B6RYrĎ{3^*(̢{T[@}PJbRRJKQ#cq5[ 28,܇`W)$մ ll"@(6#bi9K1FW&ahzgH8Sv=V|68)ݨ.묌1AtR1_*$n6/~kJ5)+"BE^/W6o Aؗ秧%m'|AI jsĐ&L 4I~Ivv\2̈K%IcG[F;ֳ`D1 grTn} %CPz[L˕bWK)e2CZӪQ!qA5bP`z;L~^=]ow ">8&T`'G+!ԫ.5o`m9dS.`E23r]HɒuTIx\ Y*.VI簮 GSV'΄VAE;ʖMzjDLF -z`SL7&tBisb1M,$bQ]5{b1,ՠԈZvRjqh^R>Z6 cZ[Ec0ǮQAm.bT-rPSHJl,8ѓfѱإ^'|A~/k šЗ]hZq-/sza @x>wnHr3''%m-7j_HzBR}zln)CK++~w] !-puT/gZ p`K (BGSjYN-aKLÐC$Bv `h(TH5^5E\X7B}1).DH$ "z뚩5fjLٓ¼ >{&Αk<}Dӫo=;dEŷ۸7omG{׏6jG o+9݅9;~nO}~qsgz^ɋ;VȽ'83 @h}OKП:|>IGoG^j}`1u} u#j=~}><6Jܿ.wsu=W8P~lkt#ۥys7C=zLW~ 
b*Oc_>r[Պ7]?{ƍʔ_vsʤplR\*v6J5Ec}^$HI#U%40Fw()J &FF1XϹ!52m'D"$^>'`Ct$z̐qQ QL5N jKp3"اHXKI IXZPkwDðqmC?Fn0k2s|^9hliꮡub_3EvI5gRdfC/R> ;%JCKI@ 9+P bo ,tp\}^D/`qTda8j'Esp;)Lg,:,.>Fv8 rcxn8 >l>cN1:85Xr˛2쵁H?K&{̌ $8G8;ÓSBX-x*;3>#.s"0b2RV=nl# o46+VVfk]ϛz^~1V~+LkKW=(\EaO_'n8?΃~mPԶh yB&DYѿѼ?\eִƪ4΋g*B3/2ue \'ٔCƂ5[ŶfڵodFKyHVx2Wߔ{p4f>2&UtHflrϴflR;hLR]^D7,^9{`,;6~JK62/vd.s5_ '"pw-8n>xV]eO=Z"N(1<:룋*2JP5Tqe`;};7 1%_^iKq-Ċ}:ST0**4 Q[m7!-C)>P̹48 U"Xzā.aq1kolxcصk {+ݽtBz˪GsPJ@ס:$σD=|1̞3ٔSR>Y!{s2)YUP5āj+%qT"\`"S;<adP*r#R6PkUR1YhU"bK(FT{9A [Vq[#g ,ɴwZU[Ϧc6Oᰳ&V,]<J5( LYbb]Ϥr)Dȕ`x+=DCl2QE\"Wjx6JhIQG qz Gܔ5rg.˭l;Uӻa^\a+ԡcѮG[V6{ u'@ DJi .DpRQͥHŠ@:2&LIX8FҎYJZ9b8zZ~k~R}1oc+S`aśWsZ&ɷW`JO!<;.pK/E츸bH&W<9X+0)")JA)opnGy?tzVDآN[lB iatH\*^/a݆$[Os^grRJ4Bf}D4_C)@S$pgx788KP XHކZ!yjpIM 0Eëdk/) xa0<` K-gdt*HhwʥA%3wJfڼ2ݽ9F1ʤS <ᖠv],_֊9v~^ cC:.X<õh9ÂݼFX4,z&^L|ɀܛOFHvT1a{y}bzU(:IcVA)fm"ݘMmfфXEpUz_hL=0}{ZG))LNMm1y7˨lo;ry ,gb9{[݋2 &ʩ&'%}4A\KP{VCj!ĺ0q5~Dq|4*KcWZ]\e*U'Bq#xŠI:"9rS3'')a5A0MN㙜$R`{Fz׳`GJþ> bƝ;uy%H I{Hw7sjuxW3gwfO{ x13W[Z8$t1 ^{]6[![|דkC}a1k=bz6!ٮqt @2CBA A c%ic QXqW1&B`QQREdFF1XϹOXװ]42m'D"$^ظ ` !:mvhA`wc=fHj(`1*^Buک4>v+oRm6jvu5DR"R?#fc iÁ(0t3-|'zEO'y8ÌS>JM TLrX*g@!؅yv=$O+'ƍߟ_*u"h8̒ztԂ ARQ<XH#A&X7|@>oMբa~YEԭ-M`?-Km񴁫F׻U.!҉ۋ6uUoxP )!5ԯSrh{ۋq` ?#݃GCqpڸ$0.pń"$e[ ݯzDqDlZ#{Gf5=UU0g׈xn|iJMIE.Ǿ1/+X+3 “^uU#ŨM0pOΉuOY|4LJ%'|b!:ZTF"UW ;ӕT_e)L);u5ݶIZ^?kJW\]$Km:])r4}V_r2~Z<ƶ*Qc nqR|f DnCshY+5w#q+:G:ag 8m>Օ=a<.ѾkGHПOt1uI{oڏOA/Ez-/ښ'7^M=-^&⯓opP9$$漾빹 S-rLrʝ7ـsg CEb_RYt rϢ> ٕ㴂JQhd+^^|>d//a?t;;0 y3EwC3c[-\Fmn.<0I tLV-0_˛5~{Ok6iMɵ-q@->wYn8!rk䦙vS4l_2(ɒRTi%FGg}tQEF2P%sC&O }޳!e e6˰ (~׎W$ GA%1i!"N%bΥӀE1&(ā./{UC>oTC6m{k^W=bl􊣔0wIRR#)])n6!Jg9Fp)(bU;iRW@Iyʙ3(g8D"E~9v|7{p!ׅϳ }zfuB_ڰtl>{7hmTF-#fW^eI^ݘ/xeGu)Ze<7u3n7cڝ;hݛn5_o](k<03#[nNvw82Z)4(Nΐ^*cRp0?UPXѼیe꒯CC!ݼs3H mRÔQBb{+g+T+abH|6\M7,0 u/>b ۰Ne XC#k9x(E0'V$eJ lOfnRKVPܥ'%'0N(@P̶HNJ1ϐA .8pA .Tqޣ8*mfgO9: 9.R ~Ӟpuw~B-@ E5/ ;)‹ V'1,[7{TbQ75 8 sR DoP4x<=^epa/Gc_G(bŊ#b_Ԣa@ L!X~0ERLݑù_ϏX͹]tbJ=K Ʌ% mU T&}8]Sru? 
WŅHuEڞ5ud9T%0FVTmu #Rms ,a"L/O!F \yUx~8ki!FhvhǣgR}=7Dd:2)~3PHuλ!H9Mfyq2ӴtU|:_=st>Zp{ !FzVFQi7#2>Ϊx :ERzcMuݽieRU~4w)h) Kꝁ䠩 ~Aֽ G5]Qp?GMӅ]仗/2|ݿ_oa޽Ż:x/`:"Iy?/~~xϡavh6COL6ʜ1WMyFqݹT"i#f#A389s:I oI! fat`)83L̉,XXC%L(Z$}hC]J|~Sġ=G iQsa<s߾QL]Zy蔺Ʈ&Ktg)wϔф35`n%\FT (cEe4ўg4%%LI_]&تc cG4B-D  Np1PF}E !I%dgH e;Q!(HN( U2Ble#g׺Wu1!OA&*pIǂDF pyiM+ L28H GRz1aD`-JL,c^x҄`=񩧁Dhul u5T8 :q)&eUD * ͤјI&rg}I*w!y,}6x-#- x.ԃ5yc#G5`k U cXjыt@Hm'g)Oc.*5X@UE\VATˑXn3_uӅ=yl(*>I뒅ZPOo$WZtsdqґh_y?}^V$ݣE׳Hݐ+w\?,s)ȥf:5 m]ڦsr Df/pU_f\6-yI.Ley&Uq۔|}w\Taئ[nl|Vݩl0oYOs9XB\EŞcGºlAʕ4J`Dm(Ō`h̪{PV/b'TY-&x#32x5sf"(ܚ19[$fUl.ęu!t;.{S..e~>Yuۜ3x=ǣ/_t kD cQ6cfrAMNS& DH9\UcSE#iHΞQk %ImR!`V0/2_t`ZD"rRe٬to}ٸcWM2km2h7?`J0PXhJ'&M}Cg̥}0C`#8V ,Cv4H5> 'H 0C*D:Hc2llևSvL7sǮfֈtЈFe )$s %Td1*l6Z%҄m1Vbt43+ 7FzHLzz ",ir&LR#1svjQ٬?xlOl\^d"xO"AuFaĘ88ĸ I(IO<ϵ!boa.UpNPaƃWE8ȭ~7(-~LZDb` f%Ox^rÜ%Q|KJYǝ&;02^i{?Y>cҤ>QAuF#?" MSZ##M9DŽRnȹQj Ƹ#aREp$OF{%9 U-ԧl7;/VWwGr}<ř=lFFGVvEp0}ae (V{Χ. csRƒ`gJ1/DVꉱfЁ %P?{ (W SQI2ĵ2}r"F$ ^˜(m6f #z!Uxd!R&R/A ` Xy2&"BFΖ5M?]}b }|1}hÖMrRfΝ/⬾Y󾴩M BR A)S )7F;{]k8HLcblUMUny7]knc,}OQ׻| RЛAךK hj%r~,ZBDk%*j}Z5Բ\0cbv1aKe$"_jY"MC8(;.֙ Car{*͉8~d:|P"&Oǡ5kn#ױŵϷIpiwO[[[t\q\$3,;mٲMKvWM2N7- #*n`j"wTj TJheZޮ̩U)5*}i <+Kx.ICs`SEG)ZPՙZgkyV50fS}MX[BYwyҲEvظ?zm,o?SQ=bTO497uΝ:HkC4(9ġ& L9Pi54)=)=w~8a T l`+'bkK$Wr 5FMeDRe}2}ZCG%eBAx*]4)XOMJ e ui(夊[BS>rzrKA4yFZmr5 d_+H]Eեj3ks?[n7՜%wgIeyŅaJZ,z%kY5xr'ľԠz@QLUIy9/(Qʥ86 *stnhgKlgǏ)KjeVjDMh\uRd*]&1jA̅, Gkfi?ދz>,ʵ\\*d[ǮȴzX(ys$ycfՍj9N=vM#Pr1Uct444JS.PΨyYΫ+}\^}Kn:[+ٯ/uyO3VDzZqה3drBV¹ ;^Ŷ7 Bߊvu^?t"c3# c--5"Xp$DY9fr+2XID$BggdlTiKCwA.cMt{9K2, 7U-'PˑK"Qnd%X#MdidN 8MZ&2x1YuٻÑR+"c/{).BT"00DK!j>BBBBܺQKMa󖔗[  t>Rt .ŲY Ml}jy:Ws<+f^=RVmK.Oz4'VDO9{~ WuI8s XiX zITV{`U,rS&h&G%GmZC[)j!+rF3Z5VQ6LTȘxO<\+DF+k D4Tr" aHwir$x{R0hs7=Lwdg?Kp ο>Ыo4=W`RЉC fSE*e]nJdh;)\k@4xf4ˠaKV'6f0Wb7(ʒ儴-SyI/;D1TԨ$\1:W'm5,",r^}JQҹkɘ]|=];y͑S-S>ةcqK;. 
uDdɾo sM|j#-dTɦD$L"#vCAeVh!%KU{RgXٌ*[S 6jqQ9 sRQXr^Ȝ-y{:t*1U cx_egQ1v0p P05{ee [r P %uyfr aD?##t.Dv(D< ޔ[1)>Y vJ'oh#0*( 1J6:,05zlڒJ6M3C 2kMI¼jn\x)J8,WIH΢Μ [J<{JciR{No_S^S)}#&d2?Gg) UtMXjhF@kH|;/ai&P֤&y$ h:A&5d(-&Y;"FW]ʡ3K(_lE=B;ren?Z+uR++ ' 3J8#i*KlVĤn7k 7Y,ցK\8 θR|6MN.)k{-q<;39gyeVCfY^Ơqgu&δ 51-Y^ *ͼӽAH%x2V9RwȠԝ0(  KC0gĄ%M>{NTpD56'FjZ:%tfRyoAD. +훴-yAy'dw}~喧a0090BɔlVy<(t0fe6IIsqK,5(ҀlC8[BWL_II4rɍ)=2z -+jkܯSIYM"/`8"!䪎ܰeT#\|w19Gk} 'LxcQ5Sq1]dӼIˍּ*Dbuխ"2:Hʘ9 AO %z$;)*PH2Tؚ8mmZB /K. 3Pe9Y/S3o.xb5#3`B)%46-SIqfYZ"n"67< b칈{ lr^FI`F/ U9̂6-#vk܏~<+&fWPѱ-jc˨=j E8 ١Jϒ#9s޲F^#kqR(E!B%lf W"DYс'MX 8$ 2@FuҮ=Ĺk~Qcg 5>ED2"{Dk$ ba6!訌2YlpXh=AEAƬj72`LiAqe4 TRy"qɒF ~|{Ĺ?M=ⴚ+W:[%hE=.9#&I)c΀0h! IHI,C4eKk{\| \ 5:C2nY lY5lC}ѫ,s䃊t~| k7~'9}QJTҧwTiaVc#$`!sĶǪ pt!CrdVZ"Q#j!ŘGƅώf@.%i#:!󰎮9d2 :Sd}1t:Ogp KpSVdQI(GUlB+%˭7![ZUb"sb$ 'DԚ$EH8ڃ) p[VĹ_k7k Ŗηבvp<r{eG_?NT^2sl0Xo,0]vȭmv]|e2Ŭ(Ѵrg/l$O҉`UJ8+4?/J?anҚGwYM(vҚkfdq2)zVk7YIqá~.:kŧeB}hՆ![Ko6tM'/Eh:dShw^ǃw͡yztC-Sgv9M=޽4j&е.wӏ4WRԋ˰TgOæUX뽁"eW$-Gu*Rp&zOmUת}+VuҌ~@2LJO (vz2#" 𽁫"UU0i7 =3\}yۤyۤ[o+jO{W$D7pUr_H+YHXW? 
\!H+\*j/pUp]r7.k"܍7&8HHddrghAlv'Q8P2BWHWy;.Oxܭ E>mͮ)Nkt N1;䮝g˂لwՂD?q~{zvq 7ͩz}?mAvAC~]AݖoVi4Ey9n^mMr_yZp2 DV)hPvdc s1r3NyȓON.?*F1\"ZQw⏌jMKk jZ ]1ڤ2*+tutMZZci=.{]Z_ZcF %״ς<\R~5tvY0J}AW_-EuݬHnN(lω Le8 eH3{l>!j%p0O'<^Qn>}'ǻ6My㿎oOѩ<ⓣ5Ƹ}?8zgl4٠;'rRݿN?0tpqCGW9E(^}o 㞬:wΆ?f6,@E Drn8 E\ijc?/5L`^} ~!63H>gj*gCqĚ5.}-;LѾ$Vz7|Ct_Ÿk{qd;~o/_]cm~!;i_Rfd]JzQpSP|5-DPB.E7wI[ ɔ\HnUK%|UkUnzeq!Lm9ĽiA5S?lE#Vw˝z/ZPL$$7vD*cFɖV1IumЌv4G ;gѢkMҮ\>|t I5Kc8W[1ևᠢUf뀬-PRJk]$B"s돈 401ИU7T3F+PtRU=QKs[g@x 5{Ꜫf[@@TR4vp#0%cM!:F20T`&?!Kh*gs#Qw* QN!yD!8<@#.DO$~y~ksU 1Wmh^u*L%;KT},vysYUuP iH%5Væ tF{zs|XvIkNֹ[ʱ'Ws̘HHI?̷2shJKgBrKUTRJHQR$DHT8k]6Вj4 I{m͋j#r}HF 9yR 37g ,(R>YD,ڳZaB BQ:K+z o'0L'mTȗ *9ёO|`)ǷX7JNwh [2?- h;oPR6,J2Tbw%>,d./VA-#12q#Ye%.rIi٘GؠMEX m {Ze3@P: RL1, v[]Ye; R52_l`_Q&t$6GtZ, |FEEt(MUlA1n2,Xm#d*zSJi_t#fH57]Ґ?;1m6ߺ DaāŜE' sG!!.A D5t)Ԛ8B`@9|Ȩ T^ׂ7`X*f=f~pvM&!jhX 3 b{PT8xiT җMƫ9D%@ۈقj2AVa1 %OHv9!̃u- %tA\mµdHiD(2!Y!XPti,Ѳ<=K`r@!YF[[x+$@f6|gUXTuaQ% 9աk4=!V1dxaQMZ uY QHkC鶽bo]D]eb:\yS0x5 i,#z ԥ$7mŢuԆHT*Qw%@J80f;@]Ii(ʠva^4Kq[S-Ah[Ry !zbA jhjw,h-]Fp2zv huj  gD $‹2Бx5v"΢$12iU/_Sd@A5qDƣ""LC כEeXEƕp]# BPmʉB?WHYtggMUЭѠTf5áRk4GoMu^I"`#_h! IHu |4|Vs` ֛.26`^yM?t87cڋ~ù+,&P`0u gllfѳq5ePYf1h\ ?k1i3jFq5R 2j31򏓌 @zQF6k80)QC^"$u-ҐQUr#aEtNJdt `%.(H H,@z|!(!=w:T}f=. >]-`E^1H"^S+RP'w l!G!/CE7 F-)~Ba E5FHKCH4pY/xp`\+m,mT11mAiom-?tzPkFQU3T;*\Aj},ޣ~]ےYo5kPqab2u+so9hEuC DσU!-:ZSDàe ̀|12 =pe6\HmhJ7a#U2'Ek%þlp1:hK2h\CC,Z*cQ4k&)EH 8yBeN V3f !KkBE9:W뉟XD;j0BI>ߙ`j?&lzFO^rvvyR>UA & Tҳ0햳?P~xiT4ԄMhELm6{f{qr\J ) 1.c!6(8 XN@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $Nu{Vu"v=N %'к (C'!: $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@:\'PBjqkr%~r'86we"q+$&+N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@':Zӝ@ dߝ@81bV@#N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'9Zo݃zs[M՛ovվz&58s+2.{o\Jĸtƥ>Z]1`VCW 6M{ ̦"vEtŀz.dBW6콺b1] ]%v>[~= ~-th>[Fyg=@jte98]8 nzᵫ_vC~ѕ݁S^[(pH+ZJ[4A.56+kZђwb 6@?][|ٗM St^6lpe|tv@KyuzoL;oO..?z}? 
KKǿ}Әnſ373^qZy5?]?>?w#iRSJ ʋIn)NpmeqfrTIˁD*oW+~p2?^SsXKPV]&(H%3b0TU= oxR\|7+;0R_9=AӯաOgԱhi5mltP6״ gz7}uT7* 3"\٘HUԭm5eMZgD !\mBW>'P2~9t%wz!Z}$L#aG>c?Zget%[Oi]!ocEZ-NWnI-] ]1`+DWX•)th;]!ֺztE$j5F5.]0E]SNkdr; :\mwbZid"`ccrp!F!F!d5 Q( 7I+ز6- [lKWBW(FL ᕇ+{m)CL Q!IWe fCWW_DtJ[bE+Ec ,h"Z[Mm)!uXd sBCW6-'u+D)HKWT=DWgAW4=]!JZbJ8ƿn %NdO+-?R"~(׎]=Jtuۡgއr8' +uS ~vNWR"]6iEj$]z-&&97索WVUӊvHj 1.mhy?DDDWX7WE4%!J*Zzt%V !`CWWd-]=@R*$gA9 ˺HW\32ՁFAo|bqFcKY>+oOoܨ 8^͝*?,̭RhJk/4G E`t䓻 y/TqbB~z4ϛ]E1x UUQ9g<:O&9Qr㥆bI\Eo06@w/z.x;jxD~~~/0y|[u}X< n=Ō ,z;ݝA-4S]MZ>ߞ]p]Đɰ^F7Iw0Qwۍr6zD^<, :ʴ_GNٗ"'Z:s4EJ^~f]]huu^Y6S_MN{;φ^ #G/f+fgLUP ^xಜ,"&Yx+=]hMbvֲWoRs 4sSWњaBoHe0! "BK#5q[5!U鍡j02$(σ#rCRd*Dc &O T#\bVD,uRhs,5i&(( ?BХQL̆xVY>io=nO9 sJ33·ɏϖP:T J̞Zowwek,u$sF2U0i"xg4;Tlњ1޺ޤD MUrNҁ[Xʢ4t>\dCzj4oo)cAx4z}˽{ZĞ_*`)ediRqj`RTM?xWĔLQ;%8wTpnpTl8^Rx"JC-erTT i?]@_2i;LC@ҎC-)̨O wQںIŒPaƻ2O2&'!K 9uHP*00XhI*wPKIH.pR$YJ6 J Sk{ 1F|ڨ.0tBuS*quf q7\^ Ot/G} F&~|;d{jlv.Ec b]ٚ<21w._e17U*Odo#| eOo{b:f:ެJ#By9{j{cQ"lCL0ϣY̩1̣1,95n5]2XuSuRK#WmCn,:BЮd6ޣPq{.:'2co@p+ߓS Cf&i8\>Ie9lu[fsm]#0FP>O*=8W=Pjfli$xb+Т^L6iI'|+,"i51$[iB*nᆓ5Z)Ah%&jxp%)ERtam+%)L:{~ *"P.8c9yJIL&+O# 09YO$!:S1T&p Lkx d8Zi֒gvntY#IlMH!).QP^b}q#;JGowmfeE#$ $փ/I0$3aCTc>@tG2 \kqq[mTe吃BwXD.Jp{^8⑃g(Ai0}:^-)w]E"jؾ]6/bⲻ~dFQJ8I ;=U.i3?֫j(1y4:[QJO1V!_gw~n/:zwћ8E#պ#1:K Gq*@-BcK"UA7}Ș~^q =PW bOIo\y}@R57.B+ ɳO)|_?g'r0~2I6L¯w`࣪UUs#vZߥ^}&o\$=\ĊB { %lj7m {}^>mr#ٔ0I0 KFe1dT&3U$Hw2΢UZHlXKYiϟu 8KT*[a(6jqQ9t)(,x9s/Ef8Hk:Y.L&-ne|j4&l0ڴX)8;h*R n~D*րBI]zIBGJ5O䡵,TP&lqt$8Δ,Ufu%J'NkH:$MF {iKdSp<' `%Xot5rhath|+-!x_ :A#v7STܕlyCSf'V/Xo~SNӶz/!D-oI$o^IzoޮQS.qKDWS{}2R5OI׾h-QK~>~x;-/vSЂt{Qrr]rW\zviIt_^g¨DIa˼]#ͽٛY«ŘAc*"J|mjhW(vJdJ>_%/^6;[V.X?-d~zc@E-x?/q|vk4d:-M U+-7~X WDoщ=w R\u+xk⽀2լHlT-w~J}^DuXwLi9JjM7iҳrwDwQJ)O@~1Jjۋ&D'\+3/fRܚ\vlLSy- P82 #]1G~%Lg6xb>gta#kl5I [^`>7cg&r杺5WA!odWns6}j8QGѠ'|,X\.-J!G/2`9éoO,y1G*|{]-(, lgi-3"eJ4I9( ҫ F|לTjqP˻Ilyuq]}P*Y֟\gʵ4M\NٓB7j糛D/B5Ϣ) d<4iL/t iGH'fN٨>fRǥ3o ՘( *"'SDrR:I`!XT"! 
VV=w Dl "4BB%BiRj/v9wzL{u_NPmy_xMJ|lQqh eċ% Zlh `]ÀW * ;Z eh:/6~Lg_5ett,40AGP=Q+/z{g( rA/"ːĺ2xǬTqY_Ļ4~?vJB(ωRqe)IZJ9YMC0Rk B0>x"s!(!,ivMp>} X],⥏ 1 E]+ s`Á DPՙ5,.#ɃC'Xcy$eih8rMwymçzfk9.o̍J4 הqѰ`P~ JsTR.Z ye#,P# ǖqבQSp^P{.aނQ[ń(+E{qUA͋N$bo[ʞGd.V*c95uC]O!UT}:/VQA3W,TIL>1h_$At^2$;W\&VB㵵 pR=9RNwXzA GRR5c5rvkzX.B aЅG/*1.xC=;7mOM^ŀƏǓƣp~b #Q0`R#5$6rW'ͺ˴tW@)T[87+P2c'D\66Qˠe2%D0#K3hl8ZarLSոP+km>hnuYdr T* $ Pg,$]` Yu,*m| !$B Ɋ"5)y6jJivKX$:_L9ag/Kmx2jǡQTֈbЈF1MV[|CZwr#t$cV̈́pFmōf@$#9=RVf5\4(kg"gF"ӫeSuVCόgc!z HKH"rnЋЋ'kqǡPUև%8 QwQ#wz_1K(Qg~tҰя*߫O_HiV!lmD [K\ucib!8O;!N A=dNzIG&,SB82'P$u_xf<"n~e=šEo}ozbc۶ {J rdԑgrTs.6S¦.؈N =d]EeSdUEz9>Tve]vFԷ] )RCTI ml\hd$YVZ&2&=Ŧ_K~ڵP<5qBkVX_g # xV3z6+ = d4"r_J`ȂJKA,Dv!s# klځ<#Itɀ͕69r\e2B|Cq!tDf$>7ڭ4] M%$]} G2>(\_>|)+=3?M)l2DϒFc0S6;( %EN0NjUZq>NZ i@U2]F& hTdYŒ>#3L2%RGqTgjwg/I(0CvM5wmE5}͎ٙ 5?[M7=`<7w?Wۑ+-2^E>4_U' QoFÏˣjt]K>tr$]5-Ҥ-Zg4F/ym+Oo؏_6ރI qj͌  kƗ77lw@ә1g+6ZktV\mcDS%=MLļڰ=0 BX ), ~*HIaRSn")D UXɓ"S"9z")%[+I || dH+ٱURARȵ9 \q:*ҪgWEJz& RRثl^EW7n4.jI'-nqy W_da}t kTm59rL'.L4mCYtM?٠kۙ5jMA!ZHtt,o@(}Hg~pW;G! !zrB/`ʔbh`2i XЙޱ2(\?Mͤ֏kw-lQK5:k<,Oh-l?C?C??sjHQ̙%L&,3(-dL:N[ݖ[RrB Ke)}Ii3r1='83!VuٜA! M*%RF]*&=zLy!RI)H1}g l3]o]^_k.e g@GGN>U4ȹֹ\ڨjʈkӠ54hӂ',6x~[{4 A|9:\!A Y2)CV E-$ȠrAT0aԾET eƔ7>&E$*pəRn02bƹl 3q63Ć. 
hۧ@4Y73!lvSVjkw;9^ATYr:)-FdR@h0ȍe^Rx R\w:2{UrNTq: 8}>&'ǵ/Ѡʁu]3q6<1nInи8wk%Ϗ"r7S4N#~EY>gi@h޳\Ypqи: .fnt1‘SJXa|&h=9JOCȡGPN*h/m Ü՞$,iae9xYklNԂ[))3 )E\#&rvYVg.L\Q8+Tt3qP> MI0tbUe19 7譓)y)|܆' x-%c<*5Ss1G1Г]4ʍւx)=l NQc$ʶReL̂AO &gz ;Wq.e3q6#cw\3,L3B1 J&lx1zO0[5Yjxwu1p41&`KMtV"Xc)sI[Ϥ%¥q6͈"0=() ¨FI`F/MhU)Mqn]p\cAδc[ԆQz죓p&C+ >KDPty˄RkqyGC?EĐ#K-Ri!D !GH&&$0Aq$ƥ,SzM3q6aˀ`<DL?ED1"{D|Lu33LtTF9wX`=D 4fYNk8cJ#I(hr}0TR\D' C"1"v&jI'jO8g:;Ӓmq;EqG5>Bb2 ph+ex(=@bg?'麱gOb 0i >my^JA6hxE]hVc::$_[rEĚ֐%UOeoR@HBr8@ȭdC;b4M;쬻hvב ]q^DدUcSqx<ù)Ir)KxPT &zބltsY#0,XJˉ}hE\{.1 ;.t@3q6뺺0;lK2_Ň^aЉz5> [-8O[zy|W:ji<*R~WLΘ'jPI Kfg!yo,ȭZJ 5d]g:"Lٚ@:Oe,~J}Cs1\~>`>dp"W09W#kGL]&oHW*\9P0aD?x;3BYkezMO`Ya+^~7߀dzQH*TX檋I`SɃu5X'tssMT ՘yu;hLLTRfYu$~hӻb;gQ[JP8z]X"aorkdb>UW;OjBxGS1V憐p}r0XdUP/8xzL=_yI:>_,/jiq~QԒ]!RҞR=tWiNf[ʡ^HJ[CJ%R8ŒexLϩ^jUeW6xK &!CtR+e"N,Oۈ%f Zƕ-ry֎.ߗ·]EvQ}')ai7Mh\4)VJ _gAv7gfҁj|y2: _;mt/ӏ,IDU0#2zS7wmk5}r/wl5Edgr;^P}j t;rEMU}wDBԲ OhL)5'6z;GqK]0sr6%eZ9vY3Kpٟ^_o%'Rz)6őv%fMO=~US蛌IQ~|\VnTsSJdJŹuGn C<)3 O9jMڒ3fJ@+2"%CSK\3{I|sBXzBc%W/X1P҆R5Ⱦ 3Q%: ٔfQ:6 mZ/lk{ +r[S%Fd\[[&GȤ,IrVԚ8H{!X[MrO{9; tb}3 RF$AYBK7 4$ɩLYFi1N8F2p@"kP 2dOZU#@9k꟯@ZFbTz9dp8tWeϵׁn6xm5YyڋF%#R0$qQ$z ^j E}>1s< [fČY4ZfKHNr(9!#:US=Ԫz?i%ݞv)K1hq[䱁fJVl(,S.bڑQ4tq5xRwHw7^Ex?>("e1X̵T=.83F1:#\ƒ)Fe e/m""DujJmΌR"]#)bM%sizy"iq.m,[FLWQ߄QRˌ!FY2]`}IYjSz)KO'J{q[ ^乜HhhQ˺V[`,Gko_&jqtP" r&&Ttw `Nt=OQ+L. |&yq:t<q]waNX´|?]moT*L)ƦoHFyl5;*Η+KlnE:@C9{KM4ue2"'+sfI(fqE,\PKIhgyp8-OK| \"E:r-*,W}02ƘH&4S!)3 rDD *R4%Hd2!c:À Á' :e)$YD (2kQLCwn0z1l_um2S/~_G'#鷅IJ-//_$uc# -` >VQG3SO+Y:BIG7*SBO\AxtLRۙGO~ #x![n$S,'5Q콁2KL'+) 0,\gO6%L,ARKzRbt6sIG'`)*?{F΁r}nW=R &CKN[$_6ڒ2N,I= [%ڈ]vh臣e2ɐNi4mgB~4Pe&Et}Y1Nm!v^dyfr~1\*lXDtWtC=Ы :ǯc m4,e 8`QS)Xvg߁n] PDr 8)#IN #\/.hq@(w@9kG/D!yuYL-8lkgsinۋ"P9<%se~?m/ ?'Ώm_GGxVbar;\O:]7?`lq;W䖦+uk¿H-i%r'i1nh`qkvo]v$n! 
xl[5|8G=4 ieC+}/cp=o^w|ϓB>aڝZtƋ Gnrw<[K}\-@\-Z/8Li;Qob3]T)F(u6ƓZ2h>5ݞHn_s)?w /O ;qF1h :ǔew.d:;-p5Jymbf8]Gkxbr>ck:x?rxb),m&,L'ݨ}axFlgѥ/y-?`9FpJ,JHI d+Yiȏ6 G,sQh,!J*} /o+^1כֿ)eg SP|% F$|6\DT4˱W Gq/?υdw>9m^ѵERG9pV9'>/MƵv w ~lv!7?~_/o)7j2>&2cr9my͌_K~IW˒aHdījHe}~mrGt?y%gooy|={ J5ʾ&׾f4-ܠO>vt˗'t m_Ҝ=EoH 6_d%ы5?~.XԧOmZRi'a %'K~-4ݷwwfUpkWUf4H"Z:jJf۵jqqiEUǔ: 4xSk2MKwŮ|l0-o٧{A/$ _FrL`8k榳dBe C{pj@XVpDW9}Bv₻n$).ҞTŲ- eծ[$m?V+6|kܠ+_Pߌy28g:R nږ,di!m\ɿn˫ofI]x=QS Tpu3qsWmMlnH;{j3%xqdzrՕc|K\?.bIZ,NAU>T65𚡅ɛdQUK=hgN@ZQ#6~YrnsW>?4ge9ϖ3 2d%%`F r6*p9/҃ _5Hr" I-3Ș3F1egeLkm2$ςRJ+y3M6.׉jxI6ؾ|ӟVƅU/{OkOTk-czuI6lfqQ̚{dhJA hM^YQye֦eֺsL&N7dg09 Ն3hy)jg2E)3 |2CA.pR" 1H!9RGt6K\V#琔V/wɵOÃ]fc>},QCB ,CBAmfޒ{xDxUZ`8ITgVv+g\I2Ba j:caǢk@'U ;N ֢R61HGoV,hA:ma&q5`$ ̯|JsgD[nƎxCFT ȬqX?J Qq*jf J*$m,LB!J.(/emngDA-JZY78 цD`*gFp$ECd0<!&G;>'ؠ !sQ6%Ɣ!2C$B t"ӭ"իH,VG8c4ik\qs&BP!%;T(2O ;W/V3!6\Bەfrb|g;O;ooi5{yT|'WÓS  ME'ߕWp$?d4.c.SG;njPU)b?!ygfTc 'd ;"rlxz_#$3_-K-䍣 F:#<`ZRJ8Z|Jܱ.lrqr1*zֵ1ډ 4|F1P:K}ݩnSkg9E[uMyxt1c40KGzݏջfn&G:A,or{kI-Xff^̢yKUPzVf}ewtWݭN;u}:̳q#aGiɴqBigPMz!0{hwr;O3yy<lk>6D~v~*}VNɕntW;Weh|NCԩ6ďv.}#*=4VծgB]>w<Rft~x5ك<Dğ'xI%wH,8 Q 5"V3Ȕ>ΆS)N`>j i'zwq2\ژX5eНuPh]e9Skn ]!ZБtQ2hҕ&d42\ɚBW:x(hҕxU 2\*=|(u]=GRRFe\72\0M}*TwJKmi_ʀ5m ]pCWVYϑbQ` ͡ WЦUF+>-,T'CWfá7D72`j;箶C)a;6n+ծCO5U7hpl ]!ZȡUFMKWϐ '>QG0NUgp ^v<$'qձE~%!is'AD ιdalHEnS }Fvd4r"#T'p™"1T삁: \r9Ε<,xvQKc>w"pM,0f^,] Ҭ٬LJU ^·EfEp{]Jr*{Q/L7jЄDD\ュmfw{fg"U_,wxîG% o+Z "uJV9^e]טdj_SiYJ4VZajI0&V&wY߄ds;<䩏UNyiyI9E )PX8z[wiٺ_Z6s!FZ}&DB=P,RuLS:{Nu <^8!4)h B ش Aqm([,@fH3>+0W{UD&V,ɪ{go]^kz^.F3"h!x莱d):$/#QO'$-Xj[}3"m|P694@=4&X1T3QD#'A`Jk-śFt<%VV=S3VCю$dW5ka6NB҃𖢼2 Å H;Lͽr47Z.]R:9e&ipTȬӮH68G7 V&hV?y  嶺Nϓah 1΋ן4;1&ng+z?M U~e} A0-c=N0T=w4('+->_/WsrBQpBR_4"Q=s)#Z?riIi`%pGaAkRBi’U6QKH!] 
jR 3),⨀ yWrj76ȹ W?͗ڧO|rmfDd Pg詋4ʪTF v !p5oxoO$ dE-rSK*YbmEKˌy:v1o uu5 CA̔uy>]$M8 |$qHɉ !gR(,sTI!hW=98,Kn+MAA?w!]azc<c4 ]x8I8xseӋW??O%-{M/ ?fSY|p>$Sy_yp1 Bq٤K/}TeXL:uy,hoGb.7魯_O0frmy>@󼇥k.6 XR6TTmY^rh6^L f4~s`R__\>e%Oe5`H>QK>`%W[eΆkɗ1d/}wYux{-{aZT&AzCre?KuUYq)K>N^?O8: ʪj~})$~3~4+B֟st4}3(aV- -5u@g0-[x'ֻ+v'ņ+]X7ఘ_X~&'iBb.;| 淪07Juۦ>.eiTb7ǴI(.#)mp]G ߲0OV h̨VJ{f>WM 0Im6\7ܰ76,P=/{Eg_x8)mŖE{]#Ȓ&цݡ֮\榪n.x%píz7B1(,7z,:&u[(1@.(U&2b.Xi]?yZ:j``ZW"hb:@xUbnFU^J?[*@+GD\rf[PbFHLj.?Dj QOFBQ 7 gI/S?)cǤTPXѮZP"ܴ+u?<&YVz43:7*3H mRjÔQBb{+g+T+abHVl:Q /u+yXQ ~"hԮdNFJ3г@3(" (O (c/&wSzjS 4#`]ݽz3˰ C#k9hp(E0'^SHߕfݷҬcJlH!q8  Du.!:3wGe ݬLԘsc>jaE2 [Vί]qS4c;KHVHu4q5Yb=|Qe_שFli nΈn)}z:vSځyK*xHaToJ!~]E9}=+vS <~ӿGOϏ1Q~~v|)w`mjߞKCxT |z3l7q9}B EF|PDe@/Ơ˓YE7ґ.t$@0 B S$p3L̉yH,,!&x-֤ӑ0Wxd"jS2̵b0Z<Ӡ KmʮBѩ ݙN+g:{ڞSݝoo޻3cK'5syDmgaf'p%G̜kΙ24\ES,2eȑ#;lGټs٠XlUcc cG4B-D  Np1PFZ}E !I%3U2(R$a9-R$W 2km:WFr[un{iNYέaI?\B1XPpHAU5ׄFk \i$ԞxL4"0JL& W1/jB O|J/0Zel BlR!ƥh\IYbPwc2TPhL$xX[,oMWϞ kE(2vk҆%Z('rPcaLvy(:oS%ȯJ7v#͟MM+mRu6 [cR;R!FGjd؃Z{1[,} *z vi+ކ,y,@0̦.̔Jއ< VE. <{Yoշl88phkGM,7۳T4nyؖ3?1LZN)6tߪnU¶CZ^u~㴘NpKpw@(^ܦ*{<*}:EMM7#l4]s}߾n%lmfIB;TtEfF)-%:+etp<:գ58 %, IN :,0: bm+^-|Y7R-/H74UVu.O?)[Vu~Xy;͇jf6<3ME|aٛY|Vf^m/,{b4,*LFSK-˭?BӻpWȇ򢍄.*Rm9@ʜIڟR6' NQBߞ.[tar<]n5%QEE,*(Y':Z+S% A'qvT.C0! [X1c2b Dk45 i|-6∹0>eX(Q<\3(P24EuHjZ}5AjЍYETnmQY1$n'a{|/ |ػ6$Wڀq/"w@;}0҆$eG{~Q(R"1⌆5UO?U]].DZgf"!HQW"&'!LEЫOKcl?_ 7~0ٷd¾gZ=Y9N>/-Iz~ݧ\ ~:c?[^Nf|X ˃ )Z!y>5R0g(D ģgվ˫̮79ԑ=郯;5c˶ќ[7ۚMYC+n6HU,0xM׷xvY`m{kUGC֖^1CE#!mHJ׌|lnR{߲7Y:-ջۮƱ]}sœo+w>㛛g\ya}<[V[?T/n6a}muWWW}Edu`5o hhscKb*;$&XKLSBRcqwG8Sg;ءGQD(=d&1JB "S*:*/(f˧Ncq0.)*J )"uR)ΠW= /uL-Sf4:;Vs;5d|gumGՓs.=-D}voL^a4AQtC&F$^edBliЁhO #vRqpA,yY'R"jlXJh瓭1)EF+s2>h2"9 b} ؏ƗB|6|vI(O웘;ݳU{BfcqdaHHPI"1.k|MW%A YKUЩG ޙs|Sv:z PD*كXEz6ﺲbWl'y1ٱ1^s3-,} _%|=S2gJtdG/}94~f ώN1uWGDyr-b)1$hE}=oD&gΡ<8Nd06hbtw6b TBD9r%3<$m%8OdDg@1qS{nEm|:r6L@bǰ@9-b8?xM''w85yzg74~8B"0+hLPl'dȬ <~NJ3 ([[>b !(̖&e fh82FR'٩w'ܙ8uǏ*0 "v"e=">6kb%V$,h!CEB#x ,C.+>)vfM'!*V,dU=SeH!&2I[Ph[7"v&fD|ydQbiɮ:Ebj|,0CN ,D *LJSa{pj«Vkz\<. 
vnxxIl=g?NP*0ɩ:{*}rXZs}rR}r(}rn5#k ֯ZpBi \HJpTpp5]OX`dઊ+O;\U)+d)*+ w*pUEqpU<2gWjirX]vd U%o,1,u^wwe\TŞgozw7xiÆyWu'GP) ~0A l &wױ`h?͏D[Q?xw|F c`̹Q֨;5ddt+g:w$d5v|RjחH^gݱHq'DF Zs*JWe)' SP UUשS+Vz_hR`\ڜNk=bi-ұUzʀR\UTqdvXZppUD[+o>iǶnj1_vDViOn58)Okt7x3X9񎍴(@|fZlQ`k} Ems* DҮ#`!0⋵s+:|s3Z y_ت~oxxݸ}3\7?onϑϮ ݷۮWW7RG xRȺ, J&) @QZx rK2HF}_JER=;&+ݟ&_t>_N/?ÿ榔,Xɒ}|um_bLrwճV,wU}ɼ:|݅f -u\CT24վI)؇+m*<D;wЂROdY_9۱՚b;ƅ(TYЦ ZgU/Iey <)u6)vЄdDPtglYzGSz+;DZY$ŅG~3뿢\W109?z %z_k7x) &9ʁ=:R̊fo5i+V LSVJDdBXZ0&PjO)Q"fI]yLH;M%ŕ ( ѢlceeҹcAJDš!ecvvF;#(Ҟ"\PĶ2#&\h }rհUO:ai~nc)әT_Y9Z- g5[ Z)֢ˊ$0XIr*0'lȃPp]Sldjkt`|f(K-;FGf(%,aOBJDZg)*?ȄҨAEր̑|$\Q`*F)QDݝ!X0(uL}DNײNWk|RSsv?}J)+dn䣧E1Yunfݛnns ~׸M|>ptګͳ/ r$fI-J?7_6ƛqo~{>x"t;=qٍwAQa0m*]T.(L6ƆΠW= }n-E~P?AUCkY]7?ŝ;Թ.=M0*Kp~z `KYJyNQP8IiѤࢧԅ?$'GAw6-gޅy0髬F9xcGIY)\CYFy盠jR0 %VO#iDwHZ͈VaIk9bR/xDVa&De):LZ]iRuR3)5^X|-R D,9`V)TjOx/W2yE%B 6K1MNjjm=+#VZ$zhd$ ́}~O˴ WE1Kjl\Y7u"eu"*4#à ]AZ|[2J4(2WwQRϩQ<NgQu <{⇟^?= ug~z_̂ꨍLw#@݁φ_wzIqh*Ь˧'|qe2r*.}B9M\६ QPf]U| ]_l[:~{$@0 H#@gx&X8l0kA&K咗 t5P#QAgQsa6 6&x晆]ԦTY*ڟtrSPO;C >gߓ#uɁgvm{O2iD0{<=?ӉZ})JZbiIRɉ2\Xb P .sK#ʅQ1"S⃋b{Va0$Lm;¤2Xm%Z  Np1PmLa䐤XtIM ;Q!%"a:-R$WiDp{ȹyX&Yӆi[3 A(& %2R, p&4ZJ|t? $ģg .Qb4(_0ay H9%(ZeB]+C"ƥh\2[bZ3bL'#E)c. 
D4V_ʽ:ۯ:c?؀A^]Mj^L-(U^7ia5H`RGSJm@9lcN&zG} ^1OީrVi~>)rn)3K\!O1F: lp6H9HysQ bN憱!#1ۮa_ }mR lҒVJ,[&Ԃpw#iKlw13NtsZo{k[ڕ//=z ń2TV/H"ֹОklNc=1st-'@'+K XTQqOtV( Jă$N(+#9I`B<TJj S띱Vc&yehj iIe:#8b.e6; )S}{Yor 8@h!]ߋwv 7ȍ'ק٫EXǓ,:ͩ(q1ua8 Jtjr>5> (DVLn#dEJ*thBSe:Dh ;{`ƎpRg{]oLx(}Rfr.+egoJw @]H^fUfV 5w?}*>D-4TDO= zeZ#wPe, K8 leYa.okXEPb"|{kh6.#ZޣAllSvzq}\4:xso6W\pVlɐ;WY;v$SZ3[I+ D>1i[[6ӎEsh]>jWw9*ntD^}6e-[wv~͝>;r;?L-5n?Z<.< yôZn:l}Yמzߜ5oz[h`.|z0K0,m^rUz K+HIv͙<7һUZ8}{#Y%z_,E=ifW3XVWoWuu Qs)cr0>YOU0'oNI>~U&Y:o.1:"X#bOg'1Dٓ;i=}2 5zʁ>}aiKILH6TTGzUJȻrdRCtijKb# I%@Oe5`jh%0ͷ'ٛ#jjEjTq鹖{Cf<"mpgɳd)xdFZTIue&d6hd,ZuG'F/A0/Lr4$ʩ`V:d L KYK wB,9Й[>ewb]]*U$\sXLx,{}E٠`[M> Pۮ!=W\W\Xn-UBŤ5>Κ_me(%2$K/ҵY.$ 8{R={KzZV ͍O̚I*v*BB2HB"$/Xϐ2CۈlŷKհtgk"Ӈd)k3UᎾ-$Wj{)=)`q$6e-rCj뎃ILjZ7g&Vev !wzʫؤiM6ա1ּӭ}}›¤ݛ53`p}YD~7䪎QFacUP*&KQbytGUd* U\b1,ػvsZ(\E;P<]WrLɖj1i!"N%bΥ!b"ǘD%t8?g;;?Y(4P# 8(uh׈0`Xmj9&r6ȹQjY$cZlZ ] ;#f| Z#m}D]P|S! ߍ壚D<<eeuUIX u A ;A礌9Y9®NVyXș4W<9c$Fo`8 *FQXaTNZs)d@, hm:0^Z+Lk*}M[T!a=+v-/ "110rjCZ0z”:l }G% w**$ j`$ s_AH$*Ű[)r jg,|ZBBg|C:/]yj~Tj-j $UDJQ9fx,\IIA5(˹ &Zw[T PJB))&zKGm 쾽S j0I!}"Y$}44vw7ӋSb@fQe˶&eFfI0pdU;j^Iup3qVÑ_ڨqoZZD"nE|hR62T> /C K䤖.ARjLfMLGXdn5fJev& FXY\fV`!bLV⬶3Ou`8*"O{VɦvEvW|d6IX2ZRxd΄ )BT׼vmŽVǦP7%؂ [QwVw+w 0Y G'lO?ɃE1 ?̵PB jlR|K!)~B oBO FlEv^3ta!JZxB9xFQ*Ygֽ:̺/ykǡ襵8{?s(LS|9#\LY 9j^Łh(JA Vd\Xy9'eL;9Tuw R]w]% T͹+#TBXTF&|'$PP@?I[] qRFmX_gnF$RQ)^x ɀт)ݸJ*0.5A)l4d-V M RمL8)>J yFн ks娰K∫GLO#;|jN/b^VBb~)?1w^r$Y_Tjy kk+ S0LP:RT(b)R̀Lsv<^w={$ԩg4)dT ef#4n1\gi m`╕Jr5܂,(lc+,PΖ l9Y3qԳQ~jvՂʷCQIJPAGl k :Ȝ&TI%!僦`tW*^Nx "K[ ۥ F!9-غyO)!JQ*y /ӺCfP+wyBy)dq!XL Cjfl!YՇ\P4 pUgfkb;Oy]|ߓ j֎-COS!:EH)Fg Zv&Pjg4յݎvqz75-}S5#y%raTA&l0IK$ܲWY3Va? 8ƿ߽ŗ!^+(*ϧgߞ$0 _j9FFOWg|Ӥ[~OIidz1Jf@dW&!G8w?Flz;fy/<>Ln_hwMMlnqdI@P~^g@p?aD~եg =̏|j4(tyܸ2"jhn&jFo]iT;T$ qSfҼN@: '%cmH8den:8XVfe` pUG:AtL^[c c*DDV%Lb)BcQЁ0Z9GVqq>ֽfqKĬ[_MPo7PI8P*s⢀C JU=CŚS/^tqQ7رz\=NZݘI~+s%tꅒB*c\쾛* cBy@j `k>sGW,%<1W$N8?:`ԟpb2τ$/Nk~Q Qg/''L3=7|-90(s'X_o̕T(Tw|Nb!<ї/Η Չ'Uwk?i͍>O55r1* zgSjx/)JDc$ݰ=ǀ?{`=?(Wsz`=Zz`=?͡? 
P ̠}8$7# y9z[m~9; yK1^fCO=No!n PD-Fge9n5gb5o Zyu( 'F Pt15"_IŗfQz!K"ι~۽cFG,>S(#!+tr /̮bc1II*k3:o&4HTLIJ2*Iv&.hf2;,|hgWV*i^Y Ygk[eф@rdL`XϚv;_֫_lfuW-:*" !xV J:ȓMAbM@Qل*$xV4|*R?7k0Bdi6e0F1DoX) EL,}w/O4K;L|5q;\_C%g}ߓ j֎-COSi$:EH)Fg DU׶v;",YjOt_7[C=W"Fh()6 D-[%3,)kN=`l_v4[!jj{Vp#a2̇\G|DZwF7"IGlu\42e5* *؄!Ue%'js(ɪ{}*>wXLa}4OzzAS\A-RJ:#yFҙtnK&P54 B >+ ajbjQL5H/ ǵcJ:pˡw>L&43aJ40'xgsyew3QtsdnGdNcDW>yr']JE6(AƜOK1gn%c¾r[pG)7%4u*:>13my[J8Z2d *(mRE )u~MQfO8,@pίc θ~ՏK,}K,nF˧bl~d% Sq2HH9TbFRu|%9+h]+Og)[<>@)̎( {-$?~:Ggoh5?6p% Xԕ&DN}*W?}F']td,(RD_k%"@%/XjArJI P^2!*MR[+|BJH -8T\SPJgβ EIj%ΚaaM)_um}$DQȮ(TӍ!X= )f$ դ-x3OhzO *zv%Yr1JU3%H' L*8CmӕeDQ\ Jdš6֕Vh\:t,R! ]Y,Ck-"q0Ϩ4UˈPj ĺ*eG iCAVq_քW 60c/w0Zٻ\C3}cnGsVZ78!՝K(~t`Yé5 :{aٻ綍$e=SCW;q&TT󔸦H.A9QR߯g!%A K"0tя@ޟa f-*QWfAPI $.Rlȩb2<]|'>3jpnUde?@0A gH2k6Td"^kW`aX2'&)7>m*(R5nP?NM}~xp5Qk璱=⬫ś\FfFRaQQRdD) ,cs/$!52mD#$^޷2̽b+)耷I ߍ!IJaS(Bu[%Kp3"'GQKI DpC@,`Mh6؂!K]ѱXvS^zJ@|K,u;v`ˤZ30H|#+Ub>HC _2M`8UĦW<(l~{Ӫzz<8:`M`BR9 `:lyB8pjV0z&9 S2yv+Q/>YAnywB%:3z,;x'ܕzIJ7*ZSk^o$΂Mn 67BovrԾFpkV l%=}\:s+E* ~e6Uڅy_;fzӅϋšI3ߎ\Uɑ8^Ih[Ӛ?N|8)^-$8q%lp]zHI@{WGGٯ+I Y*jɚ]%hSHrzT>qBlےT6-I1ï5rh@Xygd'śqY$OUJQ[B UJl—HL_;^F}yb6jؐȪi&:t,?"vɃa5)2&E_)2*;ȎS R\؀!8$Jn[Je ZDۻeT<Ė]7ZZ6EU$6)kOa(!1X3* A0 S1$+6B7_u0hxmM`EAp├FvE$s2ʤ$XT9 NXDI$7 n+ Dp N9Y=$0" x(ԣ,R>DmOUq<KN?KNN(@P`[CF8Vyd qzAV:3sGe ܬLԘscޠH-aXi1q Vw|]-jJkt_ >?* Fed;CXpՔ"v6k Eis!<8aJ Bxʁpa/#_Y?(b'U!A\ӧL62  2`ΤSw`2\A9xqFzPk~+[tiC(e3BZX.*l&+X;YyMHaM|71)?߇bj ^6::ZZKƈ®{ZSS 8f?Ofb='ڜEռ&;oWU`ْ-.oq Ϋz+Ƒb<9ϕu_-w/Bm# :Gh#U0rR$xd(I6.c/mSHQI6WFQd9דK]@I,8TfMjf?t EWcհtɿvd~^"a8E9+us8, WwקTae8) $%yWoߤǯNoO_9>DW 쏤$c @آ> Õe o@'>p|^7U)C&ƥh\IYb)]ŘL!U25 D4VhIwS7dHA2XJ4DD; 9$DQ/c{$cP_8\oGn6'22-́uӴ!g%xL[ŭggm/5y֔Ǘbս\<8?LL/.g^2Fq}{Q˫$GyݏJd2Se<Qr$gyC\#ga}G=;,^Ԇ5uz'&6^d6Ihtݘ޶z;Q-bY^4poRy?Tkm$IB3@cmv>yAޢ%n&,^tiQP)RC%Vʈ<'2"Nd~#?_~::]~A-Nl|L~Ey9ʄwUw?Nx6Nj|-1=rT*=у7n{q!E7{wyW{`"s4Hq3-bH,Cty93}ڟ=ڟSTH)QP@$H" 9T6Dl}EjZF IR1@*;[43}T03 Ufْ$Oc`7=}a&wmxCw./l}_ÓC|]X7'V)J 3# 6P-% ŁLubm<XtM d9D$Jdu/*`-BVl7 <=;u1ձWk_u lS5b =+z aFyP[fyẏBp<*̙lv eRC]j:+?TiD '"bϛڽ#n 
B*h|N'REq )F T GO3I4NpԕJ.0"RrԠbN!FZ&"g#@U&ZmfْP> Kb4! b 2CA}0D"xAD@P*Yb& ZP,[TnK W6'QP gOunU.)_!% D@%kjТȶu!Ed!hfIǾ&,eXjB$3,B=fGRqo?[͏]=jU#>4]1ȓXp-&B FRf=ڶA4sba4*;-cB]lT2ک@'M5Z gG:,#ٰ0}u6%E/_|pgY,֢1䴑4s&mk«V7w~qoaٱ?4ᎂOVFkE.ɸFn9 rLg=nNُ/hhxtQ&,GNZ×4I9?Lfow= Zpob㡫OS&zvv!}3{>;DdA/g^C̈́ls;R6񈩇Ri.փVLzu3z&]Ϥg]w=gLI3z&]Ϥ3]Ryz&Pv=gLI3z&]Ϥt=g{u=gLI3z&]Ϥt=I3z&]Ϥ{ĮgLI3~t=gLI3z&& g!I3z&t=gLI3z&]ddȇ͹+#\BMRI1 > :11@5 %H816^ޕ=*]S[gc5ъH$DD2²01dW2Q|Fp*Mp%銁 jKbQX0z!NjШu 4Ky g"{6Wd'x#ׁ1m>˛ \[֜]NSYBbq1QuХΙm6PTuQ\]h(Gp"eE dO"K\"Qٜk|fx= vIW) ..ԵO0TI1e 5hEHkdHmTz< NHK,y)I2^Gc;k6-l_߀-;?0Qs6&?UDac4%1(KFE[T!thi__8- `E.aKAdKiWD(yrt jΓ:OZ~@B\Lvd}Ng^ x4Ј씨kh($>uBԣ+mǬ\\Kk9SHS^W{ތhDNZkZ$LN `&FgDL٠."cޛͦȾ摃BXyAh}![:'KftYWV`G;Xlju1z9 k Ia策 jr%P*GCNJrzT[S (0hб" _ ӒKlf;M cV}#?U ⣠| cpmk>aYQ 4 &űBGZ z(}u2cpT5epTa ߊ7^`y/S9i$EQ#z)e9cDݺ\^bդĊ}ⴖP԰iY7rWkO>ynL3}FA&0T2#QI у Ms5"d'2 BՍ.!4o35i7OyZl&8P-sE#W\Ro/shYIIοt6 /u'<3``^]XOp_c%8M>n7<Հ?|s㹜 ׻g?5ZIFb ל\]#BLbI G;Q7sztÂO \i K}86Z|ǔ&n;VXk+{|4ø<_c?,dޮsq9=͝{O4/iPфMܻu~l|=Zdph.§S^lI%};Ŧfvسbg 8,r`_{|%DLR2;eVhhltMV'M.'.R9c3ϟX^hic:oK|nA[GJKYU}yX&+ <LjZ 개N. ؃28~7{鍊{gϧgWkO<^S}CiWW-&]e~3@t&qάՄ#4Pߍe{p) eGf R똝%Z՗}r4QkJSl?5TCMNDN2S Lvb^t+x,Iq֢>\.Ѥ2j/=:EG⬵3 FפghZC(YFZ nm7Q+|z/N6z#)-`ݺFW( !:\f]r$C:l|OuP:XِI4`DQPg(a,IBl~5PBy- m) Q RUDt 0*eE@ژ^֢P͆e7ݧ m(@^C@JRmxA*)`&f0X\!(mɻh-*!xi^F:T0g̈$Nj'l8@'<̾n,d,6y H( DkmG-}u%`Sd'rb͓De UY;;3t ~ `Z{"Q>*d9IJb1y]5"%K:`"%X[V ) Su(L"}r.5Y/u"%Y*x^ͦ T/394->-,<^0@XrCq6EʕЪ#@&g\as &+`O9GE=qK79 3#BQR048y{$.e8Sh} ,1N8FzsL@F2j<#"( - ԝHHF:R3pC78d-'(7iiIQ:Wmr$˽幡wZ}NJeTv[ABQOFBQ / gI/!@"`)Š^ :iRt%ۦ烖mr ΃\rI fP9L%$2pAEa2(Fa*mGʋppn,@Ըu)7(#'R"%]R,'&Ҍ;$(" (O {(=V>p&Jm&]߃JP`a/ H-Ρ"aZ+O0>HxM= "Cd"߻8⎃f J C4Xq#1.R,89h`R Scν0j JKAJ Zwv !fq6pg ?+;KYz+٤ct?ăWŠՔtQl'4y4q?ffa'?VD gv9A;n%LJ6"ecGX" 2L9s)(c;:*Ga]]tbjYK O \6tgu!AaHc,NWk3+N >lfJ&R(zab`q"tXd_wzT`?)' g@$&yo>|~~|wݷ0Qgwpe\JOO@=;C}?qh*ЬY ˸)r*טƾd! 
bG%<M7P/U?7b_AW#@pH: { "s"t'u JkHmlhK^Rodp75:{JRP@F hM3 Zr AFGN'w:EX,g|NZ#{e'x6U˸`<~vfp#G̜kΙ24\ES,* 8w(,<-Va0$Lm;¤2Xm%Z x8epq@n97 #$ !5i Õe o')* /ؓg$ۦ*ah8ĸTaK6)K"2U1&`H%cJAI7{,ռ;sBhxԍY+Eq์#@e"(44€TPSE >A 6H[++fcPo|!s l/ v^., &uD9Ԧ>s6;˜s|Q5n"W_hyU]jcVkAD hʊ^?l{I$-[ʷ|2]dHPj\΁GL29!GmPNTQ0>) fSn6;NAP tYOS0`zI4VV{I *?|$U*c@Q_r$ѻPҪ@G~Q>DTM$:,=%ǥrqcE0Ugp5NKv(+~}_$-V046-j]d\ ;t›W"_ߊ-)'W,kǮvr+hHg&5ۻN~R@n!u1ilYshCru^=,礸U{2 ,;9klݽm{u;O\b;|v|O[knZyeښWq1?<fo^$n7JϘҌHR1 l1*JR>##QJ YF{ oXC1DMbЎ0 > n IM10eӔ"jB ;e XH>92ҡy))@nН҂X$igPo^=A)/me-;tw@%se=IM.[3S|2̌*쀕X+!,%C0b&8WLS'FyrD+'O7g:_\|q'*(6gfBR9B.D#uqdy:$u 6.>ّO\7ξp6NPdRɨu q!D4B9 Xf+ժggqlBQ t7=ɭ9i< Κ#¿P6OU^rEz(9ͧ8B,}, f5#2Ti!$.$-Y *g)a~rѩǻdMTeSgN~Gt@iGם s18Sx~Zc|%t 8*C/<|\⣟>{NےP206*2zIU M襤>Վ>MƝnM)הD֋:XLr `pIK¦KR鴍#ͣ>"Thˈ`+cyq`^A j9i*c(PV[čAH q E-s. 9Ƥ^'L/aAP 7UBfK=cy+(8 >5Fh~u,foV[[+MMU-59y#<`K1G?t~4;?N ?XyB&@iP5 tQB&}_9Jg7#8@-C`WoeA( !mк V-jYnn74YV:~_FVu~\i<Njf5|X'EYk2]}WweVu Jր:O//ϛezNwa۵Uysg: ?LrVnvLrk)Gv\ӹl6$(a'pfğşLY?-yU7CW7VJ&st%(zc+%] 6CW7ƭЕ(ݡ=gtUƙ-ѕ&lBW|A*Jg ]7QWӕtut"@+&u%pu.(Y1U6ҕQW7m)VAP;GAW=L)]p >=>ZAGg<숻v@'WWsRx3S;0>UNS珫X,#PQǺܢ`>OuxBxZ )07@0vw9Rָӏgyrfh!r>aC^P32uJgHņf 1)]dYߺtku},F9or>'&or nیvyY#ps45ODk^c}vm9Lד>?MLUX4"S j|,v:InyFCiOo1w6- pޚ˝Y5\aMq2*%1:fjolu4-L%b:`h1r&~l!_WTKszzuP&rw@*0f5"{-EOЃb4NE5CcNBPmn^ز~<[-ńc=*ڢ9{_2%QД3F3Rg #T㊳NL`?䒿!jh*Oϙly"7w&)呒9*g@¨Ј&IqMiK6)ҬkpAֹ+X2Cs {L|cXFἹʪj6Q'K+cF-x VllI9Hnf[3KiM>l-9l-j2KٷUV4~_0aSI SfXKbk56W\ jg:qdTHn\8U"?Bh]=Bl:/zҪ}QkXmCvf0L,Q vIf Vn)Ƭƀ*2EZcC q^t^D`֙/+&6HLK`L&X@o1ٳ;<쀶 i0 mh,+gCq2HP\P>e(a $k! 
y%&C8Җʡ%rgZ$l CX 3<Gvc5v%Dh -JD9ܚ` d,Q"1hІ]viĈ\+I3SES |f /A{'{th?bE]nMhN'Mid-8!ce_.v` ߜ+7^lf acGO}`dmf`-$L>|@uPe0}J_` JDkk% ZW!԰˓]@O`}UBWE {0z-4`25P Xe8Uc`e1G T5᫮@&[-5x4;Z 1|>puqU% 95-QXw v<p5/Q$Q`]0r}7Zw } S|L4KV]-VXA8;KK~,EȋU}L!-,Z@J41\nV~@R@"t2{_mO`E_qh"rܧe FX4r03t P@.`OɇUkRvi ZmD,AMzW[7TpeŢH`_P B`︧5{׏zɃ?gqWq& yǑۮbwFU0#gwXFhEbVLyB7 ͖7+H_  ȷ="Ua3-%i;g \I"4 tI 26&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@Ǜ"┷"n;0$(4 tDI Ƕ$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MohG|KI $i3I > i@h5 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@:$Э[Jy$Lh9|(}$1&B>hH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 t9y"^q.ͪjkYt'߼*ROiE9>ϼ㷮./NI˪W*٦=,ֶ؅J6] ZZNB˿6u&hdV3je0)UH 32,B6:cޤnH? W,E6b+ͻiq*uZ#:X+ɝzI(Z@bW3ĕ+DNb*\\Mɻ^/!9]`+T6b%ɏ]: 7.굸B>;9^WVq*uqWsĕZX0rj烿EQz?2[f9R~j)wbw5N(uJLlfЌ)5̕u|pW :X%JJ|z+0\\tzLWRY A&}rF=*0?gT1fsrԨ";ٌھ*靐BT6j)dVn Yb gh 6#\`˵"\کhR Qf+pqłQg+r#Lmւ7ʯr`oz8N.\pjKWҋ9)Ae+<"+P]p5C\B) Xզ*Q\W^ʜfYtqW :Xejk ^+eC%ꩇɝz8n$ƩƮ\ق]Qd+l|6b6\Zq*(!vB+<8S R SW 7XGj?RiR (M̟w2#\`+ \ܩV'+Vid qe3 V%˕&\ZК\W-tFbJd+k\.bYJ,#q`FbtIDq*,q;)2 (lYUbq-ΜXKɝz;qj]QKX2ݡZ'xɝW#N4>NKl+WpkփZcFbd+&\ZoR4mvZp&d+<qr6u\iB\W֩r[֒BwBcAPF)/}U[B<ZS7ҥvd1ob 9aY3Cj1!7VM qeg+vtJv>vd7X[29;P[22$x(QrQ\ɛ?R94)/P|,wꙀqj-+Vd q-2 vz HJW3ĕV`NA Fe+k `|gU20G\Yc>cW,C."wYTW3ĕTrrW$e>b*\ZbF\W^jfOG8S:NTqQ)(3W~ˬBhvr\l8vƩujJLl+_pkփR+lW$׊lpjRt f+)CNƒ1JirTF\WFavJXԏu[.%)KI?dTDر=c}F{)TIZЩBV))) 4{!˵ 4TR-! uaW,3Iru>3rcLhWT"#\`OglpjW f+ht[l:V%X.jBsrW$|U_Z4]W^9k XNu`8D\d=_w(jhvmF˝tjIƮƪHV--%ɪ2 38Zj:XW3ĕ[tB@6bJ:XRWs 7L|t-dw:;a6CSoʢU DLFi-7*.u*,̟quHpNbR+Vlb WFI5[ X3b32; WV)9M\` *\\LU:z38G\9+_+5ٌ]ZkRt38G\rZ+D6\pj}T! 
f+[ݺ?\`z H{3~p[f= -ܛFY>qC䚉ǩ8 Ʃt`j׬- ،płW,\pjKWrLj>Zy2 F X]IU.#̠3B3y%} I¬e)I ᪇/4L^P$YXҋFg(+ !t}цxw=6&>7|~yZ7sUwW\.UsrÇ\^z|>?^?-R-@9*&^at $_Ƿw72ߟ3r/\=Z;ۋ_b۾| )OÒnNv% 7}/#]?8ӻҧ=52/">%{Iso|6o/uSsΏhՁFVS3(@׶M*FV6ucRk3:|:\=Ka67Ս>䀘 =PQ@NiI޷V`F7֛\e~,fډpsܾeρ}>_QQhu[%S^/~W mrv,7?{X>]=J .GiI&u5sOA=7 +˛鋗Wϫ_E{uwds49Q*\,*|Cu~˫(>߄k*?;p ;RB=ʔg?n.gs;jt K(UJ+zsTѴ(1KmT6!ZtNWwMkRʶ:gQ;#pSi[b'ZhcQz𭣾q(iMdBj0:")B#l&战H}.Ly@_TKBnfۆ5oӋӬ䯿}yoXy;a ܙ;+[=@N6Rg mum'4JKsoCsp|^r+0x#-7[os?g <_z7ewoe).ۋ5& :_>IzZHY[-MO{<[,ý?-Wror:Y;鳹I_R |@9=xWt -}%Xy?_03rEYG|31PMr=x7_N% `##bSZ,T zjHW{mBzjc]5I9vcli`(,y*j6R:0S-j/=:յggm,qqm&= 6%=ۿ<A&_=>wچ.oX~ܾ䖋s[ݮxuX$CM[roFWC]u5 %:afxy^u.cZu-wv:}{Q䎅TZ6}V6@ YԝM@+ 7uњ>iյF]@:*\i imX{ANF\mtu 4ۄ=OMΗ'B ͂aGԾ,:R!.ڔ]\~3 <)~XiC~#p{˷f3zr07m \@m eJ`]wM5mQٶ*RsVsy53D^pt>  RlYYBji 5QZ#}jzDJZ/ rԮ i%F{)wނ Qk5AC)[:QGK9SެVsuZITn@'; !>W=x7w^,W8"m4묯k[j 4cC#Z!v PJLӕѕc/^YM&Ix4c)H~RrK{jDDk'Ƒ\e(;ߘaTێ2mۈDgԶXbU*ooE$ʷEhؑcZжNl!&V:D sKXR^WV`TUOԎ]ksPs^qڄӣzFtI_R]rVF^كH]v[MnL$`MC`v:3k) {;ѝ]TVtԼA4NwVkH@ NֵNn&֖]=";Y/lRxOη^$oٞÂ0jr]46K*ַaOGYL{gקz=Η?>Jw5jP&Jgjɑ_񷝾+F]ooE]L~% 3LK1P:z%K)2d*_(,P]8&pJ!FmOp}W›=5+U<^̝l>zm{¾o~]^/ xP \L &ႩDQ -d#} #VGGO,H2L녺1_md}>M mz.q0":E 0 U-Ajb-7cx?yT(2WH"\"a$UQ 689.(n 'WmCf h+sk  uqE[0R0N m)8֨Tb6τCWUFZ}Wp^C䡤߮%ZPINd|*)z%Qf_9(NyH8 2Z pc@]o858ձ:~-ݧ8c)c  NRNI)e8hFd̫\z3ENtzc1*ikb NP*D,N|Y-nvw\jycm{Xb~=|a9߳nfiͽ~C}R\ċ,W6nɖ8+ckߺM]JZ9E֭&js_m2|[k[غn],[osR>Õ}f!,eQݻݶzipu2z^k}?̧-5Ozy=y˴שd Wά\f366/Ő2NZ WQJuniY׊wt6zց- @!B$ /ĉvoDhw{mg?7 o0["Y/%Ȩ FFΘ,ɈV @i%*$Bt[Jaoyb_Gg8޵nhkBݠ8[4ٗ!"/MJ2/׋ӮN!'R9CN O) eP8n{y/<A0`H6Kx&h<(wDE!#ᨬKNpQ3 dj! 4dGQPL"srewj&YZuե![%;JJDlK4wJv`ese=IMZ3*r|^2‚(J:Ę՚a+raWF撢2;(kjӍga=g<_Ɂ!iiFtJ<-F}LA&2(U y:<[oA,9<ף>ϵu#jbNa %@DCQ%6  6: fq}ã{ކ-<+8d0QE]kvc0Dv♁FW.IB0I&D2F2O Dy F(s1x1]%i΋Ƴ*yxXg:j#2gFkEg'gKvfwIFvߍΧ6=93OY9M8iTdƶe9IOfPq2T>L&ŲЕ=h<{3%f#~HO쑄#Qkq Y|Y"k%:s!ohoWa%28P֚_k:Zқ&yPӤaNz9a澕2yP—'Lv#]m|Fb+jfڰSh)VZBM;Eq#LhF{:.a5lBeٶ8c}.w5).u֙z嚓5MpE@ZҔtRN"@L{㒦]H>AMT'*. 
^A˞y+:g-$°Hd2'6KKJ<z,fߺUz~ #el~G00DIr[ ZEp3AeW(Gz^ QH$Zr@TVcRs렁"*RVaDExՕ4*ѽL%Z^˭%:*$ [9AC S.8c):JQ@FHQ$ZbbЍ%oML+8 ^'H`YR-2BT\fB=| ˏˊt$S4NuW 'hPxDt0HHėK@`ODpBLa  -Y = lKiCeH HAtH%E#uHO(g qߛ3AhJ{,I1A@lv%Iu:}M31>ky3㻣_b埕 ?F#%??o p0Gytf'88w&pJ+d]8^($Tl)s8@6<~('`FNr`2^r ~ Bl'oDCL.~ ׸ncR@'UI,3,Sr,K=x=JT-BGG7pݚSzl^U&O|\|\fQFl xǧg˵fm׌#t*jq m$EZGrHmða4Y>#ӆ8V)L'B7c/Ƌmedd$Fm`r5jLŸn|SفF,-1sih25/r.*q*jLUNrȏgg$@!y㟿~p'>}> N>?9|/s6|6 ?=xwzM?shehe\[q%7;_J;J%!]ֱBocFm4ћVkgߪdbDÀEIAtFYIxHo*GcģDg(F t664¥>sxb *U *MМbFY 0]590N;^9w'~NO7=fnrΝIg jpygj{8_Eqnf}_\ЯcdHJ_ IQ)RКJ2g8tWU=U |yP h!na\@?MDG-VR8K<+I^&%W'mġ;^;{ju˜ ذF&Plu!-HGAw?Hkp, ,KVlFcBfk?l6Pn.=Agi>R{~˾HF2{V/GL=FM a|>|S RZJ,J$V͠^uْwEh+% TH2AN(rAuQ rA"ĘND!}"x m۽=Y5Bwb.*ۤjx~q.?_J,ENJseP*Ź097A{#MrP匊lv`xgU*ku gm&0ZH/LE\318gJ; (#1yŵίg|us5>ET>W=',uTKe..WOGd#56M3ƸU^J$/ļMG@%iy&:Rȡԭp( *h/m C՞2'Is.椱SK5GuJ.%%xCmJ } {KHΔe2؊YuΨlP>HmM ږaydUFp sJBɔڬQg(AZ‡1K-QNJX.n f@!AKɱ@0I4Q2z Olz?fic.|7ƍu[ӓgLUhݺ~ :9 |Փ짓je}zW^6ʛwԜWDU&p%E"<fњc/#&[9&e6[Yc`FKeL̜20!zw49 STTqG5͌̌*qac+҆UkhIQ]_6'\Oo(]?}Ąsf7S 43,ODLRUt 2K [qngÀΞpgH\頄W"jf˄MªR&Aܚp\̶vcڱ+kYYh> g?QJϒ30PtyKR3V#cn*HbZY*7`+!E4pML$( DŽ $xpvMn f>y$T iJ?veD0###!IPba6!訌2Yl 162ɬ0L{aYIk(!J 4Q g|' 4;ME`>4}L18yQ{ɼg:Ӓ]yQ4̋ȋG^|P#K%%>8C5 R:>&|1g}Dy>lJ;vC0XMu]Z#J](Ap'~J)xje1" RZ)0˫8n |mI9R@.2g|.*1;?a-Z8&GuJ+4G&2C1K>;6R7Ǔu_:ɺч^u֐q/L,kKYV~+qL&T8*1U)MA{uƄg }B+$˭7![鎥)fgV]&dqv,ٔ.>MA\QNOg]ҝ+?}']!%~?`\j&W qD +;+cQ)~.R:y M0=t1_j~po0?|\&gyZaѾN+Gg?v)c¼)R`IYF߂p_^9c'gKxb[K>ڪ:zSŰk x[T}U_?]$70jQ!w&3h7E_؊ Ww/_r Uˊ̌:cgKI+hg jBGzt$#Zw0+UW vBRҕJ(!e+thn;]!#]=K2(%;DW؊]rBL%Gzteau:KWUvmAD)tR(F {rz`@t8;mq(E=ؑv=UBP!Bv4]+D{\Zbt|)%M YWu] Q }HWX{R]N{};'؋D? 
ЌGS:VQ8ҺriY_>2}zy̡Ǟ{?Nѝp‡r U`r:#~c~frBo o=/ɇh+,4y_}.l4x_fX)8dL8;q%22WVR ?XNJhB ]!\B[kR#]=C}_B;/+Dk[o]JJՑ!])-$]!`:CWWUt(Ms+Ct9ם++:Ѷ>ZWϑֆٻB3{WV!Gzte Jt0ٻB3{W~ Q.Etŷ=?%FHt8O`?}t\BYWCIy?v=%.+cAPb8B%C*o>p9O T[WԵNhB&VD_*i)3uL)G!^;Iwޯ-(YQ',Iq&jxzcL_{YGnz^Vc1Ti7,KJR(pڤq_AַU%T+Y)o7 nk?40zɽfRrΡ*ژK4r L]ic>c4-?E0oP5̏r\nq)YtR ,״ZJk"CM; lngEƚŦNWDuL My*ɴfЊ=mNNT/o3!'U9R_+vHRgУQ'(dc͇`n/g5iEl(pWRB[QҢy}bMR~39agqU;-D_R;gG4p0ͪGjɩ#&6JI %]1l(>tG#i819Qn!Ǥ6B󱪝P߃<1~}Ro^QB8yr*]Ae=e{3yx0vuƽaFt)\((|A3񞢾EW53 i8Xx?8) A/WCur]OQ[sC?uj2Tz\? n|?t"._rU9tn jqV*%$^ՖЛ^imѲrcf]ݰQix`-2GZ>_pތ7ϴ86,LM@璪_3%l5ƒjq]\N-[4F@[|x`n{ ; ˚yNBMk;jy2 œAwWtt#1xo륻7I*cz9 J`=9wL3ocdL2<:I#Ty+'>Q JfeU:.1[Mz^Og|т~șA[O);c~Y5Q Os) XU"5x$gQW^RG`2"hHŖĵS&zmw^P8k CH]H8VdR;# &Qkg ;j 6C'%%{7ՠZdBBL"(€g֠,qS)0 l-ZaaeF&\JUR %caQ(nD39RYLE Ck\dT j?XGy(e3@fih!\=+AtbE,N)(J892r^j YRH( :j84vՕ/H!j{QQRA}kF Y̼cIk0 !ѿT(`5R{%CࠄfIY"1A! db9WTU/_ȠΌ>0xr#^,KTA BqŘAQRt8!a%G< mgMx[hox|yS ުa}Qwu`mFL`-$ >:gAuP<l:@G6^&*fFVW 2r i(yG a?X¢S ʃJ I"92e(X L1ZxxLPBb2$kuk+w<o 1t|9¢JrC#QhzBb9¶l^ A; I__nwۋ |S?y'kWمX#W`>^b= eDA!Ayh"! 
[Binary data omitted: gzip-compressed contents of var/home/core/zuul-output/logs/kubelet.log.gz from a tar archive. The compressed byte stream is not human-readable; decompress the original archive (e.g. with `tar -x` followed by `gunzip`) to view the kubelet log.]
s`A Y(5#_ %T9YAe!Hf\(95̵E$9%S)lÃvټFa*&u~n\cĀb!Nkɋ?hN|n⡠*)P3 DGjP}xGj3g \( W ˖T$)"` nxuL.XO|zm|44}2gc4Q\%N)w땘O^\V.̲CPܲ6=7BA>0 䑻8bd7]?GIJQwf&E%wVUD)bwG z [HI Ҡ2b4OϞcxAʉ V: VEQݱQ\EQ6c f80yl| VeJLGgk%QEM{4f/塟zu-ߍQ_ a,>R.Cb;sM߸ BwN:rq|>Mv KEEEn7dN \pl60s˸C(g"&Ⱥ)t8&j^+0L 6t5P9` y$!YfQonxɎwv+&ivWLN{SiifF/mѾ,Jiej/'84̜ҮY-|8WOXrG~+)H%z٧{^IUKda!${e9NJf!}\ ByZa&kmQQQQF֘CRPnsΌM[,v(<^b%@ӜkVFp듳6T+knG rE)^A |z>飘>rz\cs++sh`cpհgdUY !Ljb.5'5?Zdߕs]ϒ.$z>L!QL!Q9|M@-$-WRrn SDssefCfҡYBq<ߠr ź0nݠXHQ>C8g K9(_Z)= 7z& KǚT@ӤA@$LY&݉ !g<{dEQs@!P 3$XQtQj KZAZ{3֑Lٚd{BkdN؜@P8F屼V=,űgHJEyKTW~Owv^{w2Ceƅ8,D3R]]q-8Va%G`oRoB4]`>WJ) !L⠖JD2un rYVS6bP}}b X^]D]uECJ;_{ ,LǸ ^w:̼ wv$p΍'{௷qr4 &Oa6f6{DX˛`Ƴ_.֌30X*>?@lO`暕MN: +x1 b-KǷ3yD@y'J{b2__|?ǧ10` j6H̺E4'~Y=R/sGZb:e~3tL۪Ose ﴑ/|fֱĬRUio{vMLο}w3XYLq~EeČapuF%-P]aj;35qV΁Ӝ@'K ~;,@YNL. bJCjr1LST&Oj ?'Gtr~n<>†?f>߸:Vt~v y|YQK w/^R+XohpBd?<Al)P4Yh|CbCVvdO3Ռ77˒FO:kںuuK,c&ꨆ_owOɳFMWSm 0{Aj6޾_4{I!8g#GDaTU,_P&Ӗ eL.AĢ[/.?%|y--Q.M%B Ƨ15_Y-4^Gx@&zg  ̧0yRd+JKa)]mrh(\ RmFZCN5$_F{`*_c _'YڷUW28C5@ތfCz9 =m5-·5'B3߳֐_0ӷwS2ߟfa:oCpsJm|l~l|}qt=?bm"prݍ}FaF%.5lѺ'&zs<&RJ~D&؝d3B=qeda~<"=:hm{3~ ulU\D}\NmDQUTsZQdU84 IcmQ}pN- O)btlWj&bO =ϙ,^ $eZ уX XFRQ/2VF\-=5!)s/:9J25gRm £6WE$?ɧ|^X{n\0+ϔӟREE`n> f&NfRSn40.Z_jc mdv.q.-Ƽq@W~`x$RR Dz=hMšWqi~_ڽ"]a2sɟf%*Ս\Dk't*T1kը~z=eRbKsuGH{Y%N;Yh@J4>KPi0r0ٴuzEڡŖR.T$E#/{;pw7ګ¨)w/;h{RQ^ǩE yot:׶ iٻ8ĒL ) h!Mk_V/f >ҟWAǍ`m6\q&uT Q:ZD.()<1)"Nox`2}) At*֠(E]E物 JZ%kmq."dQ|d*c5[U4VkYR@R"2$c3C01Je6[r0˵5Z.U>TftW_ BRHwʮ"ݱ '&5u!RyG.qΝ-q{: ҳ"[雔z*g sew'nIj1%"ZQΨhux#v(rXFn[t^nGT1iyKJ>3P a^gK껊D5Hi{)+ה9Ho1h=HsHXL,HܸLYÔ#z4ZaZoExswե;lIdi;cUs&th$Z ;K^s){0/E>r۳t%vN";V# u\PRh1 QZLU cafS/I, &R6VV`Y:[MY%#l_ɳ sy%|q)5S#}-S'BN7 9e9{o?5XTn}0VWԿ0Jyg ]ӓpqCim# 9Qj?-c2} ъPgBh h J\4d`ŬN" WW'X9ʫom2RJ Aݮt3R{)+@HTVi`M;e|ñIBp$A;#ZHj ))QuQV7`mL5aQv(kੵVUӽh%u5MycSMylͱ)XMyoq?\gy?%%ЭV_-.IX>d%GtA̝v\2ϞjIHZI={IMdGsb'Iqm)zmo ?_j]M-.8_Q@<#[yvC[ݞ|¯9Wht>-9(k0-yaJ0r :V zA Ir[Z!yW ?vZPȢv '壃IS ]mvOzTJX uj0eFC=i)5P*wR8J( 2D)KrM#Y HC5ԶPM9"7h2MiOB_nEu ClR܃`ƀr-n4R%ɻ cK`|TĈ\_KWOk!I3fJ"]ևL u]]lrtM2Ԏ7pC 
޿˞~Z#*U'XI[kTftboW|C/cCЇXMdG?6 yv~_GRpb:ԤZΓ+P;+n08P Q۩˥6]8GT ZzcTzV A աA:11— g L0{)&;qkĨ6cf`6i+,.z~,! }Q +7ПgLwE}>2ZJ+RHOf˧>KOEp1P$f0p`Z^fT:$+'(Lx DKvde=WŘw=sZP෿ sixE9vRބ .O @A%_PKhݬE3c@u8Q]ע*HZX420>XPWFR7XwY;uȏEW7c2fQ¿~f}Vw<'Y̒m)|z nhs O[ShSmp:_-vCdϮ^p{}za뒐u'B!\ڬr-&/j.7mCnĥ.FJN=F,+|Hv873=emL2;<{{vUo9?8z*i9Y;Y B:'}}2O<>QB}tkf IžZykh\wٮ̧OR\mxUto=kvzp],npJfa5@gsa5!:$<>.±o1K/ 1QnN{pci9{.u8)6&{gp(6W':z;6jY# Ew]CWsX0ז fL},кڍ#1eXtyʔN!IWDl 8F;shC_t hƐڣc(HeUoXp<2V,_t>KQ#oaA&rdЏayt#jk1zlz*THS{ܞ`i4y}N;Pmq{WtB <"kD2 ;7 nj\^R) kXNrC׀$z(\DP\hf.G#AR^Dgr*R`ҔNia"K^*m6ZR6R+ɐ CThR2;=,&u_,R~JMAio2:'o ekX r9s*`Z 4a^ROOAl J&dwP+@22t UCBQbxp$}Gܶ*mJYǩV;(u:lRЂ7umX\Д2"vƿ kǕ2^z$*'ϚlﴻaziUm  a2F8Xk[^\G#P;`,4ßHvr,R@aЎPQtO{}kaC Ks+uL`֊<1Y X&0`ʋ I ruEI d,Ƃ)hȲH1pr fʪZ A@lFHWzpp{qz2\ [(vPaRsH/A[oIƌj[*0# J2:m%d{֡}ikdYoa͠hk*Kֻׁ 8; p ?*a=唝pք5D 73 O;4n*]WaKc7JXhlHBQ QXguO/-1AJxV2&$ $LXoֵ.ڑ?8d\ٕUQW(c6+)CHx-et@Ign]fZV6 7](r`dDFaF*{)=ۨzG=1bF ̤/,KX˓(qOU:1 UL_c,\ 1la'qɿV*d] y_l5m9v>F̢!cPB,;4k1ѥĠʬKm#6 ʞǸ6ȑR%4Y](%$B)tP8K^[g Ĩ31%,He(G-2{ZΰM̈́)RD$r2.\ZD ǻP $,,]{[Jr% +䢡m+`,ZV&& I ^_VX+8pT)i8:[li+em>+b!`śϯ|ca'X[a^An}AZ qGo0ۑV(Խ]|ob4W\"_'Ž2'U:+!GizaBק;iO ,דٛ,WnLe!QɎw\W%}˥26/[*S:fҎّGHPcm?fKAQ4XF@QBy=H^͚6YR9ޥbTQEtg]P*DhBTWa&OT>:=#4<b9H<- {E$v%tI$\L)%*ؔh>C{DS+%6qFmS=J'c&RqU03WmS= +i2rje&uJ,ģv˘r[ |x;vfx# ȞoT/ziǽY< {7 _x⵺jGQgS>A0!-$8M>ir5=n\rۼl_irVG#Î4^yoP N0ESpd5=ӞZI^ f(rlSZ`M b49{nV N&XHsnEbmTjE?5>R䧳7,9pʵYc-lmRQVݠ{}jyQVkggg_;fa 5JMSfi]+Q{P 0_WSIz?^}vYIѿ W始 2*>\߿qӋ]TWb]dLO=-d>ES3+wZZIy*=ryѴ~,K_Gg#O=1`>Ks>U 4X:kZ{9.0*otQ.]>EָczhЃjwDAun<Ds*  -*WE$()8 (TdYlCֵU_nPm>|*( ɢmY9m]^o5/X[0,y ?g^̭m :xè-inbZvS]Y/o+i3ruN^M@ϫYM\zMEoY9גI\D{Ȕ]څd ye}A2:z\&ϻ$Z&UC w5_12Ɂn~)@,="ʖ-ɶ=^RlX"]{|='X4 ~ΰV8g31 b׀,gsO%rsn$wݲ$h&' Nц1WH`,:^q,꜂R4Gڎ7 ?οͫ¿ŎoVlr긿-6˗be>3_vK.e++ ;!I*"Cwpo%Y$U Dl^&0">R)OgK".AUBk[UKSQJPt8}X,ׯ՘ vLƄ3{E,.NAǦr/"%#fՑ/ExVԋUwgbpV߬htwP9%ǡFuerR:!II#˪.~oՐb6-> q9-%M^BΗeot钪JAP3Tm+k}L=(/+gp'֘?^Sx^y50G?:Aۄv[K+zbPX*Ϟ5'D8NZT. 
eÛ9LăYkL`2qҤ匄woSWs@:9Y>4 ȘhZ %qgh.tlpP|=Q[9ͼ:q=|lݳ&>`W*mRZZBUXA[R.8]r*ߍ?<5y86uSB_vi%x6qͬ^UhLN};MPZ2"{Ya#pŀs0jqM-VVx^׻~Wçw~TBh[!7$NfӃ[37b"_4ZJ}ayǍė&L +e |(asooэ&t=W[>Ǡ!p1Hvlï}&ol F1)wwu#h⾱\^C?GH&(=>;ARZ EY-%2[k4F'+x!7\!.<-#*ƼMfBŨ&GWSSf(;4ٯC=h B>DkWI2V5N[(6JWiN?fR K `,V2J-J5>D >7υTsvIiZ$5TM֜1ńQz5]sƛv;^NB*k6A y1bɗ;aUa=X#PDLP`0:>~![hA8C "mCtGD-A %2FHQ0`0B{adڸ3OS4[:[dȷ*1ݬV @UIj5)CEeLY=ݭ"n:&kư.![͊|H>!ԏW1$= 6bŸYV(d Yc j>Es'R tb6m0vТ euZx9US:iX͜jE pA+xmmVRV3$Cxe0ʖgVpkT .{ YXbЉ5w xCo7S<]u+q) RUqq(J*щ5^96voV?uNPYSm[&rm:<KŝSbP\&)Gm&拋!* ŝPp d!2VڭW^R343WO$K9%&LHZtaW+K${Q` ` YaMUR[e@P_+nmBt>qkP:%J'~]Iߎ 9u[m/n._0:F`Y5w;/gˋZo~DӖ~3_m&NPo?۟󗳺p1jj1xw`6?;~%R1q| 1UNL=Jtݴ 8Ev;/MEXkDeJk-U?yX a­!Yp2lD7Ҕ( YݢrKgY֋O6̐3yfARXA[i=9Hlza4r?sz&)ޔ>#*Ζ4F qQH+n.j\n /"ufMnmc}2ba;wI*NkT*/%qLl6Z-+/w3t,zйX+oJlm+<S_?ƹs*K\d)z|ITxϣxzq5R`NNUBԚJL+ߦϫ8d-gw O~F*V=dG>qE-c(SkAPbʚ 4/m{;[g6ЂuVgX⌅I9Nt\(+vW2):'})&R7\l 9S%G4=)Ajj[?OJNv2&> mKs1H&8NĞ:_ K\g TjHY",1|UY3텐$[)) &@s2(yCϢF;},ݿY'Mޑ{iM#8`-0e89ɕ+1yG;%P\I]Q7rHfOIC%~;Fhgm͏:7\;5 ݝ+Xt \;•R;S%ct >}wXoV]Yn9Ð 떧^qID p@jWS&h~,sTU]u_] 5Ne%%E1&u+MP-J4zk˼ij9;$WZ;&YJAi˘|"{'s-sƩƲRq蓸WZt1cW^xĘC,ZE4*'y]s7HV6EYBYtbC̰l~k0o诔O" s]^@)%Pc]):yY"rsyWˌB6Wci*S uDv! U1.Pm+:Gv@ j8z[%[c/zH Bkg(\MD&"Q$g R7.g qP8&*吩B[U:[ZE cxqLOs!%2aYC5\SN<$j~E:-I^z j<}R1MA "R#c qqɤ֕[m c*nDp8ek 8hm4gN/lց% AzៜG9DlчvL[4C9Qd޷70O3%<޵6q,kzs$Tשص5WPE " B+.j|8h=ݸ8[J4RJ RsK ^U{ooӈR8=]^g[#'hpZ0]"1 RYŢqj@ AI`:fl=p|͹"(tiCLXZ@}'ؐLBяQ7^ )c%YpZޭԥ5;k~bzr[W+ʥ22bIa]FI Ń,o%Xj}* /*è9>ʴD3vr-,O09՝kyɵׇ\}3"L/8 V:cV۴ qnF `zSAdX7@=FtMw'<=`No!}6~o=c<`E4pFK#nyvs -CPvMiNR PBFטJ=muxyW+< J.O1+NTuV8b~ӻ/1fIcm7.a_%SR}O~㋚+NG`w7zVټt6q!wXr<ϝ舕3~iJ"QňV̄ &^sa6pʓkΦ 5zٴz%f=k&lrBY1lu \ݪVVÁ9q_H˭;粓9d O4,J,rb Y:\s(!1)TZ0aalEr$6Hb%|6mYPk?圖۽ٝŌSѩV`z/׶6׺kpTb)Q0i#Yf,.גnٯ2ݟ.ͦĄ *ʐN= s$Z(޽r0[DH$XkI>L@H6KH6KH6KH6+#*n"̅lCCČ.ҙ)7^l yE9f^L8Ƽej]:PM՝O .YX/N5s3zS|_tFKV3[\5hxj gٷNA?W-uw?MӪuY0'nebnk>8)L/flөw1SO?Ǔ;늻\L$O^}F]?HI!ɟFƅL!ɱr]/Ul_~Hmcpnϥ+Ow`x8]v! 
V|;&B8pڜ@x(0{xQs?9k`C{g'_aF^z;<bϳtGb< SS1 ?^5;3閭Oiq:+&x,mÏ@g΋d7NO7F`>O@egֽ{ͭfPXbKM=P;k:36ϗq3NTB;bIW+:2{A>#'UXO~$H S2(ʴU(S \ gFpt9AkZhk\6e 0D׬JYT֬S/y J^2?=O3lxkz]_}6{x\~ְ_{zPvnm~VEjs,IT8OT8/ .s,!= ހ @48B1"xK qϢX ҈*s,Mjı4mv0|RK\J\^.=WιD( h΍A#o_x CQY>)WjMjMm%&?a^ LD+-3H0`Q!HJfR[6g^HgV$ÊdH܊dX + R a)eE5<9հ;;%)K>R|#= p$YF/qmm6Q F"$0RXD1xYzd{Wp %HXT_A$\A{|80XɝpNf-%Z4+ӢO){]S#TeMlLE5^ ~?iJb~hu:5[Jc8iF , J!8"mZNWVZ_݆(wq:gȎ"Gk3i T2 Dei/kwp4ql V\ѨV !\sAC\-\3"z^7OuzT7Ouzrr qdR+F""gR*Ky g!d]6(خi C?@]s<#5e6ADk1\PL(qyĖp,2(O=oGpJa It>aIF-ٰf +J6C':z",p=Cz+Y)(q,4dEho9V}VϊY>}INg~~~~2} GQ& _/e_=UF>٣aOV=c#6Ñ PȬ>DYБ oT^=KN*32>TVi ho+"n6}d/)KZYrhVyMZtQAnx%/5ZJZWYZ5f<[fe̡R\|}UoRiJ3:  K̩DYsaV0hJj 5rNhX(ۢxG1-{ep 0G9V"4(;uC)}N[|}`D.&mDbr} EʇV k){޼-yEJwMv꡿輅^ES="eכ.R4+b8YƔLu-R~⺛txm9"S˛+S5EfwuCOQ3Td x֬@ձfa`&7jmKL<a Gr# 1+{Sϸ-g?wwK͞x?>;/O \)W6AD"AåͣIihƤ7LU4X7 ΚWW4H"JSA9#z\Br{!4P.ui^@;bPRt@ +J`!0H@# 81VI6DI>*6svf_ M?umMOi$ǝ?gX?yUps> N@nwɠ}/_3h_oL:r+3o=p?^޼K34\-N"~xI͍=58">An۩gJj.A씭.EY/f'iMc"/8pF!xxas[e),V9U8m .)?NQ2v)ATxB9ZjALV;N=Ӵ@x6j X4Sct;A4xC0Q!4^{Y_ cpLo3@= D j^F*Qԧw=|f mK&{'4Ou ckiEz5EAB (z 6U!L@ T=N'({o: ̄({?΍mZ \JV] 0ra`qz4vN2ZASFbK!'sS% jp@m)d(.Ztt,|ܘ5E1d}/T1SNj4ׯ+ŔdSFR7 nLoBIi:,!n@)m ?D00h&H6?F mz^"i#ܲ)X)1WV8gZ^QYcv,a ?ks_*ʵ`h1%r+٘~f/ ʇf]ͥ1CtjL1ZTXsK# D-/LNONO\tl&Gڶ %-8 C> K#x81G2i7fͻmpd]!Ġm{!2sMX:rz;y۩gԨl_'o4R?L%9qE 7]78;LkukVB$n}C&꣈lY@.H )fILq, ^r*{,dh\wpK7"3#zQDNr'i|};I|6#OHЖ8B$x'϶Ͳ ]|ͧnHvьݫR]gr0> I hA֕∌bFD$d͜w!:3- DB0 x4F~D~d#<.&ZO( "4q8ut۬u uVaqeHOlQNpٟ8;HmGs|niDA6( w-˻lp"CDz/&˲E'~GRb<Ľ3CT Ͷ4:< |F4.8$0TXFn"Nl5C=d-@)}"?v~Fiox%#t$nT?F)H|fkHD{C)P傃4l(-]&ﶏvF/}8paSlQq&Qq ]~a_1Q|,xzrGS`ڀig=^"ikoE1Gsw"Dm֚shE%>VOpo7\4NFS~<|~ @qgoߛ+˓kq\'sx1"SrxycoL|$3"u<92^ml\0J՗hx77frt-c57!FdrQgݿ6JaHe*apMEiC"8XJz'mA{w>^$ M~M \ΙA7ZiHiKi!pA) q pXL.|(iE  ~asۤYE}5dKtncl*z/{ePN}SB9>:_f E%Uh 9ns 5:,H~)8cd1Ԝ& 0KD~/t .ci^ ݟ_ȌFhHyNm3D %G/sa18"vk1Ͱه!{3JZU +*:8hT!GͽanNt! 
EiÇ_)׬5r/ƃ9-M1{nIܦan!_ $SpaSX 8)IC ^ 5Iө]=^EHreSG]m cE:kEy~nwHt)-[{bο i7 qA,2nM.}] x;Lc;iDXi* @S1V+hqپTan JkSj&dl9a[l^۩gB4H!ĕOsӘtQ3&w#U\C9IK ;0;6kW'0َ͜=~8aZ@6nLv0-kiTBk&h^mVh7G&P1P,(~.Kѩx\$g>VQy۩gx:*̀zj!*t<54V(S6Hkd=e*Bv/p6(@+Nز m*3-a&l9Ay[?/v7ĕ;NݟqY16u+Ѷ|LeTw+QtY]-YkzH J-Cb+QvEb-|p[e neK^Qq-<.CZ6d&c Jסk$my) Q\K0P0.^^֯D,ܾ;j`|kPִ򠱢+<FL T͏hp!kQ?Pm!tC !^N8fj1` #]V8l0!'Khޫ%43[c=l)=IJ ]n[zϺON< "0fjq][,?i!ؽ9KNq1J-6 k@gi!Nѹ[~H3/ߚ;N|ڌ37_s.!כ&uiӯNx3 ꫒? c7_څ7ohiG0Gwo^[im%ߴ/__@q#Y/8_ C] g$@ii"K3Y~u(9d&ر,~_wUuuU5lX?8h(E%&RIa 11wPA7;㌧߶:Vٌ/v?/^E ›3ѷkh 5XüOt40ePDp M#D: DSH 8۩veTj}]">!vywgCx C edRBIRER(ǶG'P,N}D!)BG .d5TWvy/pQ<Ϝ+4M!6S/3ի^2Ů(S_aX2R]k֢{NDUS 7;G.[ Vzå` OKNH,)pL[kGA';E)e*I81ȄD 2 F Yb<3JD,sr!5VrQo8618RjIQLa8Ŕ:MRc5L MrsxS|1t59Rг;),뺭>:a8?JچF-a/Gz‚5¹Ѓx=\HhNsmD%'AChBybؘLa{Юq'^P3Ҷ?bHb3RhlopFT@)<#=I?'+jSpQdFj&TD Ș؛3cWa&`kpUwҊ6\ 43 ۇ0*?Ihg`)&pO !LylwNc6 gkjl7闅^Ko&z7ʲ&Vy~{$p2`V⩉/lKI_`׫-A4K~(2EvbdD\޺KZ"?9jVc>~}dKaR% P&$dtcF?y#Of2&6A>ی/.P]]/Գ-AҴ6H`'VW^D/k/Ǐι9\~ @(҉$Laoxr~^+ʈJo8?ppLŎ81(>^deLy ,f1g~̕hJJ+n_ @c.N4(xlK)Af/-%dԁ%D] %Ip?s"QL"1BF)G3Je_wYfOΖ s8 i !EFLБxs924BF,N|d fl%fE$n̜xFo4"]3*RAJ(`&HCۻ,"tvͼ/9*\ƾH%P bc*m!^[] EάdzD9-hj~EUUXxěػ*ƻ7ײ!$k>J.:ԟ;gpg OusY(6DM^CHg7lPC4]tnl؍NCFLBDwSL Leb P4 ah7 !_jC) ,VeXFiQIDd3a$aFDpU̝ P>Tjه_??Q[ 5jh5ja^Cб (L LX")Fx4z5gS?eO)[5r^RR⽪*/۲}\bH.UiG[o}+^"Ĕj!G22~I2^?x Tԏ0FnbbtVD5ğJUUp*bneTJizve싛 3 J85yK]3'ztL8NԜ ഇ 9 ."&<Z JH0E5_/9BxTє,U]^Lh7X)Gs-n}1%7>FƗ ߇zweCK$Ys8D'ksT&5,`B$ADCR"`qvMnse苓2nۉETou]:smyح,đEuC"\t; *`h%9VƻhEWG{cS֜ (9GuXhF8{c:8("@z(p y#JdћI×Iwվz:>0 K K+f1 O 췱)%)ۋ!U/fPM^ZY{lӡ늈:HhK4y5HƋ4 ""4B7 bEǻZރ)XrxoM^?dv3ZC ,Ybi:B2{B븃xETrp,'pr^a+~f9,}BVvY[7 "KZD'v !Cj{ҖyI+e‚^EA^JQ Vsbe5WM0$f#q%'F3N@ӎEVJ콲C55#h3_=Ká$(uvG D}#ޟNgb˵ 7 Q1$e@Sīn3u >?{Y)SI?WlL+\xlqyp"}]ZE`q06)I4ЉԁDiRdb >y0X:$nG %.BI[S]q 7{̎N^ 쑜tg Iۓ)Yt F:}_t!%fNFmȿw[..RُВq!z*.5Ḧt֮LoVGW6{Mԥp@v[L]Sks`[~v4K;4 y4v̕QGlswnIllutItRg;27{oZjm#0nF"ygS)'bFHcn;|2ۇ%.F^q} ĴuZ*K-lQ]@-ྒྷ^/‚{o.w*-%BSNJoAVM۝n`K=TI]+&DB(SQD2AUSN L36#\L|Lh Gʼns~iSg5 .#sM_ܦe{qqs^F}:!JpݨtoaIHMih\3oQ7{%`?I.!2݈ӥo RByէz8Y*֔oq 
|*{q[}tꭜB;JPTWuj͞-Hl'gGe^du^:Ȁ+W0 42d &4Vvs$0Ihn!08D'Ukp;<~v+5^zN ʳԣ}fJ8,5"W&9x4Պ;1l<}<<҄m3Q e!pqS)̕<:Oe4N'9̏ƿì]X`4O^f0{/هneLJL=?QIF&d_8 x>ģx X,X,Mr"A4K~(م{g~s[sdYthX4 6,|q=&fo)r:?n~NL(gR< a8f&$s߁)OְHˁ\Ni6]ikHkV= hl_VQ<L~9~D/ xhbN o L%Z Đ:!zCJ9 fx1< D1T$(iD)L>(rjDȈ$&"4z}W?gJ{8_!e،!@,mya#N_6 Ғ[=$%65:8]Tw]]yj<G ԫyc:z 1 {R|/۴7so+w^avHDYt{AL0A0!RCXq,%\Pb{f48lVHc5N0Ǜ+^ɓGՎ*͏\IiQ߃=v%ָ-%xb$Ƃm :^fuxkRfݍnVUc8ߏŀϷiY1`cGQ+&<~hhfA#"f88>X+Waғ&fKoWb]~jH?|dL/U/n3[hmBVVR:Jfs+9~m>b StjqVJz+ͧK!I'lX5$֜.#K AnDZ @\ـJެogs}OVFi(µ[ټ8К4F v:cKeu,>o},:>jQr|TY8bK±$;]l{av2X2excӿ:ݙk+Zc×LGmݷ^˛hM7R|EUFT&Pk{l2v#,)oÐ3Ie{NALvZgOms.TuFFNS֞%ItZV-b#7o myL!cY-f0 ,QB>V!i.@JQfbpsB˩l8;t0ri#6AijaD&l`)hT+W%K2@霱b]=~-c !hKE!5!XMYA*H)Z[ôXBjf^gpV}^/S.1 B3ia- +ʼFXGx2TDn+H* T3W|9Ʉ+eG/گ:R-R# D&t;( ӱY bE( ?g~ -/3ג<>pE.OLFt8 E "2 _~|d:_O}gſBſ;f2K;s~H N%G|zwsZ`68 (+lQVc{ G֣{Fך>cRzZ $J4#k%|XԖ^s7iR j.=.KJT.D*FRj[L?f[vzR$|_˩q52 IAu0\6L8p\L wfzhλ~{|};5Cxn)-T<״|+2 X7I:Z^cUx; 6ڇU(*A7fd>QĿMǿ|L.t0WRV&4omX>1L{f2~݄Ѵ]g)5&PT/0tR(^Ɵr'zp+ib~SdzV $\0d]k/R筻Lgn.SD:ųil)wH.V]ګ-z|bS?\%㧻_ɯ%T|PewIb,/L4ݍoctk}η99ѣ¦lr=t̂Ort'^3k1{ f[fX(#~1 1f"ϱ7\u5"LyGazWBXo~2UŴ0Kѡ Eo 6-M?R_Ͻgsq?Y r doeΒ1hK$g^0Tp-7t`YNZA]ZwM.utڬe_> C\Kq#.u^C֙lفD6(zWf-@t˚}u؍} OjQHͰdkGR*"@Kq !X*LOEaVJoj0̗u1I9?*Q/iKI!)&G!IԒ?~JjXJȯ7VUG|+\u鷂mWG\wf+'<~MZ'~ї3u9KKLw&.c\< M?ȋPۅwތop{@Z_V{rԓxTUxRršj'[j ^S(Պ-NNwbP sϞ\)ڶn -V?a7_XNBt* YwW/#0Q(VGshe ^nM&J#W6gRZ "3KBnfϺ|嵗εW@Y'aEY Z'|uM>PvCWu|7ޕz='M-}C:I+0I}ァĉ{:݄_fV>'Lh%XK;'~pTQ}᧴cD'鳝\f}xII$N@L3O2*WFzMy[j-vJI*ZM(TݣD驔jw 9K$*%n#/χ8?Dv3޵#_yD/.9wW%;u D֘c%OH(活~屲B4S(@=hf\K)\UH`­].eRZ:r(4CnVA|EM,DbM,D5SPZEp 5?yqFPAoÛ~~a4I({?5 hrc&{&. 
dpuӄk<^H>@7NK龼jw.g]#gSY:q\[Û0xV(:7^Y4OcBnXaϒX2JUUcNe񐩠r _ `ѥKxݞ}=G>6) bL!5!^@wj 9G0Gʁ@F/Frz&.fʪ5}VL4_z`C%^J'w1bfW,{HVb[gIXVpcz2_>#b [4䢹wW2SNR!0r0<1f w8yO3~sn4Zcgg67kۡ\Ó8ʐL"8Dҽ_#3xdf GfʅcND[9H!oH=y ܯ WRxꂝ,w&ߞ rfy:'|Z:Ƥ5K[gӢ$iݒa1YOQ7 L@!A%czH_!؝%Wv7y]ҐTwY"YXUglBhIH5fm^Dbp[]%`Bay ߥ_=}oNT9Z@ 8$gӽ'e^]̀E)eɴj^Oy${2BH^DqK@dkDe=[.K+AETppQDZЃ^+Va)땆n =dv؉^Έ^CݲZg;%iiB<5Yo<$+ OO!&)nA-[ˠ ݠEjx @lxfa\-'|- 5k"ƒoflEfKQVx=ޙjA։'7zzΠ-~$It.YEr;,Ow]fRE>+V!DԤn9@#b+ciBZOiAD%EtLcή'c3g JG RFCO$wwDL!!"C% &r0Y&EOM-K;9;!&TЁ ᐯodRZĢp{wBhda  8ФfЩt4#֑yR GALj̺CfJrUA"hô4XҶ4pÁ1[iPRkMKJlFa`FCd%%5krB ƈa6] .]}2*cM=d,7C2`OЈ"h‰U1PQFƌҴ!QIZTۚ5CRFU r1$ &R1B/镂C c-7G勍$w-I@kZi3ҦIEu@%ۚ5IͲ[iʭAfXk<$ۙ]Yz,y )2d7UA0Zp*@"n2Qg<_E[eѐ%ڊ=uk|6cYqU6ӝL*ۥ2IVjzu6kn@J-_Wj i,e{H8 Idd^a[hAN4Վ#S%K1{ `uTF81TF/]CFD6x5Aa n[傢!pik%fX/].|}ZMnܩɝw"]͗׮BVȵ ɍ~3ݼhʼpsRrv_6egތf'k ;+4V2eE姫tT澲<}i1rU.Kw ?+ ?z"_r`w*7@n<0|xZx0#j钩j[2=|w`Nfmِ~wek~պܟH jS+Um#IbX0dnƮ|,LE,qғ󘆑̌h|4ysFgQpZ=oϞ5pmۈ%Γ37Ő?6ɔPIVpoGtY,oV: C_JU7O!;iHmߌ;)+OY|ʊSUV-Ix[ǭt{RdPNܻT8~u+!). CRNY57T̕c5}d%mN ;$ }tqR' R _AM*W6ZZ^D(!)JH+%AxR`:VP0@yr;lmhv.횶{m5 $Vd& JBb410z g\|L;c .{9֖3& `5h Th ՅwKltU0E2V  Ж.@ {rmWevˎE$ ]r =B`PiIYH\jV ۾p JQjOd`쩢 )ʵ 9AXq)fNO\vkc8V=<4PT'8sĴ,=S{=M|mrfyo pXA_YQ QɓҸ>ݱњpɖ ɖn6|+PX}w=6MRuHvr_@ Tv҆LT&]>t42sc_JtJEOjzuԺ9ʚPĪ`Z#1!F t楩ľL.ތg7_ʯ]}p/*-7䀽}7{}y_:kFճLO-(Ёl%6MF0e}߭5¥:enku- \UaudRZLW VC)HUΓOBy|u#~Dvu(PfI K= !5hOb`j5d)#kp7`2@:P cż GV&M%dڋ(,=B}7( |\}r_4~#Nla2D^̞?],.|kqH=s)"ƞL}}'VՃhO+gc{都6E [hjg6#BP_PG];G '>j?rfR=֬+ fFGuI*ݗq pCYg j͙3GT*#0b/Ja hfiӿlA;%A+~EZ0OjN&r!mXD_oX'[kNxBE|`=cE.1&Zd'L(3P%iē&oϋ5ȦKk{Nu';.v/a}3řj.?{6Og+we'`l Nc^;CJ[d˱ L*Ū(9 2|Fo"`1D_=DլF| q˹W?썐W CWc7W=:yc ^e9Kyi|!kяq^Ft:GBs BEG'" H^Ӧۣ"M KϘ߂fq: 'p: 'IJ ^k#9-r KF1B* 4g+BILt| ĉZ$Nb JFŋ1WTB9 s種8b̤( )X48 ,* lMԤԉ8u5b,Eb-?|)fكmѾMȂznur_NDxG%**^X¯Շ$2N\Yƙ uR%tQcNlS]?6ir;v'Wu 暌=/3˴ʋ,rmF Z$Y5…X5 khMU%;nfuE!^T`Wز<1%U)xA8bYUNYT#r8^f-s;,)ᔮ ŴK6jB7A"ZE+&m>a^b#-X!Yh]%5] uṷ-P3PJU}y^{ܻ D~Bt)JC=m_,߫AQe e'&ZffaG{F*'%T.'t@gO&9q`]ƅÎg9ys)(W;CQ- 'U %q:I`v@ޔP, zڜQRr9THbʖO9lԸhe:Y/Zy Ǭ筽nk[vqLbao8~wO'R! 
Z'hz }n`Dd.#o͖_g`g]x 9!e_[Qg(Drux1'3BZR VkIK mZ`N$YOKÐAd{Z$T vѲ,.:{cvӈ <򋃿 nRqNűY TrzK /)_:O^=ſ|왚VHDvh\ƩdB$0$XZr5ޘv},o? +n&7b7q7_q ayIqf:ZKXS֚IkXr5p̾NDRR7 7:%^r) b$2kJeLA2sȝ\@W`V(K$ZL( E 14 $%( =Xr2 q[7Z%ixZ),ڗq!sQaF4s[+Ԯ-kfw_Jƶz嫽 K F{ڣ6 )hgm^lK=zs5ǹ~1YRE"d;},|?̀~ ]y=}Ec)?o8+z]g?>߹cdn3)l3H%v3Js Ƙ!$^^1U=@SuՅ; (`otcY2Tjp"/hNhˊpSpUS'J:3 d=m8*}Aq E~!=??Ono7&S@K'= ƽQKXiw۴J Pz* :u= OL5``cCΐ}qR:XAx˧oLJ;ѣq?;S;|z~if8~2Yc-\uPݍ%?'8JɺhMkk?Ǽ%m3$y;6̚{ &gLL;lZL|>Q9&iɑ5L^4%(~,keH%we\X[Wt=jnR&,URP!^ G"[JA|E %(= \_Y޿:\ur$QcLH=¡"PCB$f`Ć|\(̴.)rtk vX-&Rv-EJ{$)>)bW`AL=E9(9'P4V/Ps<7y臢Q.rX&qT(LC$ RƓNU`/\E s T֠96PDcUX[hoɂ0BvK4nBZ׆ys+9q;S^A0]>HxziBR9]VP{8,R ~^~ϭwrʨ-30M]\|0*.\[}vK^ T+^k_Au-ٕ n/'L@TW7u;bH8Jz$)f\ؑ-y/ߛ٧&Btl,]A JhY@v눒(pp/eK(o,΃9Lq# ,fh'PCCoۖ+b$TݵDH!( [7DqXܗ0nP'{njj/wץ,_Z{(yhRiB7Mlco= TU 4dC?Y1xz=z KRx޾;P[랧kZl0vQ6I0RW? -xK7Y+)w懘s*/% 9sM"{oh7Ab#:s4nG4ф L%hvkCB\D_.S޷;beeǯId2AJ_Z%82"q1U~SUPu%"(&{<9m`=5$ȽkZv8qnV VArZKUMp~~ $¨B4Ђ31vy**^pLMb\O_oԯfj>Ջ_>HR,ALOކfB\ sG@'4qTs0Z:?f^'+%Lnf)`3Z# ݺ*,|glMf{zc_/9Vk/y@E>MsrO(_oh^@wLɃ[nuV_@P{lkr^>d(wӬqfǙY(B!i DL9+ڇaMi*lR߄x5W;:&Q`V.ywxWp.*;Zu`rLZ$4(;y -c2|R/dB1z9ŸeIYUnxIGBYW?Bl@w7jpG^b`C<АxR46R@E20j)Xns R3)m4h&eԶl#I uzꍺ֍x0⢥399tROV#ڱIw ڞC;I|1laO `, ?(}ز.%*P%&Hpc a4! {ࣰ7[ ˼u2km1M8&ci*rel.PWFL5\1H8RɱK~ l52J;›FXo"\SB5>Lxd፜..D]ߛuLn5@CHXwP@D(nuR!NbKc Lˌ $*1-u¨q*f=N{IzwrfO< "s128Y#nVn #ʕG̃_ⷝ(ŶcZ8b^4(vq+Lٞ5߁m xomO$\rq.w~v1_P:Dpul}\1yM+ݱq% (>gBv(dBv᨞]X/BZ pT\Cn9`1U椤1Al5.:QR_. 
#'Q lj䶷5rz6t-em{Ptc8ޑtO`Z+6yI~0dV")\lk>#~O b,<Ót`4JLDhO.2n]Mx ?ܴ{g捺6܊67: [(>o&;`K>~ObNfa{N>nNs].G%"M>SѿÎsC^ T^q/MsXWYQG&egP?W?[:ȽU$ %CrToCpb=|pѺOJ#  eH,Bq~uD=/J1B)Q(0 FR +Qy"ߌ,WB2ƩR9 aj#KJL ~HO(-*8e(fNw=]#g`=^7:4ẅ[5zxm` <˗-q8a-^weqI42!@^3Xxǰᙗ5JRt /vVeUuіb+*8V߿3O=(-7#Vxdd+}4 O&y,zZ 6,ד2-{jR5*ү~YΨn7Q=.kՑդvHPv"~1H0{.",T(k<@&e_~WKzKrv|IUJ^3ZOTQ@O"Bz_>[*PtCnJu JɨFRP*% PO-P aX0{[+d},,孅#<ٯrF.@[5Fp$Zxo6xo~/9- kLaȗ A@7 {Z!6J/Zr/o-D$d]: &"4 HPŎ=QY*Y85 qRp!tɂR4\jBTvYM d(QceI Ý>KI}H&/2 8&PH m3`<|YT㌈m2 < S 2,>j!DH8pxKLKpp`R:u_|N8Iry]JP:)_ nS2/·00ea[20=| s![V кQ&va|Lyݨ[\<][m4Lٳ˚gF Ξ IS_*%J\jtPN{o)$9PdS<LVAb])-BsgHLƺ##ZD^wҴS:̓-^ lk.3) 9֓uuSeJ^h ,E"PQ,9#;X>mjR|}q09MUBJrŻ7 \)$N8.0 o˷u*5-DɧpO;hD%sFCؚeKs,gZHh(cβE(QeK X}R= 3位ͦ㫛&SZ 63@${2P<e()eE_-}5"n.Iz$N RMcJ`P9Vak3g$(( sp#P/W-r!m)X3T,%we [ =R 0A~є!4<``(yYRKNand #LpM6|9rPu0Lq XH 9o` (^`~IM&V '%8:Hn"^" R6I + P ɡr>2pӪ6/ƚ4x2(z;Fȥå4)iפx47bnPd=J lBO@ŽJ}5e*%ZP??sj,v`%E6//KD=1VڃXO^CN8s5kxq"LD뀤 2G鋈T`Q~ڏ~4j@P"w(#E H LAVXK7N`.h*,G0! Wv"rWs]L?Vi; 菓Epcų.#ƥ%ve Nl@ PƋޏ N'͘?n{ÄВ1i<ɥΦLl(kfAѺ;~g+% ?#5\zA>!Д~*m t 9%N]R %>_t1_cZmFiV&^MX4ù[oyYM@";7p5cNNJCSJ dNJ^$$n7A& XK]R)acƲ@D I< >ӄ;-1,Tb:wòa!l[g,bM9Ì/8)ja\"#-bʗtrOIJ\%d麎H3kK=ٳ˘B. n0 Y`tTI[cXB bgKb "gK Qa! 
mbft`V(bXtNhB7D2<&RL6$( .(M G[8-c!+ m('Zk(5%uBIև= ,u#8UkӇR)lsd(*44W/,n]&yVT2&'aZH|J.րJ)=adP\o.FFLQ (Y"4˥VIg i-RpdHRT $ǷAcս=>LVyE&;ffOn")>L^OqJ0w𻋍ǖ(ؚ2A 7]-FVcB_Ň0>h.-M}+{Ivt<9дrpfՕnE2# hLZ9Ck.r1Hw4ngg|0=TօEL aYge7L^IU 8[ctg9̅K(YۚtaJJgR^KHVx@~蔯ˣ$ Z2yjP8MK3 .I3!g}C| z'i:a|wn):;ƬXX-lDr):|00g8#:7eS<" ՐY:m@,Ďf!m꽳ߘiOV,LCgp\]sƕ=X\}Lal]|cFZN\ߧ&Xߧ >uvHBWXŶVImG εfrKR~M^ Gy8G[{sw6Qj)$U|M$(#cZ^#46#gՃ'!mg: sGhJ"AOE D:hBr3f(EeJ{Qi$%IBiJY,P2M#[4wvY(GG: Jq&;~\mU|T ҉*ڪZ؝a5-`GOvoUMÊ`\ԚƂ_!<,{M2?~ nzyaN-`4rTlOj(d@BNIR\lj`Ǔ5 cK=H˥raQ-ѰsDmO?}6roӊ~.4 L i)^g+_/AM|2%ހYqۤ ܃8iz:TWTL.G` l8ƴLbsf/ߐhnQ{r8G?ْB@(G;(K!FWrWn:+n=1_TɳkP!_KU/K"s- Hi .y"F?nlyM_d(8fV|i]*.-ڲ2nK5EXk'Nk%Xa jml\_ dʜ!: V-waMg"rFvrl@@ljR|}-`ժOûphDB5kG:MJQGҀ~\L\gqb' 0/^wwe@M5JzVg ӝ.]m^pa5(RKIwW2"\nH|lNUaQph0sP\);V94k\Zp `|8ī Q_6Api'myB4ȗE5&k̗ҎTRݷ iMbz_Ʌo)-fPXmʂ!I C(GzP9V~l=P)O< #s0fTu[U-hw*&O&Z Ai;H%)wumQF[uwe"`>X"tْHSrF$Vb'R;FS: SHv(˝L8}ek []SR:M)Sk5TnQ@+@5 IIiDDVZ`33qɤSAqVTҫ4&ygʚȑ_acc2Gpŭ~ٙp{viC֬,{D}LI%"*-:4LTe~mƼ|L3J*@d<`E2cGddh$_ޒx:A wXXpb~`Dn;hUo vUKV#*sn뫭{S$)l*ܽB{]@W@Y{{Ts@۽W K uzN\^xq vT$˳ ʬ\=UUi!Q]*=Z˵#n f;gXֶ2IY H*Zƙ[5&я-4sRX80dA0i%QeyVntZAUw_}!3/\ =gu0dj\ݭ+<>3Bb BkJMa nA" ~g|1O]-}OF_c&;_ߞ͇ߙ $}{_B4RYKk7anJ)DBwJk[P9onUݑشB̹FfU9vUFxYUNiGH[,ͤNQLhAUU,J1MF팧7{~M\[Ȭ;rpJRx$!$D8rMX^ͫ%Iz%$l'Z1KN(HX2%y^q"ԈlŊSPHyn)?#^m)Lg+Р7l S8I1(Ш4'(ͤҵsKV1޴[8ږV:S1SZ\Zjha*wU}$[l}Y>[0kvұWk6l llvA-Z~д;Hcfp.l6A#񒽦[m6ROyc%ԭ7+䌑MKT~#/{s[,ʛ46_~ X֎sd=$5!9]Vv94{sV&wHJw_G=.޿~4=&Vft5}WJ?I_F7F} -ۋPN2u2F:vw8cU䁅%|< }~KmBpȶmŠY ӠPLc}Xk3,uBmCv:ܗg4~%1h+wϾSE0W 􀔸bwciR"WاF ؖb6BaeT 'v|Myr7.uD?xP +Ih!5Ab""d:fQsAudԔ]uIJdeɸdI7Y39,ag4^2h !Ƒ?惡,b.ɷIHE%nBfAE@gQd!ܸ9u}Y $څ|]ķu}'ʹ=@>:`IY2g!:n,c LM^YYxdo7F톗¿\NqABTuRNs*G<eBr%h`%ټ2jp+N[r}t-{Bq.5Zutj+1IRZɮ$Н4|Dȭ'w(L U&uNFIPjTH$ #`wB3,59St'no"JItQLj4#`GaSWb}@>&a .,z2䉐DC-p^UQ=U;٭U[A*!StT S\h+e[ hL!z:RNe7bĔO0b2d6eeO\[A;P/ FMy$3r RhF߰]'5l] >I[h f䓛sLM%$%.=eelOR .hc^>g/:'q!na0sZf\v35$J3閅YԦ*JZ T}x@x*CAͼAWP zW}$^#NlPS ^.a:(9&{qJ@2}rogef/z|/2Sr@_'ϙ2o|Ѱ(KflavfjF{ZPd5C !B̤OL_07uo4^X|֌qb{k\kyEz`Q<}DڇXG8vO KcD\CUuW4ܣ^1n+H-M7WN:9cm0LAR 
vB{B)ÕEuQuYYi,z+Z-@] jL6thtl+ۗjWm\#g9S; p12,x >)_n- vRyu]'_ -$d+7RoٜZdz_/`il3WDK<xw߹c,nR 9xnRE rN;789uI]ucO':m{m{U!G1%+?`-1'[k7}uܛkcVG# Z5f`TK,C6\8Ƴ:lzpaƮZHLOy^+M9 }(k +*ut*Ԃ`aۡ"`лY(-2-_LlgLsiDEsZzLs9[dz_J!/Ϸ_>NQx՚m*9{ݾC*O~"S^~J)Qc\ԅ3*cpʙ%dٌJ9$ژ%?Wѻ;cGs8Bf26idT)ЕB$$3L+ZꎐY뻑Ϋ8V~ AꗺJ'l4YY?!!LEfe^y b3{Y2`,1'fbiw喤3ߧ NW  sY:O] tx~rxpTE&]z& 39d󡼋h59t1kӞi' kdJ&5G%!h~O$fGTjrDJHQj!mS$3^Dr[$o TR0sx!?/NSo-[ மgzNdD3_寢z^M'W!=siLn'cɺ,&벘b.&kC<s" L R>>GH!cLyp3)GIר@G?M5H29Mw%'w`l/<մ%J$)CbVj6&_DƑnVmg&jCX5$孲:$jgݝNw9@AYe;F+-8EösgppB tV~Ժmֳ՚ 0WG\t!Wç@X*>_>QjwEAU:$ n9B{aq#dCā%>-H7A@E2+Dk JWr lNZ֔oMK}xAZ*pZ*0i B*ϹKRU Ƶw P /S of0%Pp}'TWTr&5umr͓ze~m-FiAL1<泔o -oѿnf_K0uowsXSRS|c1?YǻFN{Jx!&߄#+8^Swǝ;1ROUrgP?ӟ<܌RR,'>_Y '% E@۷5&(aCnNM}imkV^>S!!ȔΔ)}N4jaΗWaK &9ki;cxvKܦF*ڨ!\ʸ]̳*jHWiX6ucY ԺW 78:E)Wõnz _ʲ*r חc<0gQo;1F%1y )7EuՕU}Ngα]ħ5ZjZ z$Lw|| B$>/t Lp*T EŬQDU؁h -X3~ؗ1Vi0߈c𖲸-1`pԫri!W~_)'^}!|#nԛH ExOvƙTeh0VDt8 9שYBQDDGޟjjq;j8pejQp.y<:nFb DcP(rgn*Ҏ*XVcBݢk3FQj*$^*&zgqD"Fo~!NQ\L}]ᒢ &;q9w('V ~7͑\dQWw.^hw 98yMy8iySIL,^O&Pĸ)ZQV[;%`< Zz^ yMTj0&5ow`bu쌞<຾Tf~Og u ~c+ BuBjՅ{ti݊h7(p^"%:qZT>5* %E:7A`?]s W $V 6^!PS={aؼJjv!B: 9 v< rBX1x;8ed'.MJR1+r1i$2ƪ+Bx{@EFvoqe1ʺ+9ph@]N}b 89&-[8|E'/#.Z6~M;aDsO?x%[nb ;CseSt|_g t\ji.>c7 M-^|0oyx2~HͭT1YJhk1qBR4֊Slx\3BD#&v°гԷwoQ l);=k B&8a.zDgAC{ngұr0*cE('+`z*RXP| % !HlP13KPTcKQZi׉|U-P;D0H!߬L8||JP*fLϑ"zOy<'SSpPk7ʌjߍ~53a|$ [}"|z5]ʆ,DtTw"HE\ݍo*.SL(F01""9Vk'%Di4R2ڀ'9o_oFn澐{T:Лw~#b{(G)l>h!u<0`,9z"M1Jt@"u^"E)꼨GuvL{C:IQ*C9c,_K*iRxpI+"G{X8Rҡ8!CkN  W jhErHs"j7SGAQ Z(O 9̃a{ilGm e?dKr,LN[e eTHMJ9#{Ģ \Y"(K^dr.Y`"8 GT") @"4OI-9ɷGxe' ]K"Z jk+.XB`Yd5N(s \Hg0rCcA[%TPd-HgU߮%Y h$.kk4b I$;c"Ն0 lǀf3M#^Ec%%jj!8%JPd"Nb8s{L;OQvi`RkloKL-rdsL ׉00>Yymw](\X hj$+s$v@`8g |d O_oE_{xiPjwthB:u-}mʭ"p3/ #*- T"{Wl0-g6^ɟ x^'G ԯX ߀xԮqf /=l~ͰhMQ?@3^[AZzQj)Zz5korڣb9@ͫ1A.faQAX;{9wᖑW2V 1SuFԼ T Мoz;?@!ʶbRtĽDևEmLF?.u`w؅o|c)SsN!耐M$f>^~Lc5gzx>B½¶,̻@dafq'w1cY9p/1 .-YYH2￧ ==Nf2?CNJQ]<ݻLA8ʰ쏣5fstt w3`=rtcl.=L!8*C}}9xwv3s(Lh}״ζP8t cF(:8Rt n_rzMzYwzBB)t`v-=ϋ;BnPlgBbz׬HjM!>AOgx~SMJ}MNWr}C<κWlǝ x&WW: *%l}${NG *U_oZ&d2̽&ӂX,$=3[mdFwƅ 
3ޠb^$==|E:1W|PVFG EQ$e|{V{vݸyXuHCUԒqr7X}XL8sv>4)Tӹ!T'0lZ _g+ ]U%3R.7Q{*mpuw3;Qrz|-_ni=~Kn ~Eƻjb[Fww}xAb}02Ɛ!] ib:heA1vIOМ?Ls> Yg'hh+AGst?d+kuŨ}m`iu\Z{&pxXLzn\_{XL[X3oǼɁ794ш-ޫc0O8uw G 7 :ߕn-9t4ʜ:ٱv$8% N6XmF䢓 #GjssnZk7> &1FԉW$:9ébVa# ňF%:lGS7p:waojV M5s޵6rcigy=$Cn Hf3A]d[R;_e%JbȺ8@;wթ#U2B5IqdĊXfG:MY% t$dRk04?ƪU$K%T%L̢gLbׄQ Gk"A m%JlAGg,HY :XE0x4y-)cZcARoV}ȵMi\+(֌Ij<_tMABweY8gqv?~k`-S>_Xǔ?ⅷ;_V?lSފ(l,c;F5&kavHG;'Bc,npKْ E8pLk4T8~ujc5o7G`G}֐_\9[a xE'Y by\P8IUl.։8hHBRN n%@o_〉ݿis?VJuyE^9,ߠEmQݬ6#_cO)u,dr c壪!s+C8<$>c2~|/:럋h}چ~80h^ƜHO*v=vgٶšG* )ۀVzPaϿ{fz=\cO["5UYێRyDAt1]uVl9۴bђ.:ah-vDSGDPA3Ʈwg nz @n4F&muZORA&.)*?^?.U&AEr8}X_Aust8mg)GܭY2DKo$=`o;~6zF j0EJ_) oP 9Pùi9> p0 p8ݜ5:?~S6>brQΏ[f1/FIVi+V|Ze&f±IHM)YV+BN_CyбI<{KyHI4$l~0nxzŧ] HJ!(Iy$FDRx{}[%d*X>mVr+NRk /Y+ktR:Ǩl=^}[ݧj#2g {ɖ7|&FB_>u"JTpڝQj6fs [otQ+jI Ey^4%HDHmV;ƨK&잴xEw^T"* Z 2{`* )ZB";M_7зYGu+iO$̠KxBw& Svi%^zzK*Y(`|p! MGvQ6*<̱쯇 訃AԦRs߁[E[(REoQ<И@&95[r.g2 -|Rɕhg04ٺuPQQW@w6_6S;OH4c4|iNLFg'\koBTGs #HIca5"GFQD 'Hv5ͱ-68J;ӿEux5MS 4!fCT14AXhL"FàN Ċ OVdvH1mA^!!ϖa Z1[Hg'OۻG[;SI]geZـQMTߟg{Cj{Լcd)zZXQr>>dM-a]ҡZ9#9 Zw>,!~):yh L#pˉ'<^a@aI衘 i u!lTˤ8hw%rEGxDW6yh'=QH}G'?gkrA˩X. o# UKexoJY\A_Z-2 ̹4Ohd(|fˍQ6f#JcCuPs mp%|sJR;"Zαaͯ>;3,x}Vb?WT1i%30\2QWw: V?:$['/;?o[U:ss6$MXA\ n{I^{PmHȱyE%9c+z)VHÄsuioZRzP"NqB:;*%h)4nнފw!Ga, 3fz(ĵ'A*VU.ۻaQp/@KcC1"q>;qEi etc0fKD=+%Rl^PI̬ź,JČ_bۿ}M߿E$)!?%mXmwpHˈ .`HSP /qGP%6i"uhɆZonu_a0$۫'+fjq TI::}LTN.w e `[3x\ppNi軴>fDt tJBP7׿_Qx. Yb 'a)JvQ-f'ZߧEX7}}-RTʿںs+d ]QR=nZۃN%pe 5b>|t^Dd~E渋8"G7-8sXE^ B~ZdO\nѼ@h*_,iAH :蠲؉J̘lӪM 8C! r&Tu$wzJ3CAY.@MMޚ3萄 mwLV_PoBaD%R^(M9G CZLjE1VZĄqȔw+*q85( CaEylS5!AntрQn .N(e@!E<+Bc[Е/)I4E*(Ya0V' ̔. 
'Vhu D:5SL,z1îvx \%͢M ǂL^-xqd{#OYCQfb)mGb~åttQ]`Nz h:dOʊƔظ 7xQ|RK:'׌|j{HBjL0QD(C|,z2~v88ؗO3EsL<@('=|Ky ᒯڲwZϛ_-PK#2{|n\`΍ F"JHúAL)5P1DA`Q[H=Xt.005d*Eo7PHF@Bc:2<:Iawt,pFB;K[u\3|ߺ::,B\x24rDvh|yǮCcym~vΐ3xuգؾmyjIDc:Cܬm de, zWKټR,9y!5뿊5x 't2sUګ;+j1zCX 8ьQܜ Qb4Yׅdˍtv16P"yG&~:׌X{n6̡( RHziƼfaBetR3P4Ar|uu˄Iz5as+55zFRZNZ[QΝ !al6$Ӯ~a5.5'ڛh _>6)#ruqE!Yda}Bކ5p=E+F/7sL.3mnt9?XRDvM -6(SѣY9Sc/I4O6-3[R\E[.Kmo8ȩT<̥F2ČfG\'*(!b*.˒O橕>+Y3E|Lʊw l"VT& M7!#Ra՗/ב ZȨz8ysyJ ᭃMc b`(fCYEq2yi֠{-9]oGW}9}lgوD?mu$e'XR I8տGwU*_֒E١S8Op=I6?\ QX&WRZvDO-Op %&돣F_&O)3a&gSH_!8;kbz.q(Hn9{ -t]l>jC[WSݼ|0A#A f3M#^EceNpm55DwyyA@AgtqN!.2c3B}o9@Zɣ/f>-&_Q7^A.'ʆJ* V5: jS14fԨ1cljG*-RBQ0spc\;#hJ39JBB9鴒d`$[>J*UT:սmH툓?N~8su.QvW|=>y\`(TQH~1NrA=JԲ`Rvz2ȓBh󟂙r= 8@z}ww'w?G>;x[Շ6ɹOxQ/Eut@H ѺJi(/99iܐ2!sFE "Zp < nwO]m3z9oA" )+f9BBNBQ> 9.!2ƜA茼DQN ( (`b&2Ts&cZHSk^N%:G0q-VsmM_@^".&OSIQc"*dB* PH#EX,EU1QӤ(XOMn|@%OaTUMݦ&x=ib4(PgfTG`iz@ͽsE!)Rq<E0E0**Qvhk5*jm*m32Zmƪ͘Z\jZ]EUBR ,V0gH0o RsNs-AȡQ=nM8뱰q>kiGN;tQӎ}ڑV@Z^)&d9ĕW2MĂ|kX( In ,n)6FX"jBxni_=Y$EB)N%`" p\?y 1M֜oeAKI$AJɊ8v $\]<[jVtZV|"0m@a\񥙇kx>@g(ٔEG5T~!7gf+^jJ4 0-,cT*`^ٛ s.S\?#^?\ﮯD6),ۣ]?C5YoywrΤVDf z(h#-#ıDN4X5STCs)V@o-ʗU| !ZHU$)Ԗt3#X?@ s%gX"=1zk!9CWŨ &XѕS`v. 5%)S V\3E$; 8TLhr(-Nn"v@;Oڣ)&1͕ws0nSE4f|;U"Ç" 7ףkP)sYOp|Aaf^ r'O"#Wt 7mBvkN0Z8l!{X8Ɣ ~-y `wڳ֘֟>zjDekb<6QqᅤBDO 1B$?B/஥C,J),!S#qj0NF1Xqɼp%5BeA9$. 
A”sSK*J/| +y4; .?#TY^R@v&NoFQSsXγ$Eg[׈Sښb?7x93KBb ~={.?nB?P&2߄Yd6?O7$V zg)z: kzauN^uY&MfT#=^ +g#!_'mh7tv+ GtJݎ'nn}HW.%2E {vݔGNˆTԐjRSCq!JO N)# 1 ᢆ@̇T=.0e!A@QS@eS[?`E+֯t䨖1X6yڊ2~͵~}HW.{ʔq)J9ʌ-P]o127 Od}gC<;q9%6wqsц|.1Ez^D=.2G%lNV=vpyi~̛1ovd͎< ^‰1oKܵ۞=8q^*J8E^~v>?`#I#|r֮y|m:b30-%.Ϡ6m=d8tp d{Ϧ?hp;(8sAa`܈ͰڸblV4WS.{:w/^|6b1-Ub1WVZDI2JUJR*xN_;0H`X" {xFPҥi~ /Ң%ZXKkC9idQÚBxQh[dH9.e뽐ᄐ R1+m(*AyHoSj/5Bm7{6f׮h(N Ŕ{}8e=KPpLC9))$)A,`ttle.lra [\ت Na%LI7*FE8M40*`Xx&ߖ bkE֐[Ida*EIn[lДnIӈcv|q\Z&835-'^3 D^0/|4&rN4kgǘӞ`Hj0y,NF_P]8<5|t'eˁcR3^9$ġ͍G7=8߶phR7keB;Yуp gVE43GXw[)zz׻cw7Dy]1DbL<(^Ms=!Ѩawg=.N?.xl'emmu8/HFɄBX'=wrD`}M׭LTI$^R[Kj:_)N-Ì@(e EUZ_˕:az^ gV굊:D(7$"!L!D@W)Qtp.u6,%Hi$[\ ejaZ m")"裨O@b"x'o6Xލ[MFed !RF ?4GTZݨ>#,߫S7Z׊pu8oB9•ԓ>(!ƭ S81ӧ1BKpˀ,ӎ)jҎp܅>V/[lQ>^ZPs{v\~ڲƦܳgEuY@3G-Gw./ ;U;!uʛ{`՚$u7M-[wRӊ*+,G=f"ivcɦ̐u6J= H(Ʀm,E$?V.wpJ.~)}dgo ҿ3/'ʎQ' j*ut2v QY#8F+gU5!x]?O 9 CO[g93*6<2dEIc(Yd]Ԟ30h>"]':6\O]{} ,pׂq?Ei*tR $]%̆5 vkSI–9RkݸzIm3o={˲-N:hlSBu#DK׃? %7V%~؝Ъg玺=Zs[O݂!$({G\r ۡ ?0@BUK[ ;͎vaGˆ̴}qyN t#+p6`pő)jGaz\!܈Ӑqi4Cz^i#վK^rZ^>@ۇNq0%#Aj\[ b ,(/AgwrqΓ_ŭ$$p!% Ǚ'"!_O]nY+bM;S7#$=چ$bS0_pLdbSj2z˱(alah+C=KU_:4P/c_^~ax\wЎGf[.^ڣ#AU*:MVHQ%;>Vh/+.9>1,*/`8:^j8k1fbgMGJPpgԁ6F]3ZcKD)PCͧڎ\SǼ5ͷV5 !sxfwCg$(iC9VlJy2 36N^%!xb!N=Uxhê58@i 5`c&ԖvUuv;@tG|Dw4xbX~!\"w]@%瀜#asb) #QRBjO?A <'_:O_V[cb M1Ӯ1>ܷ"`q<;{7<Saޅɸ7zvf$`| ƣ]F t Ӵ]jbҏ(HuF̖J- y(T_ ۲sr]}J؛6^8k6YvbrL ZrҧR*v MKOh6i%C: gfՓRa^t1f:6[Y.vK{bx""M/G @x;q03:4M <,.\yE7 zv~!> i:i,T>VZVblsJbs5-XCk\LW; "Jɏ.a$M1\iGŠD ty(JPIٮe*[>(0"8w*G: J5J-rҲ $ R6(i@E{(dp;f ]x7GnA͓G3vVrc6ܬ ߅wU/ l_\v?$4fʹ2C nԤnTU9!([&yd8Txpx(r撞g%|4sWl"؏n> hJ|>rj+s:=a^d8ot SIK}YZ "fHCfʀ"1.!@Xǔyezu&mm2Xe|}kѪqabi XŇXX@`<^z'+ZqsdEScib5q1A8}R8YSWD6d2 ܜUNߓQ tJ>1 Ȑ pl3wxrP&nJ<'b01kM)8iON 'Gu++t2ݷ@¹S;Gog Az>Y vkSIV-e:nu$붙Z՞XxD(7$"ltۃ iws+UXegGj !?>W6!k5yLզ7r*.ҦbS& M>GfU\t/6Qo zoz.d_Ub %m-¨D隶dh[vcO$2` EO 뤭M(|Bܦ z5~sold7L컘a?;&\^}`0YG4~33Mw}٧Au!aj?Ht{fm“nop_q LR4@$.b+GT̥.s=PJa^sk1fQo}+yDf[v|?:Oilv/|b]цܤ6[qw{ۏ^}0wJ__ _)Ky~z?E\t:#x*4}﹧ӧ\5$A*]jt ,Tie' l>eM흧"Ϥ7Ody'x{oo}ch8^ ?qhs$tﺣ7Ġe={g>O|3d%z2 
rii&#~=\qx&0.gsH?{Ix0to~#-/C}8"cI܈lq^LyjM&7 T{G~{ۆ* 𮝞4<gM%Xa?_M=cbϛv{)2oJG.K}K eq.3VѩFFAVtB7\ !~zRn^\]/wW/&a}6~j\Ro#~_So:qt0S^CO02t{]#@23^=3Gfݥ1op}Hί UQ&w p79MݷK o_t=40*ƫ7ۡ$+ٻqyTݯ #aZĩZxHFa)<gJKWCxh~ZUӻvsJ -~ɾ?>q#2r{I]nrs J{jsu2)n҈'f2;a~r&Ŏzq/ɹ",F]h}NSEwFx3tba-IIh p漕|p7'`ؓ-b /scg fX[V:_(dgZYʄPE]_yf`TyL|(1`#2WHY^19+I WE߽y]_hN>)wo s 5gg/fnݟyp8D"U6 5w/Z p3݊;h6`D}JضM{,!'g0voM֓޹ö͊YgUe6? l|yUY!nɽ_26V!x-u_e7Jj[@Y$ɇdž`&FCt9<_*TbM[UxEAR7TC-]X/ROxm,Jl:R%5Z>WUBlrG\| .Dki\HV; <* &/#/90yAcL D-zo֜02>O\1SCds"$B`ΰh'"Ww$!A7,XUpK ׶Yin#Fmwуtc;sm2ʠnQrMn&6?NҰ nke&J%xH*Ku"FUe6BKF#V@ɣѐ꺮u}']Z`n2ʢzG2O1Dl;DDYorFf%.H6:<]P^c,.~ *Jz\vjb*r5DJܛ:.hBRy%Yya^tԺƗ. %;ܛj"i 2As&NL*ژF,]4>"v_1HMmoSyw`lz M0+uUq]͋gO__jiwL5*ٛ#"._'s!e7ɲ u, VK%~v- מ[wu ӆp8[".I5DjƤy߱kOT̸+,9^~E)Sfڭ`5`}῔D4 g9Z=mPXW ^ZiRSa K)9kxybI耜Mqxt ;P !y ~3zt:ɳuLVS]nmT@3 %*Q q^e1KUf_R"gb~Qf`}EӇ <&68VWx֮ngʠr=Ń"icci妐) M\PNF48o? 6J$ kJotJdصs!WUy֘4}O7łLa\|Qc[< 0eV|\`5kv5%u*}9#} $ R>VYtDr($ <vEȚ&]DiP܂x~ZƷtHUI >яώ=>zy1)ѳnaC;i{-FަiPNर ! K*(1cĺBḽ1k*`3# ̎|G糴ٙ#dq5K _}-u+JS"47"-H k].R'׉lT4J BN|U;`L+;x 3Tj yL5pIRD}GjլȔBQ xc.'cU ZN kev6FdZB{#R|հF55h)F}.XqUͶ@$kR]b0\bf3w1YZk5eVx0IGMQq|BY}ʴiH9 A2r)PVhct$&{BR !.&.KR8xT]nRW-)ǘhR8?kh &K)|B wQRhIoH8R5tpLej,$l ď-u9X\KYI:֖y簔 nJSh'B:ӺZ{6ʸ+Ye WuOg n0ܺC`us~kVwa_mL%u0z`8p|T3+AB d3'x'>zǃbIa =|uvI +ds(0X跭 xUQ=Zؾb4jeƇxȓL,~1Ih`>#T*R+J2W 1 vzX Dh\X,$/Sc,8D@&8aAZ"L).PZ9_g 힛تV,w=j1Z0E\Ϟ"왭{_x6mWbA>%}dM@_\w:?vA\v6w(p`Aלo=iGag|fij 1}GyO61ڡ 溁;&`y~aԐ7T2=WZτ(gSKSR:D r69sTχp_=nw}n &2|{orapq&M|͔nI y sg945oߍhٷUw+_Ï .ó0p@o<u|ѻ~ɸϟ^/8to.߃4ڝ㬍_?\_ɏ/_vO{u_qv_~v {uf' .*Nsz8ji9?2V0/S5u!`&apiyp4e}>0aFJ&#MR|9?;q[6Lл 0a&N>)}Ϳŕy9>[&ɡ|f~m+'peڇ؃EJ}S}0M\YTEBsϮ[󗞷m'^6Qs❿9Ͼ ͡}0L?~~c"Uqh}pi0CGo@axݻrt8u{}ݏR |:z?Mۿ:cf`BFfL> 4*R}QwnX1,/|~;li3(m 9c&oZp0EG/-lΌ}YG&NZn۶ڽܝ3nFWSoʼn€i"g5†fGyS):ka0`UdfLd B4ta~8,WAy=ǭ`&Vl:eL~t(\μy^Sdo:e~Ar?Xܚ;4!gOSEFA0j-7`L58β F <5>kpVaQd)Gzp  *gH&6w1]j}mGp;p)OĀiSN4q.o#M p`Ĥy-T+];tL\Lq'+! 
g3$[';`iGc=0{Vɪ`ovmڽٙW2f ؀̵eP302n 1o+aQEqhDL 6{eiɜC4˲ࢃU? k&܌lբTK_owFKmz ; +i0&0p \ڔscTl"&=FWxU|[Ͳ6!LPAv/t&zG=0~7Р= Gzc)L &||^fy)?o?Wnh3H Sw !<>Iܿ?0^].|~~9Wf _ltguBɷ׽d^2Ioۑ_7[<3Vr _?,dëLfпƦ+ҽz6dMx827 zٯ;WD;kDLfC .^Cv$L_`F[M_P7O4l(ܙk!q ;?^eRj$ubrG{~T!W4~T$:peN/5ҝ'nQچQbM>]GHRmb]UI :i·fd0~~g?+8>Ko@nqYݟߠq0M M,xgM+Lֳt807lx8ˍHsߜD!t6={B-^Ywg.%%rP 6z>O]&HD-,PR vA=SwA'tϽBz+@>9S*X3PH8"< pIcj \1112t9g0SBţf10ıjc{ǭ\wTd??~m=k?y8(T-VuFV78J&?GF*BF LV P k+Sj3 0TP*c.ÊLebbVĒm,Q Ka5b ~X% 9<R$CIh@E*#mvZc$([ZJkWf]3_ٜ7AZu)nE,m' |9ϰ1JyKrr|,8%/bl|ݎ"p>qVrLr5zݓ0L/n>9LV$_s{dB ry)\ksj:\6?OJ\<(!Y D>tMLQN(tֈ˽NwƳ]EP=ص x vG`\X-4[.ŠuitW)ZmY;'&v$Vuje-kÚ悁cʎRDVX6#ba+4ї^2ye/(c:­trĦߚd !3w<^ $߄-ߖt[xf8 $cLa4An ͤssss]N]r]Wm_trkiIhv2j_>Ø;E4T4C箘rx*΢-( kd{#RmzY~3NGfLՃ ?[K\fruSM&8f:F6ij0F9g=݈~^w_iOoz'aShYEZʡh~d0neXKJLՐa^{WdܧsԪւj\mw)$85A+)+3gK9P:ΖC%dU_ض VH#r[MQ^Yx ;$ ;*}w1=NSKJjLq!"JBj F(PUMGMn>쾶IAiN ݼɔ#)&d~fխ=A: (뻟ITGJ&5њ< QX#|3S `my\L!cƇ((S**N5CnF!3%׫qhbݩ yF.ap^,f`8/ɯ΍^S#w&"Xn/kel&r{z酪(w꺇GC Rqױx=J7})8PL&SnL}ڜV]ݭ_=wc'Qf5iWJ:U~sucp֭*|T;X] LY֭a8OVK1HWQ Wk%#~ DŽr"` (GBk|OM9Im>OvRRrHٻI sJ=cbŶ8`)>@lx HӽaAz3Bb uZFk=^#d3B殛MGZAW9CT9җWAŔnQ{f2jB+#QԐxJ,_ɨ(][F+^v 5} 0{Ep6F lk$Ics~IiDQȪUWѫZL UKΓ>̓QCsqΏefZgn6N;x3ξC<[ )ĩSWO-8!qjꄾw;*BbKޭ|-qʳ40.Ťg~[χrL|rs+|1{KM]C26th(C L`JJ 8ˑG6bR IA$.F$T(U2Ry:qAcB\}޶3}9܄CX Zӧe'?`QM,=#'q}p `+=Æ DlW?Y#l0O9G)ԝ> O}r^twŦ&l뤡)0%!urz*2@I-6|XY:NbQ R&Iu r ݿ$ADS-FIA}6oy ˠǮ!)_aQAq"1Tic]`nz$$M1UIBאr1V@Xam{֥Vx[+-wps_Շ J7W .򊴽}[!`V~3{{oo}gݷpmFdmUmly˚1!;kI޺uwQK֒J". 
Y/ڜ]UgPBԞyPA )XRwθµE>=(^ Yǚc[NrPXcSw $ c5ZA _F%t`0Ks%FPF$MD$(J6% '$={BmO'.@AA!MM$T) m}0`W2FH'+($JQگ܁k7WO]Y߹fK,TZu(\Vod5Vr$wPכ}@qZP͘bJ HUJ16(: cRw_ ]~WXvS Sp4r^ 8jGG((a}.紐Pm|>Y(]Z:8Soy8 Y|3~}fjyR4uێю?i>wN˞-{Ȏx,g ??oA!f{Ic݁fUM3 ho{Yh'ĢL6xRX3`TD ֱ q $L#f<+POLl6wq/as#  $KK}_B ɕR(X c),=$}Yj Y=})μӗfRT8ŽYJK F|;CRߗH+K/Tʌu/K^Yz,b>,u, ~Kj)RDX Y'楇/KR,XnK?RYJ=G|N(K爟I-2 f^!)`aR#R(X NJ?fRkH˥#?bDnCRߗƐ^祗Rh.[=a E,@\}\aY;AKH}_Zppz](ڹՓ$,؏Զw]}eQ?"Bv/KmWOW_z,8CwX4v/lRDA:R"XI!^2K .WOua^ =WOK%$׳fW [=q BLjWҋf)~,ED))Z"Nh; 3c2{vl^.Qu(9<ya=1t-J3?B E.f^38X7VMNC6U'|1T4uŦXW?uP1k~K0a BJ-ҹ| `{//c*jdUiوX0caRXA"$B` 0 MQD)0ٗ) s=g/ l?ClX_$nDZ jc,5L Щ3Ju֩T#5IdFLFݐ/9ޯ&k!Vn{?Ƌ=51z-^Og3{~]~VSz)L0B4D,-%P($Y$H1!pSc& R$?N |N#ؚ%CgCL8 SR 쿚pcYP 5NzZ#A&ICZa9OfXwΟMGʮr?#feYvnٺ)y"}by2JҼ;ݵjK^9inҎ!ETqu7fjW^_J9)谪ҶoCwIqU"ETH' PSUNpc$DKH !F4Uv@$L !e(\XW0S=g-+ }^:6}ޟ'L@dE&I\Y+FCdɡe!# ;#~?;x{^D,X< ^%17BSǧf&6fr"VuzN[Ƭ\\^}GFQ/s]`Vݶ(7툏Y2o9pKvW ^mNW|~2Q6}t.v Ob>}6ebZd9eD!ƞ(ކ5{=:Px0Fu=fr5ay<.}pGO&V'23B!n^$u|tgՌ:GZu/һ[n1!l ˡr/^}3}x%9z cKwO.APEek e˯:aJcz!rEkFHR)О k:iQ]BtujdAc 2֏Hb].&`Ò&g&xd4|jV->?4%i Qީо\.}gIgQesO! DMR"lN84j4nN5hQ)n8Q1Zm7^1`<$]>`sҽ{{hKkݝJ*D8 ZnԆ/gqQV*E\ūyyoQvRφ&vVL;TAW\(Pg8=t_ 9HӳdˌP=B9P!aE'M`8YY:hZ Əԫnr?oZG{5޷~u]r^H;%o:Ũ+TOyZG,@= Uy5Z5i ·v[4f\jם{W2&=J5`ú[U/a#Ě$>Yj8'V up͙r$TCX()F_yȋ<2ppӥ x_%gT!Z%D :T(f 1+xnߌQeO|T"\RWs=_`&>^r!?~wٍ|;]0L֢_N"H y-az{GldBWUyp)+*1bZ2&Eb„ljuJela\>GX5 X aWCrWY>M*2Ir Hl~qY5atCbW'zHjFqAPB%Ub\uLpA>\OJtLLDjb0HUJ@9FS,CF%@`A $ al+y@b  eZw bBĆ2DHG QBS*XE3-MPb!V0P;.+0"D.3u"ԿlLoX( P 0i (E`p%$[c.Po X@n=$&2*]%3·Jt[Uص&H[3|RB;ןM_MRDaX H=Cl֖X`>;f+\PHFDqi"-$ٻFrW}%. Vě[mN}zr$o~&rU2eݎpyQ2  @jxlnmX]߶H.IP/"i'|vbax'Z#Xk G{x0iU'ް"hgىʁ8O5ywf fVmou<ݶh˿3Gȼmg^Z}f~Z}}[!p{H.Iou09tnw/C'[ai 2x~H/d/žx nbi^+\oFGžc$fՋ3L޽|vmX/='H v_iwri=G[Gvy9.{HOOȼ遮q<#,Zp*9Y~>2)\/M>.]Y#h;Dx "h07~Ҿ!5~{yиW;z ๢BoQYmv:د CbzfFdl6Kv2 Ik.)]zn2wӓzv_jG[m[B0I!H>_\^F8S~̪۳:r6~yBZe)҉ɪl4Ůe=!FZ_w@92Hƿ^AW/{M'=$影/ #8Q4.GUX_^x;z@+Ѕ5SNƷY ڷ5TcS0y]Kb@(T^ %&'ڣ%EO3{W3ADI3f%E. 
Z$n2QJ-;(&zl_+J:j~*OL,U@Y)n#DHx0e2^KT<,>Ve2ﮊ(.Fc\ֶD6H t悭ԋ`tι#Bwq:,͏ 7r#J ٢ Cѿ61Z]dbd@E0f1əGDIl +.Nk'eOk`<ҸY|-z m% vSHbf/& [[}6Z;^qQV责ۏ "ئP RNDJoiJRek/)Cػv-؉9>by_/2F ̖RbQEpP)VQ%n 0*k x5U7r%_~_,R;%$yq,AHs$J 5mֆi=K1 N"qePIu,mZ#U4 QXt T9Y#@-v@#RsoKꂈ2*Zk,ՌC%`5?}$}9?uyD%$Jl6ѲN;Bp9O4yy;\Nb՞|⡙( |-"N{>B+@t4-]_W3-Hi ȋmzd>i(~o䡴AG6ǹwADFD3_CĊ,Y@SX#ǟu{83օyY-Ғ5VQEv*!@r=ӧOƇgbI?/[ Vr󨫠q%=Iћ0a_y:"dhƴJ]|ւ,Ivl.\CBޯ+O]L+mU߀X'욛ZMu2ƚ7 ʐ c&xX֞o@(sw,\]4J^hM;^Dzdꤣ$EhDC7"YZ]H~rZis DLbuO,[abœ i JTӻwo1#^4N.ҭf4䢪PUh,\V!D7>!..[G8Tc2O"D*@أ3$~ 3arÓ|.[| 16yj6Kp$zV(yd Pom?{`^:V=BkM,We|\Aw>CdNӝώphۋts,gl~Ƞ}T`U:sBŔu>&_~v(`/;etg:*sR@$|OȖT ]Llqș ̤a/29Ҡ?VUCvPx- a=F%k(E앫 Vv %Lف~?ڃ 1'8:U@Yce%l:Mo>=ںcuce.l=ca=Z#U^XPBMYP]piWt75zdtl^׋uq{[ `FUJؿJxZ˪CP<rA[za^~] ,jM,ӧ6"ȒE>^~]x[v(:Vk:2t*}̃ѩcٲumX,몍Oث8@v&](xVFac`sL DEX&?R)D4l&dděKd0 O CZ|u<8Hj9ǭQ@mQY@-+B٩vayu;0.FALOXz])5z|j8hx;vVXkDHI|9gs(͊U+eBIVY!Ϗa/W=1g=(5.Bg/~ 92–DhZUzÁB<"`˲CifYDVdeC;Dc<{BY={Y?~wDڰ:+(5ٻ6$õXؾ n6`O6ER3$4)rJm_=]GWwsw32Fɾ=:˩(0VIڔ7)[%diJ8(1y+G] m =5y Φh)mpTDF- D̕F (Ie4q`qae{8T3p(k ld0^Fed)!8eu;S*o Uru˵w"_qY.Uޔ3O $'BIa;tPXȜb5G7GV 1#11D,Dit2 YAD'pOWd}2Ű]9uMr"0 UQi%Å!$(yHhtK;e{c!*`>X9xz -1s_h8Vޱ/WzYet$4* 4*Js(E4rIxFmfկ;oXEZm80 7dyXhxU  t8+T_k $FAMj@.ÀE^PDo*z #Mc sS USTRA hR:ÁJ>:~T%.BI APnD)Sd! u>=;/D*E^b'j_0*@mR :xѤpLIAI|׉Gf?($FؗV0ijh[!Gqҝ ,i0M|t!9Lz؛E-?^梟%iVP#z+$WQ"& T("LlI< L"I*f@ թ)a" q-%rNvQ$\T/zC;נװgɘj Q(tR.أ!ND*FhxPph*\gFU!6jaHQ4X hO#˙郤.4U:ц+;JqO}9RG&d( b1z,E=eqO1ޜ*fT)x tMK E%D'!X-AX8x]_KqL޴3Lr4ެ I^./"r;(f1˓)FJQ_,q^ t\|F 5 G }Qυ`4wuX tV4dtHm ,<' # HBqޛ[VߦD0JayT_\N|H$x#Bhr$qYhzTE ^i~Τ.Z{Hͅ6ƩIJD'7P(&UP%A&h,%ςt/U(?ۗ4S.HקUa"V DJ4#xr4F&>GrwN"FtB2cUna0ʣ&0/Q=a4Y5 MGD秧gE R˳S* Tb}q_BB{E5ocIoL?煠C,7*/` 1[P5{ۗ]siB|. 
iM0)d4ӌC:`Bb8jÃJ((sh#"A-xAtnBB.uorO\|X;K]Ϯ{ XSM("7ڕ do̯l9>&FJA߈KUi6@>uiy$cI߲'ge$0{ ~R#KF?ɗP0 t%82%Cp fDoiI19?5ju͕GN* \B!>)_7z[fW<6ܨFYָEJnD!Xz7eG]'ynnxp|}{kZ}󖲓FqjP''/7ɎƋ2l1sc>'!Xc E;}::~6x͑%D2bsJ螓?Uhȫ8K!/hKF15Sgf0[b|F2٪i' hvh4[{7_5Ŵ[u̢_^+<[%|=cWO=2^&˷ 8 ى=nYdKxnb$p@<] o}DԒ[2XOo쮟&aC_a6ضKTrYߎgV_0q،G˝x2o3py% m}DFUjdW@/]twi֍c@]uq]=v7M gb_N|qgk -/?G#0 Dbx>wB+z"F̿YLf>-6=;jv3\_˅aJ5RSeLWf;Rf~iQJ|iGã](Ls{yNQ9>ϟqG r0DbgEO<6|Ff:{A jhݽ3hP?z|_ ['z-uCŰJc޺3% `gc?1~6\ɞ1S 3ЖR9ƆFK {bT$-%N)ţ<&] Fc K6HL# RDnh_&ѺDk+ܶ%Z;#G81ҎCEJ\2KtKq:cP/2)K>vPy,暋tkcc,WǬ1[Sx3rW,@ȹr<!*z\ukZ۹6|_/Dz'ڨC2z.KI媟/[窗~q]?r;pWtŽD)R}w{xY<.S8>Ο7vN*)(;6xn>9m* 24$3l fkw -5֊O"h۟a;7Tcs5no0լUɺj֒ BvɡpG7T7J5hB}-YL.2"3h2y5/GS$5$˴ L9J ~VX5Ԓ w^d0WC\? +iS A >4c2H@,29&7z50{:? /SSj(?LYmRj]& 6vEpn[ wLK=1MRE({9a؅8(}PyTPÙf=)iNISxK1p&j>xˋ/$DJ"h%%,Xs.XX:bwN>pO*TO aJI^@ǹJ&z*c9U4_ -@hx2!oRuUD v(lb R:@00ұe?%9OY`[uw)~J dJDR&)b{)هRԴ9cyC竮kUf/d;6gX)btgdDyNb#)gsu.v0t08L_bi-`z;0P)LZ 7ڻB SYhx/w]j7L~~j@AGB~u{!~@ցIe֋/ z~?Ѵ<]#kď4F@tNj~͋ywoc(^LÊ"5'}v+To Wʶ~l |X[C C(ݻ؞ENH! 
t>{b=jq0S V>_]u:ffBrł"-%&.rrv" J \Xo:+4*A+R(Ț։rWFjGqQ>ODa`5輍ʢXt`ܳ4/B3>[SF6/)+p.t3hsx[2Ւ“5۔UC݉Iq%`A7BC,gȴ!rbV7ىW 7zӄPބ s7S~o)lГ jm1M^ovl1PxGdM}LP#/H!)L3SSbj#P0Kuz e^>Xp{겨OGJeN]w 8|s[踃SQ" ]b (N+|V>v;  j{3X 'bf( (N\CX~1u8|cun7d_9?^MnP휴'mBBpڍV%K6VќA~nbŀ> {o&$q@q;8 09քߑ% !܇q `#vFyzphGP '`/ |uBWw/cwJ)6R&YBʙNYe0z{4$5% 8%u,!9'N9$Xw9Oڢ![V (*=6I(p'-2mbКK{bx$,@puqS$p)Rf7_+e$)J*$1Q1JhM+%5-$\G!Պ1PCVVIEbH5:@!~'v=tq^,+Kk)cSdK)]'Em_42 fJ Ne|0qK QҚ?׆VS8 TN5B3XΌ;O~qZCO^,Ǵ\ڬ[s9PLN5"qbO:^A;V:߿6ֹ &Y]ǎ Dљig #HJ2!ۡV^%`r}\̧q?zG5jЎ*t /^ ˜7^ ho2bJjlIQMxYJ㭦SU9 +D2Rk"/LpZkDɌm69 e5J^юMq:eZlv#,U)Q%P.*gb4䛓`<_)@mvu5 H-J%c֙iJ`qfCgY<~·dwzfU k:C/" IOIYB;J [Cg iK\TuR1~bgV/#+w>U ~&2wa~:SGSSOΐR ^~E|)Ȃ'-Яm@I1 ]:~5Zη[~tHF`H]V ( ”l<&$^ݔj0pW8DtRX٢]81I9Ԥ dCZoj}xύ=݀2!%ϊ= <7t~1{AGϔ= E%9St1Ό=݀6ĞnnQpf1bc?E%΋ӀyQr7r(RʱGmq6t8F]5iaQ=]GOmħö!FgQt&VZJΉ׀]9x x$O*aODpO*5HJSy=RrWAyq;JYqWgh4hnFwN 5~^ah~=a>iNnOVLޙ뒔Ry*i2yPOm!_SjK qK!_-jŖ?tɥݧ; 7,iCF1А7z~9M^PЮܦ(6djs !,qڞ!ke5 y6]v w` f6%ke[{LDr7[(H[MF+.^ߙoŽH>?ª(BQ U).0)A$BJn;ipĄKe@!.u .ʺ8yX|yr<݃P`Hx&c&df ˂(w Ur/s6VE`Z_nF.h-vv*UɯEY.ft7y.֋x;/:ӿ7PNq G7 3黾ܨU^_Ow9/6_ (LqPZ1*=ӒM,S%ËfiAp#Rׁsڕ uIE&Ŵ@me5_YH=^~>`Y3Ǥ^ s,qޢ5 19HǿY־ڗO0Oָ[V{o훋~otث_Qv EUgX<1ѣ=M Np|xHJT딞* m([+u1P*=n^P ?/?Y-pChN_:UslZl"9\D:csq?O?IM6Sb1ߥߍR7U,3AXnf/曐Hv)y1b9|FHY0 Oo>뛋|>W>xd̳'>kv6XB/ɾ_aa9d:yW%rɡoS=re ؖ8n]][-UPNڏiT vzƠrx}ops7 9xʕkλ)s>_>t34e"svaٺ@] `YǶit;oʴ3P9[g>¿pOQC%니,JGX@5<~j ~[pXGo >oפR.j2Rt1xuF4';w}Xgڟ;-̠F{`Ɲd|Y;Nf׎!C9̻Nj-sLk*S FVZ=rbyw"5 udnt=om̊g-COaւk]9V_v$Ѭ+>%w/Sػ?|㌂?)$NKR@Jipq[q= @?[7\U] #ZݻvX\f+(X)Hi~% (!7 Yj=g$V㖟xvr z>`m^()Hˈ)auʭQny1_t6, 8sBZegP/0/eư[ti]S_קzuTx_*sP9Eun𛥺sj D ex}Yz-pc| qq c)*@CPa܌ 4dc3{PV'5b0O[TOGU𿇿Hց DgcRH R ih*RK|kMmz KiX^U!s~7}gC{[!ilByEѨ@vh4tZs$\vtՉ{r[Vfk,V@+'ݹye{Dm"'d-#(L/Z\¬e٣.cƏǞB$1C\/vš-86& igN Ȥ0Y0;BR~ ^81UJ]0ލ7W]y e=8A!䣉,S 1'oU٪.UFWn.##Fa |itQ ,쨙Ìh'E ( !LtG@wQcٷ0i *%CvDQmeig}Vb& KR(bB$}5 I4 OEF"ĨstHF /亊R 樫-hٕ:u݋;a6 )Qyo:#R}>|$:01-~ghmw27Mev쯌`eL`#'1uqàS0SSLnN>.Lfl.%)|U̟cI彍~]x!&;(]^O &og2/1L~c=P!8زܨ$3r ͇WZ VSnOv?]ŗ?3ϓAl#%Z^07h]nUYֆv)//a 1#s@WZ42u_)A D;+w.gvE 
N)YtÁw?"[LC?2Lg&L^'TQfpAaOGlRL$aJC6&*TXy 4IKRFq?es<]tUݕ贳8_>AhӲ|@D+EHɬrBΚ1H@ScQӈ1#F'- Z 3+U bδ ?`i*qsZknhrjT3M6g*2Ǜ* {5T\AZxAf` "5%tVB2Zvr$K+?VZdSԡaoцvXsmmR2K No;K\:L&kdP90U>9!${NoCWz_}PP]K1T?Š-61˃O2]-iyr+C^Yp+<~Q^;rqES΍jv/r\;ȿ\z}؁#1S9Z[U ؒJHcNڡ|RH'#Gk,8-- _r725\5MI4XA'*~ h8Y σwKąL E:p-gbWlˆȳ1U#풟w _Ạ FI Q)l ֺ:Jg@lcBeBGbZkQ kx8|{n^}ZM (O! ;e1S.2>o}&A 9 1&^!5]PmoDͨ$sC" S8Q$6&gOMLyCʽBS(u-4 @Lp(H}DpHR*RQc{ol;$p545& Ls}6,E'x6yL" rBCRI~1m j!Zc$RLPٹc6Z:]\^]ONr}X:+ J)"Ź L: #lw@Ǿ /)2Z`|o㸲N:vS|8f>L2Ns> >1J[:y <!ŠM2@c`|d{<\`:Fo#ϻD[_M> > Rm1X(4Ib)0a1'5+eaYǨMi7[?u~NsSjZ'k)"6zz I{tdJ`bag遲$!Ur*a Ry½-x3NJ $@ <(e3{1Θ fN"u@Α`RJ.&&szCM-det4Ɓ}0(Hh>WVs$TAMq~i#QהW5-6Mz@}ѹt8~~T]RJ΃7F`a'I>}1C!fB:XJxQOy4Fތ b۲Z۴FzɝZI6 sr"M8GJJ^׷~v O2˭,dৠHqp^qyoMwp> &E-89+0V`!AxcHv)*Snrck {͘9'D2pA:bR40xi#5r#.:oTqT-NgirAvihE KL88gO-:Tᄎ^,oLל&EO9S wWLG Na%j;Aa2X<S>椴ŎCz`,:OՔME[TCQ?A'wuCZZ,o]gpԩP?gQ#0s{j-#wzQapylg}e0&rU鏞} ?;O~Oܹ<򠘱TFC&)^K) %ćEQRS0GKuԼ.BTRvMuJ\$1JKRS4JRX(.7EE{DdJ|7*Vv-Z=?oM0yb޿ȳJ.=< kDxA92;˽cA[&0*xͬƪH7Z5a2Wc oz\Za,ʨM$ʙFs&e^D=8՞m"HS~Zn DQ|s7EQ^(%vȃ׿Sna.\kHHM%3l鴸>b"DѵDKmFva`< qo|{EQ»qP ^"kMMs@7Iœ8A_Ii$+LwFjw&]UׄQ=s۔t'7?cm1ٻ.'Z)mF0 <_)!1b)XSBN0сY zLX`zyM`%\Iؠ~ JudIͧL*ähJKxڥU,RE5]l #|[T%@Dɲ祈0$pr,vݍBkF5i\ZN:g' ;W̦g'h6ZFr@RhǞb]Z 5y>[?Ysu4lK+#?^!Ge.jn.x7:Ö!&Y9u:w^n&EAiVOX؁ N,{Vɋ|F~2@^qaN盒37R& 6BS]th=y T_|ӵ253\t xS]; w7Ϡr6wlfZ#F=ZxDc{g NGo6W 󬈓;ߎ3e dk: s\=C+m$G.,  L4T@,ɒlW J>dJ5@Aw?cr}6n~O''m&xe[-}sq0^UsӿgE^>͹|o\RkWmˤia@lz2Ŵ9c_Kcr0t8 VM7J)C &шTZn;SV"A(/zi]' -㓰Hȷ-Rcӷ.g7aԭըfԂ?| HZEjQ .XRdoU3vc6|CQuiX.ʤv=Ei˓吓I^7<--?.aӋ+5}n|{?=RjFhL>x?u[- N;n{)! m4n-$*:3l7MozVK>S};fӭbc[=[ E`2ۗVޞ-{nB0J]U{N}7'ZןVH5VS@8e[9KI~.[MHDk.]8GƐ>ٔǯjȵWLyɕ~>M*~b>dv_> qgmz9c79Y+vjGHzY9d/ 瑼5c>`P$kyORZgZ Qs"rF?w>ndqlșӷ`M /3Bb>\uD?c44˛5~\O##2r0R'NyJC2 U [gSF<ݕd-FL-aw?>']6yb~Bg9^xgpXZ8rEVXfu:_[hNrCQC*=SI L ļɬoW|^{ːVb7ڽ7W9wUnUFC*ߎǪރ>~=\#wϔؿ0MͣG?.lG_Ѩe/uL'h)Oۜ&tçul[iw]g]y! 
y,\Y'pIq2vMjON!I>Xe 6Ҽ*"Ire%z~2[ן?=7o 7Woݻ\ n^,ߏКuOh(37ktvO+>Y_3j005=`*۴sVR?O (u/J~OH*(*Ôz8j\_!8N- AVL{1^Ít*.A>QKka3bC(AwS8~1f_hLmM_S?nHƭѺҠԾc~Wnz;Ӻ\Dd `7֭tuA/ѩ|^䩔ogݪkH.U2ܰ)VSr($rHbN:Df2mQD6v0pf1Ɍ6 NrbڻLϬň7&WZZ5Y'TMmo' v:G\a4ULE17# .K~=I=s8I4ĊH%fj\` Cuey5!~6[-W8j+c| L:RA,P:D)S Ziw$CxzEl1#^1hD=czq\7lR,8, 99a8L.+#'CYs 4JIŪZr<*͒(u;GA9 (H!q4E` bYF׶К#7*jte wP8G^$#`j!6&fy;2=[|JKi*'NZ4<{bA`0*җC=Zj* ?5h&3cy-0 7LM@ML &Gw1Zws4!暉zL}IJ2%Q@g.XJA9Uu< PqY0:Uv$XFoA*Z߶%9ouq$| *oPZz42^h zh;O};i<=RkfY،,R?!eFϺI%"r1sL{G{=2@)>Y>_髎Rebc%E&Lgsrp0j0Yk3>t™J!8 q! j&z˵Mi*LըHф0"`*O 8^.tv;%[6~q+,TL fdJnGբW}cXޖnEkV D)1H3!D " 5r>al33q Cs7 d~Yx21P*I8qC&ڎqnQp.W;Kddȣe"|$]UJ\U*#w:`) fROL@BT4zHhw׶K[^E%+>1&É UD$X>0fSVnܸɸu+M!IM(JitwI d@2L]hmW9d2^uF$AfPGn0!DuZhcԣV}H7l$j/Bg&5I gF*#>ƝekP~tww5rL| ϸTEy.Ml D&M &i Lyԇ=Om_Γ@W>\u*(D2͒='V#gQ!L壻[!IlzaxUw ?n"l QJ'9'`JmF?+e.3F{B6އ?zd4CvJ88W(ВE#t3 =ix`@y|YVLƔ~F A$Ykh_le B# NRtAtdRTu4J'.zy>.):?4%ʁ"d(#?4דj2X&űp)z5zC€邟 fs>2 [=yjZYo\۰x3I{CR\7+b$K1{ʚa7^G>[Φa}(3:<;LKBO!RxC V cO½^h /׎ay²L͓"D DEFh- 8ޠҠ' ʙZ@(+Hib2cVh0Mc1,‡4Wj@c7ÏJG&8uW`|pr?u>3YtQ:&f6 b?3FĀ<?{H/sڴhi>usnoF;_zT'1>~e1vU~jiL9+ʈP931FK)?2 p2:I$sɈ\Yy.sE@F,Sr3cv>]0"Ls|7/ }4[?-R71%I F XZdHp ܓm&fi$'Sb /_\)@@>"ԁ^?9HXx`b(3v@\MYǀ>Ae*;M_O8m@p@# WD> [)b`ءmY\C8&.s MYv%HjvfwK6NHtuvvu^$=1冕@ྉPFqW$/}"SՁl^ ȡ W]䍠"x aJhLgΩlg\֍K ;7BZnPpm:[8swJn( EH{ 6;0&\]X6 G-7ځxN\c#đa!%yE|gM.д+vϷHXō MܩB@M@>~WДĹ5+ mN*oLD"s;)!X&(Vz NS;@pcF `6a$$af [̙Ht£3m9?9XiT]nUJjQ0}㾕oG(obxi^.ǧ_ݜ~_ E Or&|96%utB׃ ʖePG$bJr~%EC"(O_ђ5ق[˷{?nφG@ K=STQfpAaOGlRL$aJC6&*TX!KRJsF ђF_?}jhj$mDOW'jpeOneL]5?(g*3͟nR8'OU=p+CL䦭1,8;͆ǟAg93ofktpuD!7;B_edKIӊFoǞ!aI%Eg儧Ki"{%rP,yÕo”6 ְRPƫSME٣TS*4t)]QZyh2OAWy*WkĔ-ҟ5b^9o dM"r}̾[䶺]F/t&oyf3܎vݑ觠iz}[b4Iގ(\8 _g-&ܭXk:_l75OwTO80'kW;!%F2^$z+ߧt!c ?I~NJ `{ӓЈqZ<Щ/griJc߭JC.P+Z v\XĬ 8e>8v h`/ XTn|"^Jl*EId)֑͕4|ʜ'05eVkGX2H"g,q6Q{-K1 *'$;R\,Ztؚۖa~N SL$ F \HBtC&'PbpZny/HHk~2]i!Vyy&#ZJ9I/CōULHB&hh) )פQ7#ݑsrj)uB\ıBRXz䤂pr5E5C ] _7Tkr.>\r?iPm ~kK'ϏOkL^s)1JG#T|T&\J X!7 ŬUB̞\;Np0֔ܽ yҵLQ&7_//;'6W|86+of4S__⹬Q2ewI8ȁK«n+A!rfWҵ<| J(g#E.1%}`r\d&Etxq 
%sg|I;PAiJ*OR3qόsY@[88<\{m_YC?ib6?wɕ'Too=e,Dy&H Җ/2APa+]!ζj3jpwTu& υ$$>D!\6ky) ]TJfVZUX$OLƆ\f$bEµo_۴ *$d?I+*jU5Ʀru(jBS;mH!γNzz8?ԫ/ -<7iv_Dq"J!-F d1W(Gƫ_0J esG25(_ -Țksfj-RƘ3<+.gKPA| !c† DU4/!cD,2`SڢH:UmU3+ 6B2j:(I"s,fY}\(>fȱi`: MSფQٜFp` (R leR#(Cp9V4Xjk`w2wR (ύUUS)lآ6΁mg~2Jk\Iŏp tMKN8-NjsRI\I/\Mzr[,ɚL'1!7ޙkXQn?UI'J,0v*$\r!0rz!Pm1E`exkzDw(=lf}bZw]6ZS+)"MG~UQiy%\vh\rr`uy6D( aE -BTjacE PhNѥ!:Q!u%"PwRe' y>gv+အ|{88 VmphmJ2UT4&(.L1s~hpuf?4x`ue>f!YTFJi lHhryan "Du"DŻ\цdt,zD "_s?d#˒iT"X&&ʐwN=! Fc- []$.CRLkybc3'U|-01`Q`stK"0"'+;LyB;&wN.TRJD>7a i3Vpt+pYZP\*0sQ"/lb!uk18?N $L1y`7 U&{M>]5{-L=)e-WG!f}b֦r0]JߣסV;yH)46仔[)0r\DrQr4LG|wq/8*uT Cԅ*W"8]E?:lbJ6gػ޶dWY̶~`Y$$4W[%k$N8TSL;DnI8"쯪]2 nwԸ:V.@b^5H~Oh[& x whś N_Pź?8m}hg (ě ;-$;OhҀS9V:dBF,4^<xQ p5J3dXt]ƫYbn~ǯ*2@x}΁Cߨީ\T>1z]P/h^C6G~vT^dӻ6ξ_ŶLbn6jv;@lcI^8hEFm#|b2f:]noPdm-2}E]p= kk^zJ:ШB Ik"Q8 v ̱X$uH;8iM n!Qe[HA73Dkg}|TX6>MD*WnqF7iTA1ǔ0 5`H4̹DkTV2jR MYgwB&[Ŭ)/eڿ>h) _Ucݔ]/UR]m]mB MRW~wVʇ/}g'nJ҃yHW1q  D'm&hRsN.NJ'5R )oY B%Jr +!A4 P~ld}v.Z-Rj5ˋKlXb"3c1 ԰)P{ޯ64-1v2aZe!N߾}kE4h/5 @D=ECab,\~ex`W{>,B 俦i$*!8X*B/0f wvH 3x4#^ ҄ofAVUVZOZlԵ%yMBFL64FizAܙ>I,tj.2@|C@wGqa-(B4,@'TH uL4g } AuEԁ9F Au3ϟ-w߄*x z}LS "@L3-ƶCg% bCwA[!"J|4A'|2z% AĊor})'lP/ $`Uѻu j6Ĩ|5^]k7]+N= jϤMY55wr5wކ~CdcPOMQFqku'ֹlW7Õ$zc*ḣz{s/hvg`m0);0&yGv34{62Mϥ Z8sH,!0\l >Ywrxk P^ dҏPE$,{ :;P& t!] Pe57ځ93V|q}trGqd`"5/c`ō uC*&P@8q Or'+h]o!!ŕi,PI'}q|dLY'S28:$M' H8^c`GN* yP^ŷ>;<eC5/ вk7Tv÷fܘmeOO.k5(A@k1a5{:41vd Y=ՒC]P%U|휚c,&]gp47c^_E0w7_x矎.}#XCk[Y0`cQ7Ld`瘫cv=]_nrEA1 ~O 'h5I'Ͼ!ʻ^i藲>!zzfԝ[hl,N>d(zsF O0D0sXfu XJ| D ET$gX/ x;YNLVTqbsҟ}\߫#Xv6C5!}AO%P`) V3KQqlx!dr'kWZKzXRvXx5\4gD淑W;8݅[ƫIf㱊ݼ>F3ypJ5 ml/W {R *AͶr@Kѝ\5RRRH-W4gPRv\%V,f' சOf¼5V) \n?Ȇ0Ƚl "9ѱ~2k=x(D 6yMSC쬚o0§ݦB%܉Ȃ1"lkD/$\w= 6|!Ld21Ļ>kJo>⹤(&*)*f ~[{J *u1ȓ^IV4̛' )sE{:$ni+s<>v,7Ԣu% yv_a'yyG!{iN‹/G{%6! 
a"z8\5qֳT>% Z(z&<@;p8&r}QEmu|~""o+{*ϦB4sa]t)?M8aoyu}û]I8jJcKTYdMRP/$8/QT6t~3J@l$WTHɭ]-gCűݵݣm6?!h'gwI R{V愭vߏ{?Ï{7IW>Go>{u5kn_d7XU娑%nKaDqX0=+xxMsKr&>CęGףǝہ>!MlLԾٌ)nAl6kwF%(ܻwk'wۓ`'gݫޠxo]|Hskz؍_1$ԣEh>#$Ve9@PwbUD8Nlv3g4STvZE]j^*KeI@,'9OIؓZ]I.;_\<ڕZZ,;vu_ ۵BL,ϖTf-wsv,ۆ'4OWk>_/goouIFkrIk댌 d43OGVs/ ٻZh?W_9,c 9|F#4h*'yG9S:)rbç 9p( iO 1pf&v!.u4J‘nSJΦ88*ehGh?sΎ|(B;*G;LZT^fyKj}V,i>*[ .x' #6$]-3-jI$؊IEҬ"ިYᦁr@QCý82B2zE9e`oS3NhAy%SMwhGqkFw_Iz}jnw&I$Am%–x[*ըJ49QDJK@_Qz('2Dy^Ɉ֥^QzZE]jȮ(h RQIT4R˪W^"JAddNJW:'JOIKMk+J/RTy\(۝q|7NBʦU{{2NX⥙ 5K]ԅU yJE{+Y_Ǒ|l\(tQr+EC> ^d\mg3ryf5CMAv8+); Z(a `m./귯 K[z;My3)O頃̀9s]+ Gjr饮$UGmSȓߕ))%8{oLd*Ç"ڣ=p(6z0>kHIfr=Su/u56 $\sS"=} ȵ܋ty5` $)xL1wJ\`3as[7PBfF'BIIJ9Bٻ\JFiDc1 ό˱(9ZGA&KC-Q(9Ea8?v +MWmaxLHDŽIe1 յ}٥mLD)ig>#Tg9r!<+sL 93/hN 4Y']p8ڬ;;C=L=HGQAdPp%-zH UFEZL4ȇDPRoF`f *ʏt@u[@MP=)O`R#Bߧc?\.{A[@B5NU#HN>GoD [|)X*ȺJW 9ûx =>eLxDwsEǯ}1NN1T4'WXXt6m*qg`Bp-6tmH9¹@m*A/E{ě8}JE]j`z겗 P lK vh]TpqF1NRzNg*!NPX>r~99\lҗ_+ܔRu޽~zxy_^G=l&Uɦelݒٯq,KC_YjW9Pr7SnK=!'h'LL>R{7n]uhΘ}Mk$Cz&[! SZM>CnF)x:HNo4ng\MxޭݗDB~r0xb\鹰j|> Y=n+XٵR/R;Uvar/home/core/zuul-output/logs/kubelet.log0000644000000000000000004754437515131177561017727 0ustar rootrootJan 12 13:06:40 crc systemd[1]: Starting Kubernetes Kubelet... 
Jan 12 13:06:40 crc restorecon[4576]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to
system_u:object_r:container_file_t:s0:c476,c820 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c440,c975 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c263,c871 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc 
restorecon[4576]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 12 13:06:40 crc 
restorecon[4576]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 12 
13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 12 13:06:40 crc 
restorecon[4576]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 12 13:06:40 crc 
restorecon[4576]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 
crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 12
13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 12 13:06:40 crc 
restorecon[4576]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc 
restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc 
restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 
crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc 
restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc 
restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to
system_u:object_r:container_file_t:s0:c247,c522 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 12 13:06:40 crc 
restorecon[4576]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 12 13:06:40 crc restorecon[4576]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 12 13:06:40 crc restorecon[4576]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 12 13:06:40 crc restorecon[4576]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 12 13:06:41 crc kubenswrapper[4580]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 12 13:06:41 crc kubenswrapper[4580]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 12 13:06:41 crc kubenswrapper[4580]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 12 13:06:41 crc kubenswrapper[4580]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 12 13:06:41 crc kubenswrapper[4580]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 12 13:06:41 crc kubenswrapper[4580]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.140144 4580 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.144617 4580 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.144648 4580 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.144652 4580 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.144656 4580 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.144661 4580 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.144665 4580 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.144669 4580 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.144675 4580 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.144680 4580 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.144686 4580 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.144691 4580 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.144695 4580 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.144699 4580 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.144703 4580 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.144706 4580 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.144711 4580 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.144717 4580 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.144722 4580 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.144726 4580 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.144730 4580 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.144733 4580 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.144737 4580 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.144742 4580 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.144746 4580 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.144750 4580 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.144753 4580 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.144757 4580 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.144760 4580 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.144764 4580 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.144767 4580 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.144770 4580 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.144774 4580 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.144777 4580 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.144786 4580 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.144789 4580 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.144793 4580 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.144796 4580 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.144799 4580 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.144803 4580 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.144806 4580 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.144812 4580 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.144815 4580 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.144820 4580 feature_gate.go:330] unrecognized feature gate: Example
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.144823 4580 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.144827 4580 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.144830 4580 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.144834 4580 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.144837 4580 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.144840 4580 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.144844 4580 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.144848 4580 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.144853 4580 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.144857 4580 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.144860 4580 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.144864 4580 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.144867 4580 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.144872 4580 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.144878 4580 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.144883 4580 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.144889 4580 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.144893 4580 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.144898 4580 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.144903 4580 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.144906 4580 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.144910 4580 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.144914 4580 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.144917 4580 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.144921 4580 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.144924 4580 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.144927 4580 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.144931 4580 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145582 4580 flags.go:64] FLAG: --address="0.0.0.0"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145597 4580 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145605 4580 flags.go:64] FLAG: --anonymous-auth="true"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145611 4580 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145616 4580 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145620 4580 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145625 4580 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145630 4580 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145635 4580 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145639 4580 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145647 4580 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145654 4580 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145660 4580 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145667 4580 flags.go:64] FLAG: --cgroup-root=""
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145672 4580 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145677 4580 flags.go:64] FLAG: --client-ca-file=""
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145682 4580 flags.go:64] FLAG: --cloud-config=""
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145687 4580 flags.go:64] FLAG: --cloud-provider=""
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145692 4580 flags.go:64] FLAG: --cluster-dns="[]"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145698 4580 flags.go:64] FLAG: --cluster-domain=""
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145702 4580 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145706 4580 flags.go:64] FLAG: --config-dir=""
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145710 4580 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145715 4580 flags.go:64] FLAG: --container-log-max-files="5"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145720 4580 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145724 4580 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145729 4580 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145733 4580 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145737 4580 flags.go:64] FLAG: --contention-profiling="false"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145741 4580 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145745 4580 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145750 4580 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145754 4580 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145759 4580 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145763 4580 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145767 4580 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145772 4580 flags.go:64] FLAG: --enable-load-reader="false"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145775 4580 flags.go:64] FLAG: --enable-server="true"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145780 4580 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145785 4580 flags.go:64] FLAG: --event-burst="100"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145789 4580 flags.go:64] FLAG: --event-qps="50"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145794 4580 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145798 4580 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145802 4580 flags.go:64] FLAG: --eviction-hard=""
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145808 4580 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145812 4580 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145816 4580 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145820 4580 flags.go:64] FLAG: --eviction-soft=""
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145824 4580 flags.go:64] FLAG: --eviction-soft-grace-period=""
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145828 4580 flags.go:64] FLAG: --exit-on-lock-contention="false"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145832 4580 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145836 4580 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145840 4580 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145844 4580 flags.go:64] FLAG: --fail-swap-on="true"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145848 4580 flags.go:64] FLAG: --feature-gates=""
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145852 4580 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145857 4580 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145862 4580 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145867 4580 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145871 4580 flags.go:64] FLAG: --healthz-port="10248"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145875 4580 flags.go:64] FLAG: --help="false"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145879 4580 flags.go:64] FLAG: --hostname-override=""
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145883 4580 flags.go:64] FLAG: --housekeeping-interval="10s"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145887 4580 flags.go:64] FLAG: --http-check-frequency="20s"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145891 4580 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145895 4580 flags.go:64] FLAG: --image-credential-provider-config=""
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145899 4580 flags.go:64] FLAG: --image-gc-high-threshold="85"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145903 4580 flags.go:64] FLAG: --image-gc-low-threshold="80"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145907 4580 flags.go:64] FLAG: --image-service-endpoint=""
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145911 4580 flags.go:64] FLAG: --kernel-memcg-notification="false"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145914 4580 flags.go:64] FLAG: --kube-api-burst="100"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145918 4580 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145922 4580 flags.go:64] FLAG: --kube-api-qps="50"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145926 4580 flags.go:64] FLAG: --kube-reserved=""
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145930 4580 flags.go:64] FLAG: --kube-reserved-cgroup=""
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145934 4580 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145938 4580 flags.go:64] FLAG: --kubelet-cgroups=""
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145942 4580 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145945 4580 flags.go:64] FLAG: --lock-file=""
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145949 4580 flags.go:64] FLAG: --log-cadvisor-usage="false"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145953 4580 flags.go:64] FLAG: --log-flush-frequency="5s"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145957 4580 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145968 4580 flags.go:64] FLAG: --log-json-split-stream="false"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145972 4580 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145976 4580 flags.go:64] FLAG: --log-text-split-stream="false"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145979 4580 flags.go:64] FLAG: --logging-format="text"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145983 4580 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145987 4580 flags.go:64] FLAG: --make-iptables-util-chains="true"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145991 4580 flags.go:64] FLAG: --manifest-url=""
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.145995 4580 flags.go:64] FLAG: --manifest-url-header=""
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.146000 4580 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.146006 4580 flags.go:64] FLAG: --max-open-files="1000000"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.146012 4580 flags.go:64] FLAG: --max-pods="110"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.146016 4580 flags.go:64] FLAG: --maximum-dead-containers="-1"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.146022 4580 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.146027 4580 flags.go:64] FLAG: --memory-manager-policy="None"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.146032 4580 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.146037 4580 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.146041 4580 flags.go:64] FLAG: --node-ip="192.168.126.11"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.146045 4580 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.146059 4580 flags.go:64] FLAG: --node-status-max-images="50"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.146063 4580 flags.go:64] FLAG: --node-status-update-frequency="10s"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.146067 4580 flags.go:64] FLAG: --oom-score-adj="-999"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.146071 4580 flags.go:64] FLAG: --pod-cidr=""
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.146075 4580 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.146082 4580 flags.go:64] FLAG: --pod-manifest-path=""
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.146086 4580 flags.go:64] FLAG: --pod-max-pids="-1"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.146091 4580 flags.go:64] FLAG: --pods-per-core="0"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.146094 4580 flags.go:64] FLAG: --port="10250"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.146098 4580 flags.go:64] FLAG:
--protect-kernel-defaults="false" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.146121 4580 flags.go:64] FLAG: --provider-id="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.146125 4580 flags.go:64] FLAG: --qos-reserved="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.146129 4580 flags.go:64] FLAG: --read-only-port="10255" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.146133 4580 flags.go:64] FLAG: --register-node="true" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.146137 4580 flags.go:64] FLAG: --register-schedulable="true" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.146140 4580 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.146147 4580 flags.go:64] FLAG: --registry-burst="10" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.146151 4580 flags.go:64] FLAG: --registry-qps="5" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.146154 4580 flags.go:64] FLAG: --reserved-cpus="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.146159 4580 flags.go:64] FLAG: --reserved-memory="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.146164 4580 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.146168 4580 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.146172 4580 flags.go:64] FLAG: --rotate-certificates="false" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.146175 4580 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.146179 4580 flags.go:64] FLAG: --runonce="false" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.146183 4580 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.146186 4580 flags.go:64] FLAG: 
--runtime-request-timeout="2m0s" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.146194 4580 flags.go:64] FLAG: --seccomp-default="false" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.146198 4580 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.146202 4580 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.146206 4580 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.146209 4580 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.146214 4580 flags.go:64] FLAG: --storage-driver-password="root" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.146217 4580 flags.go:64] FLAG: --storage-driver-secure="false" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.146221 4580 flags.go:64] FLAG: --storage-driver-table="stats" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.146225 4580 flags.go:64] FLAG: --storage-driver-user="root" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.146230 4580 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.146234 4580 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.146238 4580 flags.go:64] FLAG: --system-cgroups="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.146242 4580 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.146249 4580 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.146252 4580 flags.go:64] FLAG: --tls-cert-file="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.146256 4580 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 
13:06:41.146261 4580 flags.go:64] FLAG: --tls-min-version="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.146264 4580 flags.go:64] FLAG: --tls-private-key-file="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.146268 4580 flags.go:64] FLAG: --topology-manager-policy="none" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.146272 4580 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.146276 4580 flags.go:64] FLAG: --topology-manager-scope="container" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.146280 4580 flags.go:64] FLAG: --v="2" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.146285 4580 flags.go:64] FLAG: --version="false" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.146289 4580 flags.go:64] FLAG: --vmodule="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.146294 4580 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.146297 4580 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.146403 4580 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.146408 4580 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.146412 4580 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.146416 4580 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.146419 4580 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.146423 4580 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.146430 
4580 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.146433 4580 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.146437 4580 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.146440 4580 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.146444 4580 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.146448 4580 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.146451 4580 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.146455 4580 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.146458 4580 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.146461 4580 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.146465 4580 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.146468 4580 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.146472 4580 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.146476 4580 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.146480 4580 
feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.146484 4580 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.146488 4580 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.146494 4580 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.146499 4580 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.146503 4580 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.146508 4580 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.146512 4580 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.146515 4580 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.146518 4580 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.146522 4580 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.146526 4580 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.146529 4580 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.146532 4580 feature_gate.go:330] unrecognized feature gate: Example Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.146535 4580 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 12 13:06:41 crc 
kubenswrapper[4580]: W0112 13:06:41.146538 4580 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.146541 4580 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.146545 4580 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.146549 4580 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.146553 4580 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.146556 4580 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.146559 4580 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.146562 4580 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.146565 4580 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.146569 4580 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.146573 4580 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.146577 4580 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.146581 4580 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.146584 4580 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.146587 4580 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.146591 4580 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.146594 4580 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.146597 4580 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.146600 4580 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.146604 4580 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.146608 4580 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.146612 4580 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.146615 4580 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.146618 4580 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 12 13:06:41 crc 
kubenswrapper[4580]: W0112 13:06:41.146621 4580 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.146625 4580 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.146630 4580 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.146634 4580 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.146639 4580 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.146642 4580 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.146646 4580 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.146649 4580 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.146653 4580 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.146657 4580 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.146660 4580 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.146664 4580 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.146670 4580 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.155628 4580 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.155662 4580 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.155735 4580 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.155751 4580 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.155756 4580 feature_gate.go:330] unrecognized feature gate: Example Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.155760 4580 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.155764 4580 
feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.155768 4580 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.155774 4580 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.155779 4580 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.155784 4580 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.155788 4580 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.155794 4580 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.155799 4580 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.155804 4580 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.155808 4580 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.155812 4580 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.155817 4580 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.155822 4580 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.155826 4580 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.155830 4580 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.155833 4580 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.155837 4580 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.155841 4580 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.155844 4580 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.155848 4580 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.155852 4580 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.155855 4580 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.155859 4580 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.155863 4580 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.155867 4580 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.155870 4580 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.155874 
4580 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.155877 4580 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.155881 4580 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.155884 4580 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.155889 4580 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.155893 4580 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.155897 4580 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.155901 4580 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.155905 4580 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.155908 4580 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.155913 4580 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.155917 4580 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.155923 4580 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.155928 4580 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.155933 4580 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.155937 4580 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.155941 4580 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.155946 4580 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.155950 4580 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.155954 4580 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.155958 4580 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.155961 4580 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.155964 4580 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.155968 4580 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.155971 4580 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.155974 4580 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.155977 4580 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.155981 4580 feature_gate.go:330] unrecognized 
feature gate: MultiArchInstallGCP Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.155984 4580 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.155987 4580 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.155990 4580 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.155993 4580 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.155997 4580 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156000 4580 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156003 4580 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156006 4580 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156009 4580 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156013 4580 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156017 4580 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156021 4580 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156026 4580 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.156032 4580 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156174 4580 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156182 4580 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156186 4580 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156191 4580 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156195 4580 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156198 4580 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156202 4580 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156205 4580 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156210 4580 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156213 4580 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156217 4580 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156220 4580 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156224 4580 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156227 4580 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156231 4580 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156234 4580 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156238 4580 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156241 4580 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156245 4580 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156248 4580 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156252 4580 feature_gate.go:330] unrecognized feature gate: Example
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156256 4580 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156259 4580 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156264 4580 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156268 4580 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156272 4580 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156275 4580 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156282 4580 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156286 4580 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156290 4580 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156293 4580 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156297 4580 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156301 4580 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156306 4580 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156311 4580 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156315 4580 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156320 4580 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156324 4580 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156328 4580 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156333 4580 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156337 4580 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156341 4580 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156345 4580 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156348 4580 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156352 4580 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156355 4580 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156358 4580 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156361 4580 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156365 4580 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156368 4580 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156372 4580 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156375 4580 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156379 4580 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156382 4580 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156391 4580 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156395 4580 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156398 4580 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156403 4580 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156407 4580 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156411 4580 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156415 4580 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156419 4580 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156422 4580 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156426 4580 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156429 4580 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156433 4580 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156436 4580 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156440 4580 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156443 4580 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156447 4580 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.156450 4580 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.156456 4580 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.156613 4580 server.go:940] "Client rotation is on, will bootstrap in background"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.159356 4580 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.159436 4580 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.161366 4580 server.go:997] "Starting client certificate rotation"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.161400 4580 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.161630 4580 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-26 22:39:34.337795291 +0000 UTC
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.161741 4580 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.172759 4580 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 12 13:06:41 crc kubenswrapper[4580]: E0112 13:06:41.175003 4580 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.25.161:6443: connect: connection refused" logger="UnhandledError"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.175312 4580 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.185607 4580 log.go:25] "Validated CRI v1 runtime API"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.202047 4580 log.go:25] "Validated CRI v1 image API"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.203824 4580 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.207098 4580 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-12-13-03-18-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.207163 4580 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:49 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/containers/storage/overlay-containers/75d81934760b26101869fbd8e4b5954c62b019c1cc3e5a0c9f82ed8de46b3b22/userdata/shm:{mountpoint:/var/lib/containers/storage/overlay-containers/75d81934760b26101869fbd8e4b5954c62b019c1cc3e5a0c9f82ed8de46b3b22/userdata/shm major:0 minor:42 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:50 fsType:tmpfs blockSize:0} overlay_0-43:{mountpoint:/var/lib/containers/storage/overlay/94b752e0a51c0134b00ddef6dc7a933a9d7c1d9bdc88a18dae4192a0d557d623/merged major:0 minor:43 fsType:overlay blockSize:0}]
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.221550 4580 manager.go:217] Machine: {Timestamp:2026-01-12 13:06:41.219762632 +0000 UTC m=+0.263981332 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2445406 MemoryCapacity:33654116352 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:f50d9485-f990-498d-a5ee-4bb4dd1663df BootID:0b4cb507-f154-474c-bea1-057456e7be91 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827056128 Type:vfs Inodes:4108168 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827060224 Type:vfs Inodes:1048576 HasInodes:true} {Device:/var/lib/containers/storage/overlay-containers/75d81934760b26101869fbd8e4b5954c62b019c1cc3e5a0c9f82ed8de46b3b22/userdata/shm DeviceMajor:0 DeviceMinor:42 Capacity:65536000 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:49 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:50 Capacity:1073741824 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:overlay_0-43 DeviceMajor:0 DeviceMinor:43 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:fd:52:df Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:enp3s0 MacAddress:fa:16:3e:fd:52:df Speed:-1 Mtu:1500} {Name:enp7s0 MacAddress:fa:16:3e:cf:74:45 Speed:-1 Mtu:1440} {Name:enp7s0.20 MacAddress:52:54:00:63:86:4c Speed:-1 Mtu:1436} {Name:enp7s0.21 MacAddress:52:54:00:89:57:c8 Speed:-1 Mtu:1436} {Name:enp7s0.22 MacAddress:52:54:00:a0:a4:f6 Speed:-1 Mtu:1436} {Name:eth10 MacAddress:26:65:ea:7f:69:b1 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:42:a0:99:e9:23:69 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654116352 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:65536 Type:Data Level:1} {Id:0 Size:65536 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:65536 Type:Data Level:1} {Id:1 Size:65536 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:65536 Type:Data Level:1} {Id:10 Size:65536 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:65536 Type:Data Level:1} {Id:11 Size:65536 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:65536 Type:Data Level:1} {Id:2 Size:65536 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:65536 Type:Data Level:1} {Id:3 Size:65536 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:65536 Type:Data Level:1} {Id:4 Size:65536 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:65536 Type:Data Level:1} {Id:5 Size:65536 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:65536 Type:Data Level:1} {Id:6 Size:65536 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:65536 Type:Data Level:1} {Id:7 Size:65536 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:65536 Type:Data Level:1} {Id:8 Size:65536 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:65536 Type:Data Level:1} {Id:9 Size:65536 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.221763 4580 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.221877 4580 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.222697 4580 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.222954 4580 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.223033 4580 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.223331 4580 topology_manager.go:138] "Creating topology manager with none policy"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.223394 4580 container_manager_linux.go:303] "Creating device plugin manager"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.223693 4580 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.223765 4580 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.223968 4580 state_mem.go:36] "Initialized new in-memory state store"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.224378 4580 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.226076 4580 kubelet.go:418] "Attempting to sync node with API server"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.226165 4580 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.226273 4580 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.226349 4580 kubelet.go:324] "Adding apiserver pod source"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.226411 4580 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.228851 4580 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.229193 4580 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.25.161:6443: connect: connection refused
Jan 12 13:06:41 crc kubenswrapper[4580]: E0112 13:06:41.229257 4580 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.25.161:6443: connect: connection refused" logger="UnhandledError"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.229495 4580 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.229442 4580 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 192.168.25.161:6443: connect: connection refused
Jan 12 13:06:41 crc kubenswrapper[4580]: E0112 13:06:41.229642 4580 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 192.168.25.161:6443: connect: connection refused" logger="UnhandledError"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.231188 4580 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.232176 4580 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.232249 4580 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.232304 4580 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.232347 4580 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.232403 4580 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.232475 4580 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.232522 4580 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.232566 4580 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.232610 4580 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.232656 4580 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.232713 4580 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.232754 4580 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.233205 4580 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.233722 4580 server.go:1280] "Started kubelet"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.234479 4580 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.234596 4580 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 192.168.25.161:6443: connect: connection refused
Jan 12 13:06:41 crc systemd[1]: Started Kubernetes Kubelet.
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.235009 4580 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.235048 4580 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.235864 4580 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.235894 4580 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.236220 4580 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 19:22:06.555476649 +0000 UTC
Jan 12 13:06:41 crc kubenswrapper[4580]: E0112 13:06:41.238619 4580 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.238971 4580 volume_manager.go:287] "The desired_state_of_world populator starts"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.238989 4580 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.239198 4580 factory.go:55] Registering systemd factory
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.239510 4580 factory.go:221] Registration of the systemd container factory successfully
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.239818 4580 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.25.161:6443: connect: connection refused
Jan 12 13:06:41 crc kubenswrapper[4580]: E0112 13:06:41.239871 4580 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.25.161:6443: connect: connection refused" logger="UnhandledError"
Jan 12 13:06:41 crc kubenswrapper[4580]: E0112 13:06:41.241185 4580 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.25.161:6443: connect: connection refused" interval="200ms"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.241868 4580 factory.go:153] Registering CRI-O factory
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.241882 4580 factory.go:221] Registration of the crio container factory successfully
Jan 12 13:06:41 crc kubenswrapper[4580]: E0112 13:06:41.239436 4580 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 192.168.25.161:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1889fda2112367b4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-12 13:06:41.233692596 +0000 UTC m=+0.277911286,LastTimestamp:2026-01-12 13:06:41.233692596 +0000 UTC m=+0.277911286,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.242552 4580 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.242897 4580 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.242933 4580 factory.go:103] Registering Raw factory
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.242952 4580 manager.go:1196] Started watching for new ooms in manager
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.244678 4580 manager.go:319] Starting recovery of all containers
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.245629 4580 server.go:460] "Adding debug handlers to kubelet server"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.250884 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.251022 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.251083 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.251159 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.251221 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.251280 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.251339 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.251403 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.251462 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.251524 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.251575 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.251629 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.251879 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.251931 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.251978 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.252027 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.252079 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.252154 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.252206 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.252256 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.252314 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.252362 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.252436 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d"
volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.252494 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.252551 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.252602 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.252661 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.252721 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.252970 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 12 
13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.253026 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.253081 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.253157 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.253218 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.253278 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.253330 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.253381 
4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.253448 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.253502 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.253599 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.253659 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.253712 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.253761 4580 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.253808 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.253855 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.253907 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.253958 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.254013 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.254063 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" 
volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.254158 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.254746 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.254806 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.254855 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.254924 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.254980 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" 
volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.255039 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.255089 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.255162 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.255230 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.255281 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.255328 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.255394 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.255446 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.255502 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.255550 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.255598 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.255655 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" 
volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.255710 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.255760 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.255812 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.255863 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.255917 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.255965 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" 
seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.256014 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.256063 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.256141 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.256194 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.256255 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.256310 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.256361 
4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.256465 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.256518 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.256534 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.256550 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.256563 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.256577 4580 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.256591 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.256608 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.256622 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.256635 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.256651 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.256663 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.256676 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.256687 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.256700 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.256717 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.256730 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.256745 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" 
seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.256759 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.256770 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.256781 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.256794 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.256807 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.256820 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.256832 4580 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.256863 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.256882 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.256903 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.256918 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.256936 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.256950 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.256964 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.256980 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.256994 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.257006 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.257017 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.257027 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.257038 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.257051 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.257061 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.257072 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.257082 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.257094 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.257120 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.257131 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.257143 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.257157 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.257170 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.257183 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" 
volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.257198 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.257211 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.257222 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.257235 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.257246 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.257256 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" 
seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.257268 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.257280 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.257293 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.257303 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.257315 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.257329 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.257340 4580 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.257355 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.257367 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.257382 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.257405 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.257414 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.257426 4580 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.257437 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.257446 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.257458 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.257469 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.257481 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.257492 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.257503 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.257512 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.257521 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.257531 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.257553 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.257565 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" 
volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.257576 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.257590 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.257600 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.257612 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.257623 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.257635 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.257650 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.257661 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.257672 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.257685 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.257697 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.257709 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" 
volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.257720 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.257734 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.257744 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.257755 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.257766 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.257781 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.265008 4580 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.265127 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.265157 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.265190 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.265215 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.265241 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.265257 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.265276 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.265297 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.265319 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.265338 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.265361 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.265376 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.265408 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.265426 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.265437 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.265462 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.265486 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" 
seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.265506 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.265522 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.265541 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.265567 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.265592 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.265633 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.265672 4580 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.265721 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.265764 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.265797 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.265833 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.265874 4580 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.265900 4580 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.265920 4580 reconstruct.go:97] "Volume reconstruction finished"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.265938 4580 reconciler.go:26] "Reconciler: start to sync state"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.270263 4580 manager.go:324] Recovery completed
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.278435 4580 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.278970 4580 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.280227 4580 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.280268 4580 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.280448 4580 kubelet.go:2335] "Starting kubelet main sync loop"
Jan 12 13:06:41 crc kubenswrapper[4580]: E0112 13:06:41.280491 4580 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.280376 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.280546 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.280557 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.280919 4580 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.25.161:6443: connect: connection refused
Jan 12 13:06:41 crc kubenswrapper[4580]: E0112 13:06:41.280981 4580 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.25.161:6443: connect: connection refused" logger="UnhandledError"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.281664 4580 cpu_manager.go:225] "Starting CPU manager" policy="none"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.281684 4580 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.281699 4580 state_mem.go:36] "Initialized new in-memory state store"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.285222 4580 policy_none.go:49] "None policy: Start"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.285725 4580 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.285758 4580 state_mem.go:35] "Initializing new in-memory state store"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.335942 4580 manager.go:334] "Starting Device Plugin manager"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.336237 4580 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.336251 4580 server.go:79] "Starting device plugin registration server"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.336528 4580 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.336546 4580 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.336931 4580 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.337007 4580 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.337019 4580 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 12 13:06:41 crc kubenswrapper[4580]: E0112 13:06:41.344959 4580 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.380809 4580 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"]
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.380982 4580 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.382547 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.382583 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.382595 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.382738 4580 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.383332 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.383377 4580 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.383512 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.383564 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.383577 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.383815 4580 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.384012 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.384076 4580 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.384308 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.384357 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.384375 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.384583 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.384622 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.384634 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.384803 4580 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.384936 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.384986 4580 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.385595 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.385621 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.385633 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.385834 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.385884 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.385896 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.386305 4580 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.386424 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.386455 4580 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.386570 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.386597 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.386605 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.387138 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.387160 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.387147 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.387172 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.387202 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.387266 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.387526 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.387566 4580 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.388413 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.388445 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.388454 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.436993 4580 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.437716 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.437775 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.437792 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.437816 4580 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 12 13:06:41 crc kubenswrapper[4580]: E0112 13:06:41.438377 4580 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 192.168.25.161:6443: connect: connection refused" node="crc"
Jan 12 13:06:41 crc kubenswrapper[4580]: E0112 13:06:41.441859 4580 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.25.161:6443: connect: connection refused" interval="400ms"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.467901 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.467964 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.467994 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.468045 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.468079 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.468097 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.468129 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.468150 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.468182 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.468206 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.468225 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.468307 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.468436 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.468511 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.468568 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.569553 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.569591 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.569614 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.569640 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.569656 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.569673 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.569690 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.569708 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.569726 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.569747 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.569765 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.569781 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.569797 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.569813 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.569833 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.570035 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.570083 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.570129 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.570153 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.570178 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.570200 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.570217 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.570238 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.570258 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.570279 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.570297 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.570318 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.570339 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.570360 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.570381 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.638740 4580 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.639625 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.639677 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.639696 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.639761 4580 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 12 13:06:41 crc kubenswrapper[4580]: E0112 13:06:41.640130 4580 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 192.168.25.161:6443: connect: connection refused" node="crc"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.711777 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.727693 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.730907 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-df307f9e4f32a17c7b6def22c116896674fc0c8ec230688c7741c938a3976765 WatchSource:0}: Error finding container df307f9e4f32a17c7b6def22c116896674fc0c8ec230688c7741c938a3976765: Status 404 returned error can't find the container with id df307f9e4f32a17c7b6def22c116896674fc0c8ec230688c7741c938a3976765
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.732959 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.745463 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-7cb9a54d3a079b323eab59fd6aaf3cf35bfdf11e30467a99f0d6a33423024d5e WatchSource:0}: Error finding container 7cb9a54d3a079b323eab59fd6aaf3cf35bfdf11e30467a99f0d6a33423024d5e: Status 404 returned error can't find the container with id 7cb9a54d3a079b323eab59fd6aaf3cf35bfdf11e30467a99f0d6a33423024d5e
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.746924 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 12 13:06:41 crc kubenswrapper[4580]: I0112 13:06:41.752229 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.757117 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-df568ae6ab79f9c5edebb80fc378ee75194ea12ef06d7a64e114605b94fed6f4 WatchSource:0}: Error finding container df568ae6ab79f9c5edebb80fc378ee75194ea12ef06d7a64e114605b94fed6f4: Status 404 returned error can't find the container with id df568ae6ab79f9c5edebb80fc378ee75194ea12ef06d7a64e114605b94fed6f4
Jan 12 13:06:41 crc kubenswrapper[4580]: W0112 13:06:41.769301 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-5bd874c7cab30a72d49e88e0ca953ba6e624e2f23e536cd5e932a63f9a581452 WatchSource:0}: Error finding container 5bd874c7cab30a72d49e88e0ca953ba6e624e2f23e536cd5e932a63f9a581452: Status 404 returned error can't find the container with id 5bd874c7cab30a72d49e88e0ca953ba6e624e2f23e536cd5e932a63f9a581452
Jan 12 13:06:41 crc kubenswrapper[4580]: E0112 13:06:41.843121 4580 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.25.161:6443: connect: connection refused" interval="800ms"
Jan 12 13:06:42 crc kubenswrapper[4580]: I0112 13:06:42.040234 4580 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 12 13:06:42 crc kubenswrapper[4580]: I0112 13:06:42.041616 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 12 13:06:42 crc kubenswrapper[4580]: I0112 13:06:42.041647 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 12 13:06:42 crc kubenswrapper[4580]: I0112 13:06:42.041657 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 12 13:06:42 crc kubenswrapper[4580]: I0112 13:06:42.041678 4580 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 12 13:06:42 crc kubenswrapper[4580]: E0112 13:06:42.042042 4580 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 192.168.25.161:6443: connect: connection refused" node="crc"
Jan 12 13:06:42 crc kubenswrapper[4580]: W0112 13:06:42.150017 4580 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.25.161:6443: connect: connection refused
Jan 12 13:06:42 crc kubenswrapper[4580]: E0112 13:06:42.150127 4580 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.25.161:6443: connect: connection refused" logger="UnhandledError"
Jan 12 13:06:42 crc kubenswrapper[4580]: W0112 13:06:42.232650 4580 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 192.168.25.161:6443: connect: connection refused
Jan 12 13:06:42 crc kubenswrapper[4580]: E0112 13:06:42.233030 4580 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 192.168.25.161:6443: connect: connection refused" logger="UnhandledError"
Jan 12 13:06:42 crc kubenswrapper[4580]: I0112 13:06:42.235693 4580 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 192.168.25.161:6443: connect: connection refused
Jan 12 13:06:42 crc kubenswrapper[4580]: I0112 13:06:42.236625 4580 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 09:52:19.045400138 +0000 UTC
Jan 12 13:06:42 crc kubenswrapper[4580]: I0112 13:06:42.288010 4580 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="200ede5d7f69bb74d8e7d1b5081850d73057f7aef07049cab7a4dd1382de0cfe" exitCode=0
Jan 12 13:06:42 crc kubenswrapper[4580]: I0112 13:06:42.288088 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"200ede5d7f69bb74d8e7d1b5081850d73057f7aef07049cab7a4dd1382de0cfe"}
Jan 12 13:06:42 crc kubenswrapper[4580]: I0112 13:06:42.288194 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7cb9a54d3a079b323eab59fd6aaf3cf35bfdf11e30467a99f0d6a33423024d5e"}
Jan 12 13:06:42 crc kubenswrapper[4580]: I0112 13:06:42.288312 4580 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 12 13:06:42 crc kubenswrapper[4580]: I0112 13:06:42.289616 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 12 13:06:42 crc kubenswrapper[4580]: I0112 13:06:42.289661 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 12 13:06:42 crc kubenswrapper[4580]: I0112
13:06:42.289675 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:06:42 crc kubenswrapper[4580]: I0112 13:06:42.289952 4580 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="f66cd35f34bfe89fb3152f0fbd65fc1dac84795ef724fb2b38ea49da1455c5d1" exitCode=0 Jan 12 13:06:42 crc kubenswrapper[4580]: I0112 13:06:42.290027 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"f66cd35f34bfe89fb3152f0fbd65fc1dac84795ef724fb2b38ea49da1455c5d1"} Jan 12 13:06:42 crc kubenswrapper[4580]: I0112 13:06:42.290059 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"df307f9e4f32a17c7b6def22c116896674fc0c8ec230688c7741c938a3976765"} Jan 12 13:06:42 crc kubenswrapper[4580]: I0112 13:06:42.290182 4580 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 12 13:06:42 crc kubenswrapper[4580]: I0112 13:06:42.291006 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:06:42 crc kubenswrapper[4580]: I0112 13:06:42.291037 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:06:42 crc kubenswrapper[4580]: I0112 13:06:42.291049 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:06:42 crc kubenswrapper[4580]: I0112 13:06:42.291280 4580 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="485ad5c9f5a1a0f3219b48e7c2b703985f426f1e068b12812f208e5843a98224" exitCode=0 Jan 12 13:06:42 crc 
kubenswrapper[4580]: I0112 13:06:42.291320 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"485ad5c9f5a1a0f3219b48e7c2b703985f426f1e068b12812f208e5843a98224"} Jan 12 13:06:42 crc kubenswrapper[4580]: I0112 13:06:42.291339 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"5bd874c7cab30a72d49e88e0ca953ba6e624e2f23e536cd5e932a63f9a581452"} Jan 12 13:06:42 crc kubenswrapper[4580]: I0112 13:06:42.291426 4580 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 12 13:06:42 crc kubenswrapper[4580]: I0112 13:06:42.292145 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:06:42 crc kubenswrapper[4580]: I0112 13:06:42.292175 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:06:42 crc kubenswrapper[4580]: I0112 13:06:42.292184 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:06:42 crc kubenswrapper[4580]: I0112 13:06:42.293163 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"bc8b55ba464a72a72e6361e6847c4e8c8b27f317e8eba5d95923fbaf62589880"} Jan 12 13:06:42 crc kubenswrapper[4580]: I0112 13:06:42.293199 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"df568ae6ab79f9c5edebb80fc378ee75194ea12ef06d7a64e114605b94fed6f4"} Jan 12 13:06:42 crc 
kubenswrapper[4580]: I0112 13:06:42.294840 4580 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23" exitCode=0 Jan 12 13:06:42 crc kubenswrapper[4580]: I0112 13:06:42.294885 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23"} Jan 12 13:06:42 crc kubenswrapper[4580]: I0112 13:06:42.294906 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b566313717176796b8edbe68d8ff3cfe27d0209c33daaff44adb6838581bc826"} Jan 12 13:06:42 crc kubenswrapper[4580]: I0112 13:06:42.295048 4580 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 12 13:06:42 crc kubenswrapper[4580]: I0112 13:06:42.295807 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:06:42 crc kubenswrapper[4580]: I0112 13:06:42.295827 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:06:42 crc kubenswrapper[4580]: I0112 13:06:42.295836 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:06:42 crc kubenswrapper[4580]: I0112 13:06:42.297230 4580 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 12 13:06:42 crc kubenswrapper[4580]: I0112 13:06:42.297952 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:06:42 crc kubenswrapper[4580]: I0112 13:06:42.297985 4580 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:06:42 crc kubenswrapper[4580]: I0112 13:06:42.298000 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:06:42 crc kubenswrapper[4580]: W0112 13:06:42.384285 4580 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.25.161:6443: connect: connection refused Jan 12 13:06:42 crc kubenswrapper[4580]: E0112 13:06:42.384360 4580 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.25.161:6443: connect: connection refused" logger="UnhandledError" Jan 12 13:06:42 crc kubenswrapper[4580]: E0112 13:06:42.644787 4580 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.25.161:6443: connect: connection refused" interval="1.6s" Jan 12 13:06:42 crc kubenswrapper[4580]: W0112 13:06:42.688768 4580 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.25.161:6443: connect: connection refused Jan 12 13:06:42 crc kubenswrapper[4580]: E0112 13:06:42.688887 4580 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.25.161:6443: connect: 
connection refused" logger="UnhandledError" Jan 12 13:06:42 crc kubenswrapper[4580]: I0112 13:06:42.842864 4580 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 12 13:06:42 crc kubenswrapper[4580]: I0112 13:06:42.844142 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:06:42 crc kubenswrapper[4580]: I0112 13:06:42.844185 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:06:42 crc kubenswrapper[4580]: I0112 13:06:42.844195 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:06:42 crc kubenswrapper[4580]: I0112 13:06:42.844224 4580 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 12 13:06:42 crc kubenswrapper[4580]: E0112 13:06:42.844672 4580 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 192.168.25.161:6443: connect: connection refused" node="crc" Jan 12 13:06:43 crc kubenswrapper[4580]: I0112 13:06:43.237739 4580 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 03:37:08.096211462 +0000 UTC Jan 12 13:06:43 crc kubenswrapper[4580]: I0112 13:06:43.302131 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"00bb60e0955774504f186a916e89495432d2ea6a6b01cadbbe0cc6871383a030"} Jan 12 13:06:43 crc kubenswrapper[4580]: I0112 13:06:43.302172 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"259d2e205fd4a46e432a91b0e09646a58b44d6da55b06c6d4ac87010c85babc4"} Jan 12 13:06:43 crc kubenswrapper[4580]: I0112 13:06:43.302181 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9b1b813e14b2e613be951c247a67eb9b5b29604c639ec2c8a26c652911e0a342"} Jan 12 13:06:43 crc kubenswrapper[4580]: I0112 13:06:43.302272 4580 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 12 13:06:43 crc kubenswrapper[4580]: I0112 13:06:43.303047 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:06:43 crc kubenswrapper[4580]: I0112 13:06:43.303072 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:06:43 crc kubenswrapper[4580]: I0112 13:06:43.303082 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:06:43 crc kubenswrapper[4580]: I0112 13:06:43.305275 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e0c7ac25add51f8a9be790b9d47bc39155d83c4da0f3b241897d1395686feb68"} Jan 12 13:06:43 crc kubenswrapper[4580]: I0112 13:06:43.305320 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9eeac0b697ceba82e51d043f12dcf4c6f0028990416b1ee40c5181232d962192"} Jan 12 13:06:43 crc kubenswrapper[4580]: I0112 13:06:43.305333 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"80ca0769a1431fd4c134322feb11db7e54dd85e8f6b18a0ea43da48fe9b05005"} Jan 12 13:06:43 crc kubenswrapper[4580]: I0112 13:06:43.305344 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d3c620e4b41d6183e427d9b95acc0e6e20f24998d210c706d93d0e8b08def41b"} Jan 12 13:06:43 crc kubenswrapper[4580]: I0112 13:06:43.305356 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e2262814ad3b77a7aecef6dc39226a540c7d7839576606e11c4765c858e81834"} Jan 12 13:06:43 crc kubenswrapper[4580]: I0112 13:06:43.305479 4580 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 12 13:06:43 crc kubenswrapper[4580]: I0112 13:06:43.306152 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:06:43 crc kubenswrapper[4580]: I0112 13:06:43.306176 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:06:43 crc kubenswrapper[4580]: I0112 13:06:43.306184 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:06:43 crc kubenswrapper[4580]: I0112 13:06:43.306597 4580 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="04470dc724661e24dc43e182f9c5dc106623e8dfb269280e6dc0fc0710f6a4a3" exitCode=0 Jan 12 13:06:43 crc kubenswrapper[4580]: I0112 13:06:43.306647 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"04470dc724661e24dc43e182f9c5dc106623e8dfb269280e6dc0fc0710f6a4a3"} Jan 12 13:06:43 crc kubenswrapper[4580]: I0112 13:06:43.306725 4580 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 12 13:06:43 crc kubenswrapper[4580]: I0112 13:06:43.307343 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:06:43 crc kubenswrapper[4580]: I0112 13:06:43.307362 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:06:43 crc kubenswrapper[4580]: I0112 13:06:43.307370 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:06:43 crc kubenswrapper[4580]: I0112 13:06:43.309724 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"dc9529b959d5f791fccd83f001f142471328c468307cab794dfa65420bd9c2a3"} Jan 12 13:06:43 crc kubenswrapper[4580]: I0112 13:06:43.309770 4580 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 12 13:06:43 crc kubenswrapper[4580]: I0112 13:06:43.310670 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:06:43 crc kubenswrapper[4580]: I0112 13:06:43.310690 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:06:43 crc kubenswrapper[4580]: I0112 13:06:43.310698 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:06:43 crc kubenswrapper[4580]: I0112 13:06:43.313523 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"6c1f9fb31f42b2e87cf98227241e7c66b834d473dc625999d5cf28df80b5076b"} Jan 12 13:06:43 crc kubenswrapper[4580]: I0112 13:06:43.313544 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"afcaaf941d0811f34d5bb6d98ebedbeca17d15c8ce48a5604758570aa393d700"} Jan 12 13:06:43 crc kubenswrapper[4580]: I0112 13:06:43.313554 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"1d5f5c5f418e2ffb24aff3f3056f26725003da15b14ea3f503039403320803a2"} Jan 12 13:06:43 crc kubenswrapper[4580]: I0112 13:06:43.313606 4580 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 12 13:06:43 crc kubenswrapper[4580]: I0112 13:06:43.314032 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:06:43 crc kubenswrapper[4580]: I0112 13:06:43.314050 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:06:43 crc kubenswrapper[4580]: I0112 13:06:43.314058 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:06:43 crc kubenswrapper[4580]: I0112 13:06:43.321563 4580 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 12 13:06:43 crc kubenswrapper[4580]: I0112 13:06:43.343016 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 12 13:06:43 crc kubenswrapper[4580]: I0112 13:06:43.349500 4580 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 12 13:06:43 crc kubenswrapper[4580]: I0112 13:06:43.751454 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 12 13:06:44 crc kubenswrapper[4580]: I0112 13:06:44.226554 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 12 13:06:44 crc kubenswrapper[4580]: I0112 13:06:44.238517 4580 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 01:36:08.313745847 +0000 UTC Jan 12 13:06:44 crc kubenswrapper[4580]: I0112 13:06:44.320364 4580 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="da31efcbced890b1046b1f058c1c00e4d2788162749c1da32d87c8b59360aa58" exitCode=0 Jan 12 13:06:44 crc kubenswrapper[4580]: I0112 13:06:44.320477 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"da31efcbced890b1046b1f058c1c00e4d2788162749c1da32d87c8b59360aa58"} Jan 12 13:06:44 crc kubenswrapper[4580]: I0112 13:06:44.320568 4580 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 12 13:06:44 crc kubenswrapper[4580]: I0112 13:06:44.320664 4580 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 12 13:06:44 crc kubenswrapper[4580]: I0112 13:06:44.320781 4580 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 12 13:06:44 crc kubenswrapper[4580]: I0112 13:06:44.321981 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:06:44 crc kubenswrapper[4580]: I0112 
13:06:44.321993 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:06:44 crc kubenswrapper[4580]: I0112 13:06:44.322011 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:06:44 crc kubenswrapper[4580]: I0112 13:06:44.322017 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:06:44 crc kubenswrapper[4580]: I0112 13:06:44.322029 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:06:44 crc kubenswrapper[4580]: I0112 13:06:44.322020 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:06:44 crc kubenswrapper[4580]: I0112 13:06:44.322876 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:06:44 crc kubenswrapper[4580]: I0112 13:06:44.322921 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:06:44 crc kubenswrapper[4580]: I0112 13:06:44.322934 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:06:44 crc kubenswrapper[4580]: I0112 13:06:44.445632 4580 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 12 13:06:44 crc kubenswrapper[4580]: I0112 13:06:44.446384 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:06:44 crc kubenswrapper[4580]: I0112 13:06:44.446419 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:06:44 crc kubenswrapper[4580]: I0112 13:06:44.446430 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 12 13:06:44 crc kubenswrapper[4580]: I0112 13:06:44.446452 4580 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 12 13:06:44 crc kubenswrapper[4580]: I0112 13:06:44.975998 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 12 13:06:45 crc kubenswrapper[4580]: I0112 13:06:45.129335 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 12 13:06:45 crc kubenswrapper[4580]: I0112 13:06:45.239199 4580 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 08:16:07.583255407 +0000 UTC Jan 12 13:06:45 crc kubenswrapper[4580]: I0112 13:06:45.326144 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9c811167080fb15b5c19b8b57f76f4b8c5b2ed87d43d1b320ad024683ab58b65"} Jan 12 13:06:45 crc kubenswrapper[4580]: I0112 13:06:45.326199 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"cea93cd026e7a60c22105833d2c3ada192fc16d45f46e5c9ce2652e94f92fab0"} Jan 12 13:06:45 crc kubenswrapper[4580]: I0112 13:06:45.326214 4580 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 12 13:06:45 crc kubenswrapper[4580]: I0112 13:06:45.326241 4580 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 12 13:06:45 crc kubenswrapper[4580]: I0112 13:06:45.326251 4580 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 12 13:06:45 crc kubenswrapper[4580]: I0112 13:06:45.326313 4580 kubelet_node_status.go:401] "Setting node annotation to enable 
volume controller attach/detach" Jan 12 13:06:45 crc kubenswrapper[4580]: I0112 13:06:45.326219 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0c98177e2b081aadb6fd03620e308bb5d9ff403f1498eb875f7cf6d836dd23aa"} Jan 12 13:06:45 crc kubenswrapper[4580]: I0112 13:06:45.326848 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1088ffa1a5bf02ca8606518a6f8c9cbeba544651dfafbb34e8860c2a12ffc1ce"} Jan 12 13:06:45 crc kubenswrapper[4580]: I0112 13:06:45.326915 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"14411e27d1e7de0627ca0d6f0ecbca70787ef8e9311ff3ffbb923da942e47955"} Jan 12 13:06:45 crc kubenswrapper[4580]: I0112 13:06:45.327206 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:06:45 crc kubenswrapper[4580]: I0112 13:06:45.327231 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:06:45 crc kubenswrapper[4580]: I0112 13:06:45.327241 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:06:45 crc kubenswrapper[4580]: I0112 13:06:45.327209 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:06:45 crc kubenswrapper[4580]: I0112 13:06:45.327386 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:06:45 crc kubenswrapper[4580]: I0112 13:06:45.327445 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:06:45 crc kubenswrapper[4580]: 
I0112 13:06:45.327209 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:06:45 crc kubenswrapper[4580]: I0112 13:06:45.327552 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:06:45 crc kubenswrapper[4580]: I0112 13:06:45.327564 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:06:45 crc kubenswrapper[4580]: I0112 13:06:45.848918 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 12 13:06:46 crc kubenswrapper[4580]: I0112 13:06:46.240333 4580 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 17:25:08.905814399 +0000 UTC Jan 12 13:06:46 crc kubenswrapper[4580]: I0112 13:06:46.328645 4580 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 12 13:06:46 crc kubenswrapper[4580]: I0112 13:06:46.328682 4580 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 12 13:06:46 crc kubenswrapper[4580]: I0112 13:06:46.328959 4580 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 12 13:06:46 crc kubenswrapper[4580]: I0112 13:06:46.329070 4580 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 12 13:06:46 crc kubenswrapper[4580]: I0112 13:06:46.329601 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:06:46 crc kubenswrapper[4580]: I0112 13:06:46.329659 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:06:46 crc kubenswrapper[4580]: I0112 13:06:46.329672 4580 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID"
Jan 12 13:06:46 crc kubenswrapper[4580]: I0112 13:06:46.330054 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 12 13:06:46 crc kubenswrapper[4580]: I0112 13:06:46.330088 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 12 13:06:46 crc kubenswrapper[4580]: I0112 13:06:46.330117 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 12 13:06:46 crc kubenswrapper[4580]: I0112 13:06:46.330326 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 12 13:06:46 crc kubenswrapper[4580]: I0112 13:06:46.330418 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 12 13:06:46 crc kubenswrapper[4580]: I0112 13:06:46.330495 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 12 13:06:47 crc kubenswrapper[4580]: I0112 13:06:47.240485 4580 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 12:04:46.887139204 +0000 UTC
Jan 12 13:06:47 crc kubenswrapper[4580]: I0112 13:06:47.331000 4580 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 12 13:06:47 crc kubenswrapper[4580]: I0112 13:06:47.334902 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 12 13:06:47 crc kubenswrapper[4580]: I0112 13:06:47.334952 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 12 13:06:47 crc kubenswrapper[4580]: I0112 13:06:47.334965 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 12 13:06:48 crc kubenswrapper[4580]: I0112 13:06:48.130043 4580 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 12 13:06:48 crc kubenswrapper[4580]: I0112 13:06:48.130175 4580 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 12 13:06:48 crc kubenswrapper[4580]: I0112 13:06:48.241800 4580 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 19:02:27.386026184 +0000 UTC
Jan 12 13:06:49 crc kubenswrapper[4580]: I0112 13:06:49.242349 4580 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 07:13:42.534500628 +0000 UTC
Jan 12 13:06:50 crc kubenswrapper[4580]: I0112 13:06:50.244079 4580 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 12:12:55.71151552 +0000 UTC
Jan 12 13:06:50 crc kubenswrapper[4580]: I0112 13:06:50.564939 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 12 13:06:50 crc kubenswrapper[4580]: I0112 13:06:50.565086 4580 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 12 13:06:50 crc kubenswrapper[4580]: I0112 13:06:50.566099 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 12 13:06:50 crc kubenswrapper[4580]: I0112 13:06:50.566165 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 12 13:06:50 crc kubenswrapper[4580]: I0112 13:06:50.566176 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 12 13:06:50 crc kubenswrapper[4580]: I0112 13:06:50.748967 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 12 13:06:50 crc kubenswrapper[4580]: I0112 13:06:50.749081 4580 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 12 13:06:50 crc kubenswrapper[4580]: I0112 13:06:50.749121 4580 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 12 13:06:50 crc kubenswrapper[4580]: I0112 13:06:50.750035 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 12 13:06:50 crc kubenswrapper[4580]: I0112 13:06:50.750096 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 12 13:06:50 crc kubenswrapper[4580]: I0112 13:06:50.750124 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 12 13:06:51 crc kubenswrapper[4580]: I0112 13:06:51.230157 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 12 13:06:51 crc kubenswrapper[4580]: I0112 13:06:51.245207 4580 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 11:34:22.4854149 +0000 UTC
Jan 12 13:06:51 crc kubenswrapper[4580]: I0112 13:06:51.246477 4580 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 22h27m31.238971517s for next certificate rotation
Jan 12 13:06:51 crc kubenswrapper[4580]: I0112 13:06:51.339701 4580 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 12 13:06:51 crc kubenswrapper[4580]: I0112 13:06:51.341060 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 12 13:06:51 crc kubenswrapper[4580]: I0112 13:06:51.341142 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 12 13:06:51 crc kubenswrapper[4580]: I0112 13:06:51.341201 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 12 13:06:51 crc kubenswrapper[4580]: E0112 13:06:51.345028 4580 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 12 13:06:53 crc kubenswrapper[4580]: I0112 13:06:53.235783 4580 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Jan 12 13:06:53 crc kubenswrapper[4580]: E0112 13:06:53.323255 4580 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="UnhandledError"
Jan 12 13:06:53 crc kubenswrapper[4580]: I0112 13:06:53.433133 4580 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 12 13:06:53 crc kubenswrapper[4580]: I0112 13:06:53.433388 4580 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 12 13:06:53 crc kubenswrapper[4580]: I0112 13:06:53.436089 4580 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 12 13:06:53 crc kubenswrapper[4580]: I0112 13:06:53.436159 4580 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 12 13:06:53 crc kubenswrapper[4580]: I0112 13:06:53.607854 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Jan 12 13:06:53 crc kubenswrapper[4580]: I0112 13:06:53.608165 4580 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 12 13:06:53 crc kubenswrapper[4580]: I0112 13:06:53.609528 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 12 13:06:53 crc kubenswrapper[4580]: I0112 13:06:53.609575 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 12 13:06:53 crc kubenswrapper[4580]: I0112 13:06:53.609588 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 12 13:06:54 crc kubenswrapper[4580]: I0112 13:06:54.982033 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 12 13:06:54 crc kubenswrapper[4580]: I0112 13:06:54.982217 4580 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 12 13:06:54 crc kubenswrapper[4580]: I0112 13:06:54.983206 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 12 13:06:54 crc kubenswrapper[4580]: I0112 13:06:54.983244 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 12 13:06:54 crc kubenswrapper[4580]: I0112 13:06:54.983253 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 12 13:06:54 crc kubenswrapper[4580]: I0112 13:06:54.985005 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 12 13:06:55 crc kubenswrapper[4580]: I0112 13:06:55.347476 4580 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 12 13:06:55 crc kubenswrapper[4580]: I0112 13:06:55.347525 4580 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 12 13:06:55 crc kubenswrapper[4580]: I0112 13:06:55.348308 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 12 13:06:55 crc kubenswrapper[4580]: I0112 13:06:55.348353 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 12 13:06:55 crc kubenswrapper[4580]: I0112 13:06:55.348366 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 12 13:06:57 crc kubenswrapper[4580]: I0112 13:06:57.720267 4580 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 12 13:06:57 crc kubenswrapper[4580]: I0112 13:06:57.731259 4580 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 12 13:06:58 crc kubenswrapper[4580]: I0112 13:06:58.028759 4580 csr.go:261] certificate signing request csr-7k5sr is approved, waiting to be issued
Jan 12 13:06:58 crc kubenswrapper[4580]: I0112 13:06:58.033386 4580 csr.go:257] certificate signing request csr-7k5sr is issued
Jan 12 13:06:58 crc kubenswrapper[4580]: I0112 13:06:58.131061 4580 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 12 13:06:58 crc kubenswrapper[4580]: I0112 13:06:58.131177 4580 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 12 13:06:58 crc kubenswrapper[4580]: E0112 13:06:58.440337 4580 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="3.2s"
Jan 12 13:06:58 crc kubenswrapper[4580]: I0112 13:06:58.441632 4580 trace.go:236] Trace[1889503978]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (12-Jan-2026 13:06:43.903) (total time: 14538ms):
Jan 12 13:06:58 crc kubenswrapper[4580]: Trace[1889503978]: ---"Objects listed" error: 14538ms (13:06:58.441)
Jan 12 13:06:58 crc kubenswrapper[4580]: Trace[1889503978]: [14.538290818s] [14.538290818s] END
Jan 12 13:06:58 crc kubenswrapper[4580]: I0112 13:06:58.441650 4580 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 12 13:06:58 crc kubenswrapper[4580]: I0112 13:06:58.441870 4580 trace.go:236] Trace[1501670047]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (12-Jan-2026 13:06:45.107) (total time: 13334ms):
Jan 12 13:06:58 crc kubenswrapper[4580]: Trace[1501670047]: ---"Objects listed" error: 13334ms (13:06:58.441)
Jan 12 13:06:58 crc kubenswrapper[4580]: Trace[1501670047]: [13.334186106s] [13.334186106s] END
Jan 12 13:06:58 crc kubenswrapper[4580]: I0112 13:06:58.441884 4580 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 12 13:06:58 crc kubenswrapper[4580]: I0112 13:06:58.442735 4580 trace.go:236] Trace[1510606918]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (12-Jan-2026 13:06:44.279) (total time: 14162ms):
Jan 12 13:06:58 crc kubenswrapper[4580]: Trace[1510606918]: ---"Objects listed" error: 14162ms (13:06:58.442)
Jan 12 13:06:58 crc kubenswrapper[4580]: Trace[1510606918]: [14.162971371s] [14.162971371s] END
Jan 12 13:06:58 crc kubenswrapper[4580]: I0112 13:06:58.442766 4580 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 12 13:06:58 crc kubenswrapper[4580]: I0112 13:06:58.443242 4580 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Jan 12 13:06:58 crc kubenswrapper[4580]: E0112 13:06:58.443726 4580 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Jan 12 13:06:58 crc kubenswrapper[4580]: I0112 13:06:58.444453 4580 trace.go:236] Trace[1644184225]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (12-Jan-2026 13:06:45.056) (total time: 13388ms):
Jan 12 13:06:58 crc kubenswrapper[4580]: Trace[1644184225]: ---"Objects listed" error: 13388ms (13:06:58.444)
Jan 12 13:06:58 crc kubenswrapper[4580]: Trace[1644184225]: [13.38825417s] [13.38825417s] END
Jan 12 13:06:58 crc kubenswrapper[4580]: I0112 13:06:58.444476 4580 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 12 13:06:58 crc kubenswrapper[4580]: I0112 13:06:58.649478 4580 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:40732->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Jan 12 13:06:58 crc kubenswrapper[4580]: I0112 13:06:58.649545 4580 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:40732->192.168.126.11:17697: read: connection reset by peer"
Jan 12 13:06:58 crc kubenswrapper[4580]: I0112 13:06:58.649785 4580 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 12 13:06:58 crc kubenswrapper[4580]: I0112 13:06:58.649823 4580 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.034743 4580 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-12 13:01:58 +0000 UTC, rotation deadline is 2026-10-05 23:59:15.682822712 +0000 UTC
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.035147 4580 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6394h52m16.647679756s for next certificate rotation
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.065700 4580 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.065763 4580 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.237627 4580 apiserver.go:52] "Watching apiserver"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.240074 4580 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.240380 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c"]
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.240775 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.240833 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 12 13:06:59 crc kubenswrapper[4580]: E0112 13:06:59.240919 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.240937 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.240938 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.241155 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 12 13:06:59 crc kubenswrapper[4580]: E0112 13:06:59.241237 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.241263 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 12 13:06:59 crc kubenswrapper[4580]: E0112 13:06:59.241294 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.242726 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.242733 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.242730 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.242817 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.243169 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.243176 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.243201 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.243217 4580 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.244490 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.244745 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.247221 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.247252 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.247275 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.247291 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.247307 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.247323 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.247343 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.247359 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.247372 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.247387 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.247402 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") "
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.247416 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.247433 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.247459 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.247473 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.247499 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.247514 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.247530 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.247547 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.247564 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.247578 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.247593 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.247608 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.247622 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.247636 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.247663 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.247678 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.247694 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.247710 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.247724 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.247738 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.247755 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.247780 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.247796 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.247813 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.247829 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.247844 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.247859 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.247875 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started
for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.247889 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.247911 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.247925 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.247939 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.247953 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.247970 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.247986 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.248001 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.248015 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.248031 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.248045 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.248065 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.248080 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.248131 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.248148 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.248162 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.248177 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.248193 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.248208 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.248226 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.248242 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.248257 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.248272 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.248285 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.248300 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.248315 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.248332 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: 
\"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.248347 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.248363 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.248379 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.248392 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.248405 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.248420 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.248433 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.248447 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.248469 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.248490 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.248504 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 
13:06:59.248519 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.248534 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.248556 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.248570 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.248584 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.248598 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: 
\"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.248612 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.248627 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.248642 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.248656 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.248669 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 12 13:06:59 crc 
kubenswrapper[4580]: I0112 13:06:59.248688 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.248702 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.248719 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.248738 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.248754 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.248768 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.248782 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.248797 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.248812 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.248828 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.248844 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" 
(UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.248859 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.248874 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.248889 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.248919 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.248934 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.248951 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.248965 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.248982 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.248998 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.249013 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.249027 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 12 
13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.249041 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.249057 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.249071 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.249087 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.249113 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.249127 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod 
\"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.249144 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.249158 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.249172 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.249187 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.249203 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.249216 4580 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.249232 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.249247 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.249265 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.249278 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.249293 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod 
\"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.249310 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.249326 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.249341 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.249357 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.249374 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.249390 4580 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.249405 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.249421 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.249438 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.249459 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.249477 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.249502 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.249517 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.249533 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.249548 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.249563 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 12 13:06:59 crc 
kubenswrapper[4580]: I0112 13:06:59.249582 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.249598 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.249615 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.249631 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.249649 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.249667 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.249682 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.249701 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.249717 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.249735 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.249750 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.249765 4580 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.249780 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.249794 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.249811 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.249916 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.249937 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") 
pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.249960 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.249977 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.249992 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.250010 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.250026 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.250040 4580 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.250059 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.250075 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.250091 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.250122 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.250141 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: 
\"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.250158 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.250174 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.250189 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.250206 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.250221 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.250237 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.250253 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.250269 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.250285 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.250300 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.250316 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 
13:06:59.250331 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.250347 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.250363 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.250381 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.250400 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.250416 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.250433 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.250450 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.250465 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.250480 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.250506 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 12 13:06:59 crc 
kubenswrapper[4580]: I0112 13:06:59.250521 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.250536 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.250587 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.250611 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.250629 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 12 13:06:59 crc 
kubenswrapper[4580]: I0112 13:06:59.250646 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.250665 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.250684 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.250700 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.250719 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: 
\"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.250738 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.250754 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.250773 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.250788 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.250805 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") 
pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.250823 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.271004 4580 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.248696 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.248839 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.249458 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.249603 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.249875 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.250078 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.250404 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.250550 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.250684 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.250804 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.250929 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.251045 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.251456 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.251632 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.251843 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.251952 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.252071 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.252202 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.252387 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.252670 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.252783 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.276639 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.252894 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.252999 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.253881 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.254076 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.254093 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.254091 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.254084 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.254285 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.254332 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.254348 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.254557 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.254601 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.254696 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.254849 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.254930 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.254923 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.254951 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.254966 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.255116 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.255308 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.258270 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.258302 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.258323 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.258335 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.258351 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.258546 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.258568 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.258652 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.258676 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.258700 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.258716 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.258772 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.258826 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.259042 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.259444 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.276891 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.259652 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.259667 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.259722 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.259910 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.259936 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.259996 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.260011 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.260054 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.260213 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.260212 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.260250 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.260284 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.260309 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.260323 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.260333 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.260374 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.260457 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.260462 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.260615 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.260649 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.260744 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.260770 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.260779 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.260835 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.260934 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.261017 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.261180 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.261197 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.261232 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.261242 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.261252 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.261298 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.261525 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.261543 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.261574 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.261679 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.261828 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.261861 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.261868 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.262270 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.262574 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.262618 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.262706 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.262886 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.263210 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.263233 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.263352 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.263377 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.263404 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.263550 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.263581 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.263660 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.267175 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.267277 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.267419 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.267455 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.267479 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.267513 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.267578 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.267607 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.267716 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.267909 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.268022 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.268033 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.268070 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.268151 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.268374 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.268411 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.268503 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.268580 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.268637 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.268753 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.268816 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.268854 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.269069 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.269092 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.269691 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.269872 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.274428 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: E0112 13:06:59.274540 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-12 13:06:59.774518723 +0000 UTC m=+18.818737414 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.274716 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.274743 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.274798 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.274863 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.274878 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.274923 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.275033 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.275235 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.275387 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.275407 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.275603 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.275638 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.275806 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.275993 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.276061 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.276187 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.276234 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.276399 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.276436 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.277134 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.277276 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.277424 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.277570 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.277718 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.277963 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.278538 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: E0112 13:06:59.278617 4580 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 12 13:06:59 crc kubenswrapper[4580]: E0112 13:06:59.278685 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-12 13:06:59.778663041 +0000 UTC m=+18.822881731 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.278893 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.278979 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: E0112 13:06:59.279129 4580 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 12 13:06:59 crc kubenswrapper[4580]: E0112 13:06:59.279188 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-12 13:06:59.779172811 +0000 UTC m=+18.823391501 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.279232 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.279386 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.279444 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.279661 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.281016 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.281802 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.281881 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.281886 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.282000 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.285255 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.288217 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.289123 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.289503 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.289758 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.289945 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.290206 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.291132 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.294261 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.295407 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.296962 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.300236 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.301442 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.303124 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: E0112 13:06:59.303251 4580 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 12 13:06:59 crc kubenswrapper[4580]: E0112 13:06:59.305419 4580 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 12 13:06:59 crc kubenswrapper[4580]: E0112 13:06:59.305450 4580 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 12 13:06:59 crc kubenswrapper[4580]: E0112 13:06:59.305481 4580 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 12 13:06:59 crc kubenswrapper[4580]: E0112 13:06:59.305557 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-12 13:06:59.805530671 +0000 UTC m=+18.849749362 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.305591 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.305771 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.305843 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: E0112 13:06:59.305890 4580 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 12 13:06:59 crc kubenswrapper[4580]: E0112 13:06:59.305907 4580 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 12 13:06:59 crc kubenswrapper[4580]: E0112 13:06:59.305968 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-12 13:06:59.805951243 +0000 UTC m=+18.850169933 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.309527 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.309922 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.316025 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.318207 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.319206 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.319930 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.320892 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.322173 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.335319 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.335455 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.338272 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.338849 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.340979 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.341715 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.341732 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.342296 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.343656 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.346937 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.352511 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.352550 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.352616 4580 reconciler_common.go:293] "Volume 
detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.352628 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.352637 4580 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.352646 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.352655 4580 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.352664 4580 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.352676 4580 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.352683 4580 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.352692 4580 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.352699 4580 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.352708 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.352717 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.352725 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.352733 4580 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.352741 4580 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.352748 4580 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.352756 4580 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.352766 4580 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.352775 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.352784 4580 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.352791 4580 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.352798 4580 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc 
kubenswrapper[4580]: I0112 13:06:59.352807 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.352815 4580 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.352825 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.352834 4580 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.352843 4580 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.352852 4580 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.352861 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 
13:06:59.352869 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.352879 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.352887 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.352895 4580 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.352903 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.352911 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.352919 4580 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.352927 4580 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.352935 4580 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.352944 4580 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.352951 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.352959 4580 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.352967 4580 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.352974 4580 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.352983 4580 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.352990 4580 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.352998 4580 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.353007 4580 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.353017 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.353031 4580 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.353039 4580 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.353046 4580 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" 
DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.353055 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.353064 4580 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.353072 4580 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.353080 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.353089 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.353111 4580 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.353120 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.353128 4580 
reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.353135 4580 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.353143 4580 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.353151 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.353159 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.353168 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.353176 4580 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.353184 4580 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.353192 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.353200 4580 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.353208 4580 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.353217 4580 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.353225 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.353235 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.353242 4580 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" 
(UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.353251 4580 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.353259 4580 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.353267 4580 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.353275 4580 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.353283 4580 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.353291 4580 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.353298 4580 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" 
DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.353305 4580 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.353312 4580 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.353320 4580 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.353331 4580 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.353351 4580 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.353362 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.353421 4580 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.353440 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.353592 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.353769 4580 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.353788 4580 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.353798 4580 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.353809 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" 
DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.353820 4580 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.353828 4580 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.353837 4580 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.353837 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.353847 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.353894 4580 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.353918 4580 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.353934 4580 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.353944 4580 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.353954 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.353964 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.353972 4580 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.353980 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.353988 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: 
\"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.353996 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.354004 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.354011 4580 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.354020 4580 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.354028 4580 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.354036 4580 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.354045 4580 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" 
DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.354054 4580 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.354062 4580 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.354072 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.354080 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.354087 4580 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.354095 4580 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.354119 4580 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.354127 4580 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.354135 4580 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.354142 4580 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.354150 4580 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.354157 4580 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.354164 4580 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.354162 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.354172 4580 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node 
\"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.354910 4580 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.354922 4580 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.354932 4580 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.354940 4580 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.354948 4580 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.354958 4580 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.354965 4580 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.354978 4580 
reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.354986 4580 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.354995 4580 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.355003 4580 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.355011 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.355019 4580 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.355028 4580 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.355035 4580 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.355043 4580 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.355050 4580 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.355057 4580 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.355065 4580 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.355073 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.355080 4580 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.355088 4580 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc 
kubenswrapper[4580]: I0112 13:06:59.355095 4580 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.355118 4580 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.355125 4580 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.355134 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.355135 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.355142 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.355239 4580 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.355251 4580 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" 
(UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.355261 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.355271 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.355280 4580 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.355288 4580 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.355298 4580 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.355306 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.355316 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: 
\"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.355325 4580 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.355334 4580 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.355342 4580 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.355351 4580 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.355359 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.355388 4580 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.355396 4580 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" 
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.355407 4580 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.355416 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.355425 4580 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.355434 4580 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.355444 4580 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.355452 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.355463 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 
13:06:59.355471 4580 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.355479 4580 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.355496 4580 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.355504 4580 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.355513 4580 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.355522 4580 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.355530 4580 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.355540 4580 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.355549 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.355557 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.355566 4580 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.355574 4580 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.355582 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.355591 4580 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.355764 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" 
path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.356131 4580 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.356334 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.357141 4580 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.358265 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.358740 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.359877 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.364529 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.364993 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.365581 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.366078 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.367649 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.370567 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.370914 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.371070 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.372499 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.373179 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.373460 4580 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e0c7ac25add51f8a9be790b9d47bc39155d83c4da0f3b241897d1395686feb68" exitCode=255 Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.373852 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.374788 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.374908 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.375332 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.376085 4580 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.376198 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.377842 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.378668 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.379046 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.381721 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.382130 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.382360 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.383183 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.383757 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.384679 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.385117 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.386445 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.388879 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.389461 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.389887 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.390762 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.393140 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.393828 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.394266 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.395057 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.395505 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.395538 4580 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.397033 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.397736 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.398171 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.399135 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-8ch98"] Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.399340 
4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"e0c7ac25add51f8a9be790b9d47bc39155d83c4da0f3b241897d1395686feb68"} Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.400150 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-8ch98" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.402362 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.402477 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.403678 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.405961 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.423359 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.437425 4580 scope.go:117] "RemoveContainer" containerID="e0c7ac25add51f8a9be790b9d47bc39155d83c4da0f3b241897d1395686feb68" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.438194 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.440156 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.453250 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.458506 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4f20fb33-a98a-4b04-81b9-5ea16ae9f57c-hosts-file\") pod \"node-resolver-8ch98\" (UID: \"4f20fb33-a98a-4b04-81b9-5ea16ae9f57c\") " pod="openshift-dns/node-resolver-8ch98" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.458570 4580 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nbmf\" (UniqueName: \"kubernetes.io/projected/4f20fb33-a98a-4b04-81b9-5ea16ae9f57c-kube-api-access-5nbmf\") pod \"node-resolver-8ch98\" (UID: \"4f20fb33-a98a-4b04-81b9-5ea16ae9f57c\") " pod="openshift-dns/node-resolver-8ch98" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.458592 4580 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.458603 4580 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.458612 4580 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.458620 4580 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.472966 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.482005 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.491293 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.502555 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.512769 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.520282 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.527459 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.535530 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9730289-8e50-4a9a-b474-db6c268d5a30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2262814ad3b77a7aecef6dc39226a540c7d7839576606e11c4765c858e81834\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80ca0769a1431fd4c134322feb11db7e54dd85e8f6b18a0ea43da48fe9b05005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c620e4b41d6183e427d9b95acc0e6e20f24998d210c706d93d0e8b08def41b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0c7ac25add51f8a9be790b9d47bc39155d83c4da0f3b241897d1395686feb68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0c7ac25add51f8a9be790b9d47bc39155d83c4da0f3b241897d1395686feb68\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-12T13:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0112 13:06:53.362253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0112 13:06:53.363131 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2861103618/tls.crt::/tmp/serving-cert-2861103618/tls.key\\\\\\\"\\\\nI0112 13:06:58.635258 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0112 13:06:58.636943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0112 13:06:58.636960 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0112 13:06:58.636978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0112 13:06:58.636983 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0112 13:06:58.642885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0112 13:06:58.642904 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0112 13:06:58.642919 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642925 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW0112 13:06:58.642928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0112 13:06:58.642931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0112 13:06:58.642934 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0112 13:06:58.642937 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0112 13:06:58.645379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eeac0b697ceba82e51d043f12dcf4c6f0028990416b1ee40c5181232d962192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.542977 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.549560 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8ch98" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f20fb33-a98a-4b04-81b9-5ea16ae9f57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nbmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8ch98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.558905 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nbmf\" (UniqueName: \"kubernetes.io/projected/4f20fb33-a98a-4b04-81b9-5ea16ae9f57c-kube-api-access-5nbmf\") pod \"node-resolver-8ch98\" (UID: \"4f20fb33-a98a-4b04-81b9-5ea16ae9f57c\") " pod="openshift-dns/node-resolver-8ch98" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.558950 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4f20fb33-a98a-4b04-81b9-5ea16ae9f57c-hosts-file\") pod \"node-resolver-8ch98\" (UID: 
\"4f20fb33-a98a-4b04-81b9-5ea16ae9f57c\") " pod="openshift-dns/node-resolver-8ch98" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.559090 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4f20fb33-a98a-4b04-81b9-5ea16ae9f57c-hosts-file\") pod \"node-resolver-8ch98\" (UID: \"4f20fb33-a98a-4b04-81b9-5ea16ae9f57c\") " pod="openshift-dns/node-resolver-8ch98" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.569401 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.571510 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nbmf\" (UniqueName: \"kubernetes.io/projected/4f20fb33-a98a-4b04-81b9-5ea16ae9f57c-kube-api-access-5nbmf\") pod \"node-resolver-8ch98\" (UID: \"4f20fb33-a98a-4b04-81b9-5ea16ae9f57c\") " pod="openshift-dns/node-resolver-8ch98" Jan 12 13:06:59 crc kubenswrapper[4580]: W0112 13:06:59.579713 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-93c255ac7c03a72e970cea0fadc8dcafb7989822fe8e178635d13948467e5d0d WatchSource:0}: Error finding container 93c255ac7c03a72e970cea0fadc8dcafb7989822fe8e178635d13948467e5d0d: Status 404 returned error can't find the container with id 93c255ac7c03a72e970cea0fadc8dcafb7989822fe8e178635d13948467e5d0d Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.580406 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 12 13:06:59 crc kubenswrapper[4580]: W0112 13:06:59.594149 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-777508ab0df5473805ec9cfa5a443509e9ac9fcf72869005313e1247f4b87ae7 WatchSource:0}: Error finding container 777508ab0df5473805ec9cfa5a443509e9ac9fcf72869005313e1247f4b87ae7: Status 404 returned error can't find the container with id 777508ab0df5473805ec9cfa5a443509e9ac9fcf72869005313e1247f4b87ae7 Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.636283 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.700613 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-hdz6l"] Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.700933 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.701597 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-nnz5s"] Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.701844 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-nnz5s" Jan 12 13:06:59 crc kubenswrapper[4580]: W0112 13:06:59.702217 4580 reflector.go:561] object-"openshift-machine-config-operator"/"proxy-tls": failed to list *v1.Secret: secrets "proxy-tls" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Jan 12 13:06:59 crc kubenswrapper[4580]: E0112 13:06:59.702248 4580 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"proxy-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"proxy-tls\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.702439 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.702635 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.702871 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.703053 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hn77p"] Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.703218 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.703376 4580 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-multus"/"cni-copy-resources" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.703531 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-2p6r8"] Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.703561 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.703661 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.703912 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-2p6r8" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.704232 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.706469 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.706639 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.706774 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.706844 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.706963 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.707050 4580 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.707074 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.707053 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.707280 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.707426 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.708638 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.712947 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-8ch98" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.713404 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.724399 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9730289-8e50-4a9a-b474-db6c268d5a30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2262814ad3b77a7aecef6dc39226a540c7d7839576606e11c4765c858e81834\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80ca0769a1431fd4c134322feb11db7e54dd85e8f6b18a0ea43da48fe9b05005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\"
:\\\"cri-o://d3c620e4b41d6183e427d9b95acc0e6e20f24998d210c706d93d0e8b08def41b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0c7ac25add51f8a9be790b9d47bc39155d83c4da0f3b241897d1395686feb68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0c7ac25add51f8a9be790b9d47bc39155d83c4da0f3b241897d1395686feb68\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-12T13:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0112 13:06:53.362253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0112 13:06:53.363131 1 dynamic_serving_content.go:116] 
\\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2861103618/tls.crt::/tmp/serving-cert-2861103618/tls.key\\\\\\\"\\\\nI0112 13:06:58.635258 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0112 13:06:58.636943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0112 13:06:58.636960 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0112 13:06:58.636978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0112 13:06:58.636983 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0112 13:06:58.642885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0112 13:06:58.642904 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0112 13:06:58.642919 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642925 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0112 13:06:58.642931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0112 13:06:58.642934 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0112 13:06:58.642937 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0112 13:06:58.645379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eeac0b697ceba82e51d043f12dcf4c6f0028990416b1ee40c5181232d962192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.732730 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.740664 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.749536 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.759798 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.760450 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-host-run-ovn-kubernetes\") pod \"ovnkube-node-hn77p\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.760515 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-host-run-netns\") pod \"ovnkube-node-hn77p\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.760540 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-var-lib-openvswitch\") pod \"ovnkube-node-hn77p\" (UID: 
\"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.760560 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-etc-openvswitch\") pod \"ovnkube-node-hn77p\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.760581 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-host-cni-bin\") pod \"ovnkube-node-hn77p\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.760603 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fd4e0810-eddb-47f5-a7dc-beed7b545112-ovnkube-config\") pod \"ovnkube-node-hn77p\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.760624 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d2223aac-784e-4653-8939-fcbd18c70ba7-cni-binary-copy\") pod \"multus-additional-cni-plugins-2p6r8\" (UID: \"d2223aac-784e-4653-8939-fcbd18c70ba7\") " pod="openshift-multus/multus-additional-cni-plugins-2p6r8" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.760647 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/c8f39bcc-5a25-4746-988b-2251fd1be8c9-multus-daemon-config\") pod \"multus-nnz5s\" (UID: \"c8f39bcc-5a25-4746-988b-2251fd1be8c9\") " pod="openshift-multus/multus-nnz5s" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.760669 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c8f39bcc-5a25-4746-988b-2251fd1be8c9-host-run-k8s-cni-cncf-io\") pod \"multus-nnz5s\" (UID: \"c8f39bcc-5a25-4746-988b-2251fd1be8c9\") " pod="openshift-multus/multus-nnz5s" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.760689 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-host-kubelet\") pod \"ovnkube-node-hn77p\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.760710 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c8f39bcc-5a25-4746-988b-2251fd1be8c9-host-run-multus-certs\") pod \"multus-nnz5s\" (UID: \"c8f39bcc-5a25-4746-988b-2251fd1be8c9\") " pod="openshift-multus/multus-nnz5s" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.760735 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whmh5\" (UniqueName: \"kubernetes.io/projected/aaecc77f-21ca-4f15-86e0-0dff03d2ab7b-kube-api-access-whmh5\") pod \"machine-config-daemon-hdz6l\" (UID: \"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b\") " pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.760757 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c8f39bcc-5a25-4746-988b-2251fd1be8c9-host-var-lib-cni-bin\") pod \"multus-nnz5s\" (UID: \"c8f39bcc-5a25-4746-988b-2251fd1be8c9\") " pod="openshift-multus/multus-nnz5s" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.760777 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c8f39bcc-5a25-4746-988b-2251fd1be8c9-multus-conf-dir\") pod \"multus-nnz5s\" (UID: \"c8f39bcc-5a25-4746-988b-2251fd1be8c9\") " pod="openshift-multus/multus-nnz5s" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.760797 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d2223aac-784e-4653-8939-fcbd18c70ba7-os-release\") pod \"multus-additional-cni-plugins-2p6r8\" (UID: \"d2223aac-784e-4653-8939-fcbd18c70ba7\") " pod="openshift-multus/multus-additional-cni-plugins-2p6r8" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.760820 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c8f39bcc-5a25-4746-988b-2251fd1be8c9-host-run-netns\") pod \"multus-nnz5s\" (UID: \"c8f39bcc-5a25-4746-988b-2251fd1be8c9\") " pod="openshift-multus/multus-nnz5s" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.760843 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d2223aac-784e-4653-8939-fcbd18c70ba7-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-2p6r8\" (UID: \"d2223aac-784e-4653-8939-fcbd18c70ba7\") " pod="openshift-multus/multus-additional-cni-plugins-2p6r8" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.760866 4580 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/aaecc77f-21ca-4f15-86e0-0dff03d2ab7b-rootfs\") pod \"machine-config-daemon-hdz6l\" (UID: \"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b\") " pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.760901 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c8f39bcc-5a25-4746-988b-2251fd1be8c9-cni-binary-copy\") pod \"multus-nnz5s\" (UID: \"c8f39bcc-5a25-4746-988b-2251fd1be8c9\") " pod="openshift-multus/multus-nnz5s" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.760926 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-log-socket\") pod \"ovnkube-node-hn77p\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.760951 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c8f39bcc-5a25-4746-988b-2251fd1be8c9-os-release\") pod \"multus-nnz5s\" (UID: \"c8f39bcc-5a25-4746-988b-2251fd1be8c9\") " pod="openshift-multus/multus-nnz5s" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.761013 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d2223aac-784e-4653-8939-fcbd18c70ba7-tuning-conf-dir\") pod \"multus-additional-cni-plugins-2p6r8\" (UID: \"d2223aac-784e-4653-8939-fcbd18c70ba7\") " pod="openshift-multus/multus-additional-cni-plugins-2p6r8" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.761187 4580 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c8f39bcc-5a25-4746-988b-2251fd1be8c9-multus-socket-dir-parent\") pod \"multus-nnz5s\" (UID: \"c8f39bcc-5a25-4746-988b-2251fd1be8c9\") " pod="openshift-multus/multus-nnz5s" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.761211 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c8f39bcc-5a25-4746-988b-2251fd1be8c9-host-var-lib-cni-multus\") pod \"multus-nnz5s\" (UID: \"c8f39bcc-5a25-4746-988b-2251fd1be8c9\") " pod="openshift-multus/multus-nnz5s" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.761228 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c8f39bcc-5a25-4746-988b-2251fd1be8c9-etc-kubernetes\") pod \"multus-nnz5s\" (UID: \"c8f39bcc-5a25-4746-988b-2251fd1be8c9\") " pod="openshift-multus/multus-nnz5s" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.761254 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m82m\" (UniqueName: \"kubernetes.io/projected/c8f39bcc-5a25-4746-988b-2251fd1be8c9-kube-api-access-5m82m\") pod \"multus-nnz5s\" (UID: \"c8f39bcc-5a25-4746-988b-2251fd1be8c9\") " pod="openshift-multus/multus-nnz5s" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.761274 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-run-ovn\") pod \"ovnkube-node-hn77p\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.761312 4580 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fd4e0810-eddb-47f5-a7dc-beed7b545112-env-overrides\") pod \"ovnkube-node-hn77p\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.761335 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/aaecc77f-21ca-4f15-86e0-0dff03d2ab7b-mcd-auth-proxy-config\") pod \"machine-config-daemon-hdz6l\" (UID: \"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b\") " pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.761350 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c8f39bcc-5a25-4746-988b-2251fd1be8c9-hostroot\") pod \"multus-nnz5s\" (UID: \"c8f39bcc-5a25-4746-988b-2251fd1be8c9\") " pod="openshift-multus/multus-nnz5s" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.761372 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4wmg\" (UniqueName: \"kubernetes.io/projected/fd4e0810-eddb-47f5-a7dc-beed7b545112-kube-api-access-k4wmg\") pod \"ovnkube-node-hn77p\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.761412 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c8f39bcc-5a25-4746-988b-2251fd1be8c9-cnibin\") pod \"multus-nnz5s\" (UID: \"c8f39bcc-5a25-4746-988b-2251fd1be8c9\") " pod="openshift-multus/multus-nnz5s" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.761434 4580 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-run-systemd\") pod \"ovnkube-node-hn77p\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.761449 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-host-cni-netd\") pod \"ovnkube-node-hn77p\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.761470 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hn77p\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.761503 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-systemd-units\") pod \"ovnkube-node-hn77p\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.761527 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-run-openvswitch\") pod \"ovnkube-node-hn77p\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" Jan 12 13:06:59 crc 
kubenswrapper[4580]: I0112 13:06:59.761548 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fd4e0810-eddb-47f5-a7dc-beed7b545112-ovnkube-script-lib\") pod \"ovnkube-node-hn77p\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.761565 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-host-slash\") pod \"ovnkube-node-hn77p\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.761582 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-node-log\") pod \"ovnkube-node-hn77p\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.761614 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fd4e0810-eddb-47f5-a7dc-beed7b545112-ovn-node-metrics-cert\") pod \"ovnkube-node-hn77p\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.761633 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d2223aac-784e-4653-8939-fcbd18c70ba7-system-cni-dir\") pod \"multus-additional-cni-plugins-2p6r8\" (UID: \"d2223aac-784e-4653-8939-fcbd18c70ba7\") " 
pod="openshift-multus/multus-additional-cni-plugins-2p6r8" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.761651 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c8f39bcc-5a25-4746-988b-2251fd1be8c9-system-cni-dir\") pod \"multus-nnz5s\" (UID: \"c8f39bcc-5a25-4746-988b-2251fd1be8c9\") " pod="openshift-multus/multus-nnz5s" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.761721 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcrjr\" (UniqueName: \"kubernetes.io/projected/d2223aac-784e-4653-8939-fcbd18c70ba7-kube-api-access-hcrjr\") pod \"multus-additional-cni-plugins-2p6r8\" (UID: \"d2223aac-784e-4653-8939-fcbd18c70ba7\") " pod="openshift-multus/multus-additional-cni-plugins-2p6r8" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.761837 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c8f39bcc-5a25-4746-988b-2251fd1be8c9-host-var-lib-kubelet\") pod \"multus-nnz5s\" (UID: \"c8f39bcc-5a25-4746-988b-2251fd1be8c9\") " pod="openshift-multus/multus-nnz5s" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.761874 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d2223aac-784e-4653-8939-fcbd18c70ba7-cnibin\") pod \"multus-additional-cni-plugins-2p6r8\" (UID: \"d2223aac-784e-4653-8939-fcbd18c70ba7\") " pod="openshift-multus/multus-additional-cni-plugins-2p6r8" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.761899 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c8f39bcc-5a25-4746-988b-2251fd1be8c9-multus-cni-dir\") pod \"multus-nnz5s\" (UID: 
\"c8f39bcc-5a25-4746-988b-2251fd1be8c9\") " pod="openshift-multus/multus-nnz5s" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.761922 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/aaecc77f-21ca-4f15-86e0-0dff03d2ab7b-proxy-tls\") pod \"machine-config-daemon-hdz6l\" (UID: \"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b\") " pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.766723 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.772270 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8ch98" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f20fb33-a98a-4b04-81b9-5ea16ae9f57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nbmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8ch98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.779718 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hdz6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.790061 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9730289-8e50-4a9a-b474-db6c268d5a30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2262814ad3b77a7aecef6dc39226a540c7d7839576606e11c4765c858e81834\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80ca0769a1431fd4c134322feb11db7e54dd85e8f6b18a0ea43da48fe9b05005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d3c620e4b41d6183e427d9b95acc0e6e20f24998d210c706d93d0e8b08def41b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0c7ac25add51f8a9be790b9d47bc39155d83c4da0f3b241897d1395686feb68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0c7ac25add51f8a9be790b9d47bc39155d83c4da0f3b241897d1395686feb68\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-12T13:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0112 13:06:53.362253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0112 13:06:53.363131 1 dynamic_serving_content.go:116] 
\\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2861103618/tls.crt::/tmp/serving-cert-2861103618/tls.key\\\\\\\"\\\\nI0112 13:06:58.635258 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0112 13:06:58.636943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0112 13:06:58.636960 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0112 13:06:58.636978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0112 13:06:58.636983 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0112 13:06:58.642885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0112 13:06:58.642904 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0112 13:06:58.642919 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642925 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0112 13:06:58.642931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0112 13:06:58.642934 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0112 13:06:58.642937 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0112 13:06:58.645379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eeac0b697ceba82e51d043f12dcf4c6f0028990416b1ee40c5181232d962192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.799987 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.807668 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nnz5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f39bcc-5a25-4746-988b-2251fd1be8c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m82m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nnz5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.821357 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd4e0810-eddb-47f5-a7dc-beed7b545112\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-me
trics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/sec
rets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"
etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":
\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hn77p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.828510 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.836768 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hdz6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.846474 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2p6r8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2223aac-784e-4653-8939-fcbd18c70ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2p6r8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.853557 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.860421 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.863152 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.863274 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c8f39bcc-5a25-4746-988b-2251fd1be8c9-host-var-lib-cni-bin\") pod \"multus-nnz5s\" (UID: \"c8f39bcc-5a25-4746-988b-2251fd1be8c9\") " pod="openshift-multus/multus-nnz5s" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.863309 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/c8f39bcc-5a25-4746-988b-2251fd1be8c9-multus-conf-dir\") pod \"multus-nnz5s\" (UID: \"c8f39bcc-5a25-4746-988b-2251fd1be8c9\") " pod="openshift-multus/multus-nnz5s" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.863335 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d2223aac-784e-4653-8939-fcbd18c70ba7-os-release\") pod \"multus-additional-cni-plugins-2p6r8\" (UID: \"d2223aac-784e-4653-8939-fcbd18c70ba7\") " pod="openshift-multus/multus-additional-cni-plugins-2p6r8" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.863363 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c8f39bcc-5a25-4746-988b-2251fd1be8c9-cni-binary-copy\") pod \"multus-nnz5s\" (UID: \"c8f39bcc-5a25-4746-988b-2251fd1be8c9\") " pod="openshift-multus/multus-nnz5s" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.863379 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c8f39bcc-5a25-4746-988b-2251fd1be8c9-host-run-netns\") pod \"multus-nnz5s\" (UID: \"c8f39bcc-5a25-4746-988b-2251fd1be8c9\") " pod="openshift-multus/multus-nnz5s" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.863399 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d2223aac-784e-4653-8939-fcbd18c70ba7-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-2p6r8\" (UID: \"d2223aac-784e-4653-8939-fcbd18c70ba7\") " pod="openshift-multus/multus-additional-cni-plugins-2p6r8" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.863420 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/aaecc77f-21ca-4f15-86e0-0dff03d2ab7b-rootfs\") pod 
\"machine-config-daemon-hdz6l\" (UID: \"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b\") " pod="openshift-machine-config-operator/machine-config-daemon-hdz6l"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.863445 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.863466 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-log-socket\") pod \"ovnkube-node-hn77p\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " pod="openshift-ovn-kubernetes/ovnkube-node-hn77p"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.863483 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c8f39bcc-5a25-4746-988b-2251fd1be8c9-os-release\") pod \"multus-nnz5s\" (UID: \"c8f39bcc-5a25-4746-988b-2251fd1be8c9\") " pod="openshift-multus/multus-nnz5s"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.863517 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d2223aac-784e-4653-8939-fcbd18c70ba7-tuning-conf-dir\") pod \"multus-additional-cni-plugins-2p6r8\" (UID: \"d2223aac-784e-4653-8939-fcbd18c70ba7\") " pod="openshift-multus/multus-additional-cni-plugins-2p6r8"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.863536 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.863557 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.863584 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c8f39bcc-5a25-4746-988b-2251fd1be8c9-multus-socket-dir-parent\") pod \"multus-nnz5s\" (UID: \"c8f39bcc-5a25-4746-988b-2251fd1be8c9\") " pod="openshift-multus/multus-nnz5s"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.863607 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c8f39bcc-5a25-4746-988b-2251fd1be8c9-host-var-lib-cni-multus\") pod \"multus-nnz5s\" (UID: \"c8f39bcc-5a25-4746-988b-2251fd1be8c9\") " pod="openshift-multus/multus-nnz5s"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.863626 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c8f39bcc-5a25-4746-988b-2251fd1be8c9-etc-kubernetes\") pod \"multus-nnz5s\" (UID: \"c8f39bcc-5a25-4746-988b-2251fd1be8c9\") " pod="openshift-multus/multus-nnz5s"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.863650 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5m82m\" (UniqueName: \"kubernetes.io/projected/c8f39bcc-5a25-4746-988b-2251fd1be8c9-kube-api-access-5m82m\") pod \"multus-nnz5s\" (UID: \"c8f39bcc-5a25-4746-988b-2251fd1be8c9\") " pod="openshift-multus/multus-nnz5s"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.863687 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-run-ovn\") pod \"ovnkube-node-hn77p\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " pod="openshift-ovn-kubernetes/ovnkube-node-hn77p"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.863711 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/aaecc77f-21ca-4f15-86e0-0dff03d2ab7b-mcd-auth-proxy-config\") pod \"machine-config-daemon-hdz6l\" (UID: \"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b\") " pod="openshift-machine-config-operator/machine-config-daemon-hdz6l"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.863734 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fd4e0810-eddb-47f5-a7dc-beed7b545112-env-overrides\") pod \"ovnkube-node-hn77p\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " pod="openshift-ovn-kubernetes/ovnkube-node-hn77p"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.863758 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.863787 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c8f39bcc-5a25-4746-988b-2251fd1be8c9-hostroot\") pod \"multus-nnz5s\" (UID: \"c8f39bcc-5a25-4746-988b-2251fd1be8c9\") " pod="openshift-multus/multus-nnz5s"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.863807 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4wmg\" (UniqueName: \"kubernetes.io/projected/fd4e0810-eddb-47f5-a7dc-beed7b545112-kube-api-access-k4wmg\") pod \"ovnkube-node-hn77p\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " pod="openshift-ovn-kubernetes/ovnkube-node-hn77p"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.863827 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c8f39bcc-5a25-4746-988b-2251fd1be8c9-cnibin\") pod \"multus-nnz5s\" (UID: \"c8f39bcc-5a25-4746-988b-2251fd1be8c9\") " pod="openshift-multus/multus-nnz5s"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.863849 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-run-systemd\") pod \"ovnkube-node-hn77p\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " pod="openshift-ovn-kubernetes/ovnkube-node-hn77p"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.863871 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-host-cni-netd\") pod \"ovnkube-node-hn77p\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " pod="openshift-ovn-kubernetes/ovnkube-node-hn77p"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.863894 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hn77p\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " pod="openshift-ovn-kubernetes/ovnkube-node-hn77p"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.863917 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-systemd-units\") pod \"ovnkube-node-hn77p\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " pod="openshift-ovn-kubernetes/ovnkube-node-hn77p"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.863939 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-run-openvswitch\") pod \"ovnkube-node-hn77p\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " pod="openshift-ovn-kubernetes/ovnkube-node-hn77p"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.863959 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fd4e0810-eddb-47f5-a7dc-beed7b545112-ovnkube-script-lib\") pod \"ovnkube-node-hn77p\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " pod="openshift-ovn-kubernetes/ovnkube-node-hn77p"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.863982 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c8f39bcc-5a25-4746-988b-2251fd1be8c9-system-cni-dir\") pod \"multus-nnz5s\" (UID: \"c8f39bcc-5a25-4746-988b-2251fd1be8c9\") " pod="openshift-multus/multus-nnz5s"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.864003 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-host-slash\") pod \"ovnkube-node-hn77p\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " pod="openshift-ovn-kubernetes/ovnkube-node-hn77p"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.864027 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-node-log\") pod \"ovnkube-node-hn77p\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " pod="openshift-ovn-kubernetes/ovnkube-node-hn77p"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.864048 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fd4e0810-eddb-47f5-a7dc-beed7b545112-ovn-node-metrics-cert\") pod \"ovnkube-node-hn77p\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " pod="openshift-ovn-kubernetes/ovnkube-node-hn77p"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.864072 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d2223aac-784e-4653-8939-fcbd18c70ba7-system-cni-dir\") pod \"multus-additional-cni-plugins-2p6r8\" (UID: \"d2223aac-784e-4653-8939-fcbd18c70ba7\") " pod="openshift-multus/multus-additional-cni-plugins-2p6r8"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.864095 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d2223aac-784e-4653-8939-fcbd18c70ba7-cnibin\") pod \"multus-additional-cni-plugins-2p6r8\" (UID: \"d2223aac-784e-4653-8939-fcbd18c70ba7\") " pod="openshift-multus/multus-additional-cni-plugins-2p6r8"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.864146 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcrjr\" (UniqueName: \"kubernetes.io/projected/d2223aac-784e-4653-8939-fcbd18c70ba7-kube-api-access-hcrjr\") pod \"multus-additional-cni-plugins-2p6r8\" (UID: \"d2223aac-784e-4653-8939-fcbd18c70ba7\") " pod="openshift-multus/multus-additional-cni-plugins-2p6r8"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.864168 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c8f39bcc-5a25-4746-988b-2251fd1be8c9-host-var-lib-kubelet\") pod \"multus-nnz5s\" (UID: \"c8f39bcc-5a25-4746-988b-2251fd1be8c9\") " pod="openshift-multus/multus-nnz5s"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.864197 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c8f39bcc-5a25-4746-988b-2251fd1be8c9-multus-cni-dir\") pod \"multus-nnz5s\" (UID: \"c8f39bcc-5a25-4746-988b-2251fd1be8c9\") " pod="openshift-multus/multus-nnz5s"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.864218 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/aaecc77f-21ca-4f15-86e0-0dff03d2ab7b-proxy-tls\") pod \"machine-config-daemon-hdz6l\" (UID: \"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b\") " pod="openshift-machine-config-operator/machine-config-daemon-hdz6l"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.864243 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-host-run-ovn-kubernetes\") pod \"ovnkube-node-hn77p\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " pod="openshift-ovn-kubernetes/ovnkube-node-hn77p"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.864265 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c8f39bcc-5a25-4746-988b-2251fd1be8c9-multus-daemon-config\") pod \"multus-nnz5s\" (UID: \"c8f39bcc-5a25-4746-988b-2251fd1be8c9\") " pod="openshift-multus/multus-nnz5s"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.864292 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-host-run-netns\") pod \"ovnkube-node-hn77p\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " pod="openshift-ovn-kubernetes/ovnkube-node-hn77p"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.864312 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-var-lib-openvswitch\") pod \"ovnkube-node-hn77p\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " pod="openshift-ovn-kubernetes/ovnkube-node-hn77p"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.864337 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-etc-openvswitch\") pod \"ovnkube-node-hn77p\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " pod="openshift-ovn-kubernetes/ovnkube-node-hn77p"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.864363 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-host-cni-bin\") pod \"ovnkube-node-hn77p\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " pod="openshift-ovn-kubernetes/ovnkube-node-hn77p"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.864386 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fd4e0810-eddb-47f5-a7dc-beed7b545112-ovnkube-config\") pod \"ovnkube-node-hn77p\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " pod="openshift-ovn-kubernetes/ovnkube-node-hn77p"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.864408 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d2223aac-784e-4653-8939-fcbd18c70ba7-cni-binary-copy\") pod \"multus-additional-cni-plugins-2p6r8\" (UID: \"d2223aac-784e-4653-8939-fcbd18c70ba7\") " pod="openshift-multus/multus-additional-cni-plugins-2p6r8"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.864429 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c8f39bcc-5a25-4746-988b-2251fd1be8c9-host-run-k8s-cni-cncf-io\") pod \"multus-nnz5s\" (UID: \"c8f39bcc-5a25-4746-988b-2251fd1be8c9\") " pod="openshift-multus/multus-nnz5s"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.864458 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-host-kubelet\") pod \"ovnkube-node-hn77p\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " pod="openshift-ovn-kubernetes/ovnkube-node-hn77p"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.864481 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c8f39bcc-5a25-4746-988b-2251fd1be8c9-host-run-multus-certs\") pod \"multus-nnz5s\" (UID: \"c8f39bcc-5a25-4746-988b-2251fd1be8c9\") " pod="openshift-multus/multus-nnz5s"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.864512 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whmh5\" (UniqueName: \"kubernetes.io/projected/aaecc77f-21ca-4f15-86e0-0dff03d2ab7b-kube-api-access-whmh5\") pod \"machine-config-daemon-hdz6l\" (UID: \"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b\") " pod="openshift-machine-config-operator/machine-config-daemon-hdz6l"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.864878 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d2223aac-784e-4653-8939-fcbd18c70ba7-system-cni-dir\") pod \"multus-additional-cni-plugins-2p6r8\" (UID: \"d2223aac-784e-4653-8939-fcbd18c70ba7\") " pod="openshift-multus/multus-additional-cni-plugins-2p6r8"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.864929 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d2223aac-784e-4653-8939-fcbd18c70ba7-cnibin\") pod \"multus-additional-cni-plugins-2p6r8\" (UID: \"d2223aac-784e-4653-8939-fcbd18c70ba7\") " pod="openshift-multus/multus-additional-cni-plugins-2p6r8"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.865093 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c8f39bcc-5a25-4746-988b-2251fd1be8c9-host-var-lib-kubelet\") pod \"multus-nnz5s\" (UID: \"c8f39bcc-5a25-4746-988b-2251fd1be8c9\") " pod="openshift-multus/multus-nnz5s"
Jan 12 13:06:59 crc kubenswrapper[4580]: E0112 13:06:59.864754 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-12 13:07:00.864684869 +0000 UTC m=+19.908903559 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.865377 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c8f39bcc-5a25-4746-988b-2251fd1be8c9-multus-cni-dir\") pod \"multus-nnz5s\" (UID: \"c8f39bcc-5a25-4746-988b-2251fd1be8c9\") " pod="openshift-multus/multus-nnz5s"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.865447 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c8f39bcc-5a25-4746-988b-2251fd1be8c9-host-var-lib-cni-bin\") pod \"multus-nnz5s\" (UID: \"c8f39bcc-5a25-4746-988b-2251fd1be8c9\") " pod="openshift-multus/multus-nnz5s"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.865508 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-host-run-ovn-kubernetes\") pod \"ovnkube-node-hn77p\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " pod="openshift-ovn-kubernetes/ovnkube-node-hn77p"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.865545 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c8f39bcc-5a25-4746-988b-2251fd1be8c9-multus-conf-dir\") pod \"multus-nnz5s\" (UID: \"c8f39bcc-5a25-4746-988b-2251fd1be8c9\") " pod="openshift-multus/multus-nnz5s"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.865713 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d2223aac-784e-4653-8939-fcbd18c70ba7-os-release\") pod \"multus-additional-cni-plugins-2p6r8\" (UID: \"d2223aac-784e-4653-8939-fcbd18c70ba7\") " pod="openshift-multus/multus-additional-cni-plugins-2p6r8"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.866211 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c8f39bcc-5a25-4746-988b-2251fd1be8c9-multus-daemon-config\") pod \"multus-nnz5s\" (UID: \"c8f39bcc-5a25-4746-988b-2251fd1be8c9\") " pod="openshift-multus/multus-nnz5s"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.866411 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c8f39bcc-5a25-4746-988b-2251fd1be8c9-cni-binary-copy\") pod \"multus-nnz5s\" (UID: \"c8f39bcc-5a25-4746-988b-2251fd1be8c9\") " pod="openshift-multus/multus-nnz5s"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.866470 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c8f39bcc-5a25-4746-988b-2251fd1be8c9-host-run-netns\") pod \"multus-nnz5s\" (UID: \"c8f39bcc-5a25-4746-988b-2251fd1be8c9\") " pod="openshift-multus/multus-nnz5s"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.866890 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d2223aac-784e-4653-8939-fcbd18c70ba7-cni-binary-copy\") pod \"multus-additional-cni-plugins-2p6r8\" (UID: \"d2223aac-784e-4653-8939-fcbd18c70ba7\") " pod="openshift-multus/multus-additional-cni-plugins-2p6r8"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.866921 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fd4e0810-eddb-47f5-a7dc-beed7b545112-ovnkube-config\") pod \"ovnkube-node-hn77p\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " pod="openshift-ovn-kubernetes/ovnkube-node-hn77p"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.866952 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c8f39bcc-5a25-4746-988b-2251fd1be8c9-host-run-k8s-cni-cncf-io\") pod \"multus-nnz5s\" (UID: \"c8f39bcc-5a25-4746-988b-2251fd1be8c9\") " pod="openshift-multus/multus-nnz5s"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.866997 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-host-kubelet\") pod \"ovnkube-node-hn77p\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " pod="openshift-ovn-kubernetes/ovnkube-node-hn77p"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.866998 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-host-run-netns\") pod \"ovnkube-node-hn77p\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " pod="openshift-ovn-kubernetes/ovnkube-node-hn77p"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.867023 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-var-lib-openvswitch\") pod \"ovnkube-node-hn77p\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " pod="openshift-ovn-kubernetes/ovnkube-node-hn77p"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.867047 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c8f39bcc-5a25-4746-988b-2251fd1be8c9-host-run-multus-certs\") pod \"multus-nnz5s\" (UID: \"c8f39bcc-5a25-4746-988b-2251fd1be8c9\") " pod="openshift-multus/multus-nnz5s"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.867062 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-etc-openvswitch\") pod \"ovnkube-node-hn77p\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " pod="openshift-ovn-kubernetes/ovnkube-node-hn77p"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.867091 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/aaecc77f-21ca-4f15-86e0-0dff03d2ab7b-rootfs\") pod \"machine-config-daemon-hdz6l\" (UID: \"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b\") " pod="openshift-machine-config-operator/machine-config-daemon-hdz6l"
Jan 12 13:06:59 crc kubenswrapper[4580]: E0112 13:06:59.867188 4580 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 12 13:06:59 crc kubenswrapper[4580]: E0112 13:06:59.867209 4580 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 12 13:06:59 crc kubenswrapper[4580]: E0112 13:06:59.867223 4580 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 12 13:06:59 crc kubenswrapper[4580]: E0112 13:06:59.867270 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-12 13:07:00.867256975 +0000 UTC m=+19.911475665 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.867467 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d2223aac-784e-4653-8939-fcbd18c70ba7-tuning-conf-dir\") pod \"multus-additional-cni-plugins-2p6r8\" (UID: \"d2223aac-784e-4653-8939-fcbd18c70ba7\") " pod="openshift-multus/multus-additional-cni-plugins-2p6r8"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.867529 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-host-cni-bin\") pod \"ovnkube-node-hn77p\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " pod="openshift-ovn-kubernetes/ovnkube-node-hn77p"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.867550 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-log-socket\") pod \"ovnkube-node-hn77p\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " pod="openshift-ovn-kubernetes/ovnkube-node-hn77p"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.867579 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c8f39bcc-5a25-4746-988b-2251fd1be8c9-os-release\") pod \"multus-nnz5s\" (UID: \"c8f39bcc-5a25-4746-988b-2251fd1be8c9\") " pod="openshift-multus/multus-nnz5s"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.867601 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-run-systemd\") pod \"ovnkube-node-hn77p\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " pod="openshift-ovn-kubernetes/ovnkube-node-hn77p"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.867639 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c8f39bcc-5a25-4746-988b-2251fd1be8c9-hostroot\") pod \"multus-nnz5s\" (UID: \"c8f39bcc-5a25-4746-988b-2251fd1be8c9\") " pod="openshift-multus/multus-nnz5s"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.867670 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d2223aac-784e-4653-8939-fcbd18c70ba7-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-2p6r8\" (UID: \"d2223aac-784e-4653-8939-fcbd18c70ba7\") " pod="openshift-multus/multus-additional-cni-plugins-2p6r8"
Jan 12 13:06:59 crc kubenswrapper[4580]: E0112 13:06:59.867677 4580 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 12 13:06:59 crc kubenswrapper[4580]: E0112 13:06:59.867697 4580 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 12 13:06:59 crc kubenswrapper[4580]: E0112 13:06:59.867710 4580 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 12 13:06:59 crc kubenswrapper[4580]: E0112 13:06:59.867744 4580 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 12 13:06:59 crc kubenswrapper[4580]: E0112 13:06:59.867760 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-12 13:07:00.867744152 +0000 UTC m=+19.911962843 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 12 13:06:59 crc kubenswrapper[4580]: E0112 13:06:59.867795 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-12 13:07:00.867778647 +0000 UTC m=+19.911997337 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.867798 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-run-ovn\") pod \"ovnkube-node-hn77p\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " pod="openshift-ovn-kubernetes/ovnkube-node-hn77p"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.867827 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c8f39bcc-5a25-4746-988b-2251fd1be8c9-cnibin\") pod \"multus-nnz5s\" (UID: \"c8f39bcc-5a25-4746-988b-2251fd1be8c9\") " pod="openshift-multus/multus-nnz5s"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.867869 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-run-openvswitch\") pod \"ovnkube-node-hn77p\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " pod="openshift-ovn-kubernetes/ovnkube-node-hn77p"
Jan 12 13:06:59 crc kubenswrapper[4580]: E0112 13:06:59.867872 4580 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.867911 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c8f39bcc-5a25-4746-988b-2251fd1be8c9-multus-socket-dir-parent\") pod \"multus-nnz5s\" (UID: \"c8f39bcc-5a25-4746-988b-2251fd1be8c9\") " pod="openshift-multus/multus-nnz5s"
Jan 12 13:06:59 crc kubenswrapper[4580]: E0112 13:06:59.867920 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-12 13:07:00.867911718 +0000 UTC m=+19.912130409 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.867951 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c8f39bcc-5a25-4746-988b-2251fd1be8c9-host-var-lib-cni-multus\") pod \"multus-nnz5s\" (UID: \"c8f39bcc-5a25-4746-988b-2251fd1be8c9\") " pod="openshift-multus/multus-nnz5s"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.867957 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-host-cni-netd\") pod \"ovnkube-node-hn77p\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " pod="openshift-ovn-kubernetes/ovnkube-node-hn77p"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.867994 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c8f39bcc-5a25-4746-988b-2251fd1be8c9-etc-kubernetes\") pod \"multus-nnz5s\" (UID: \"c8f39bcc-5a25-4746-988b-2251fd1be8c9\") " pod="openshift-multus/multus-nnz5s"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.868030 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hn77p\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " pod="openshift-ovn-kubernetes/ovnkube-node-hn77p"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.868058 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-systemd-units\") pod \"ovnkube-node-hn77p\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " pod="openshift-ovn-kubernetes/ovnkube-node-hn77p"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.868134 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c8f39bcc-5a25-4746-988b-2251fd1be8c9-system-cni-dir\") pod \"multus-nnz5s\" (UID: \"c8f39bcc-5a25-4746-988b-2251fd1be8c9\") " pod="openshift-multus/multus-nnz5s"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.868392 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/aaecc77f-21ca-4f15-86e0-0dff03d2ab7b-mcd-auth-proxy-config\") pod \"machine-config-daemon-hdz6l\" (UID: \"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b\") " pod="openshift-machine-config-operator/machine-config-daemon-hdz6l"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.868425 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fd4e0810-eddb-47f5-a7dc-beed7b545112-env-overrides\") pod \"ovnkube-node-hn77p\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " pod="openshift-ovn-kubernetes/ovnkube-node-hn77p"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.868448 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-host-slash\") pod \"ovnkube-node-hn77p\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " pod="openshift-ovn-kubernetes/ovnkube-node-hn77p"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.868474 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-node-log\") pod \"ovnkube-node-hn77p\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " pod="openshift-ovn-kubernetes/ovnkube-node-hn77p"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.868797 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fd4e0810-eddb-47f5-a7dc-beed7b545112-ovnkube-script-lib\") pod \"ovnkube-node-hn77p\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " pod="openshift-ovn-kubernetes/ovnkube-node-hn77p"
Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.872026 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.875648 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fd4e0810-eddb-47f5-a7dc-beed7b545112-ovn-node-metrics-cert\") pod \"ovnkube-node-hn77p\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.880866 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5m82m\" (UniqueName: \"kubernetes.io/projected/c8f39bcc-5a25-4746-988b-2251fd1be8c9-kube-api-access-5m82m\") pod 
\"multus-nnz5s\" (UID: \"c8f39bcc-5a25-4746-988b-2251fd1be8c9\") " pod="openshift-multus/multus-nnz5s" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.881429 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whmh5\" (UniqueName: \"kubernetes.io/projected/aaecc77f-21ca-4f15-86e0-0dff03d2ab7b-kube-api-access-whmh5\") pod \"machine-config-daemon-hdz6l\" (UID: \"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b\") " pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.881720 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.884523 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcrjr\" (UniqueName: \"kubernetes.io/projected/d2223aac-784e-4653-8939-fcbd18c70ba7-kube-api-access-hcrjr\") pod \"multus-additional-cni-plugins-2p6r8\" (UID: \"d2223aac-784e-4653-8939-fcbd18c70ba7\") " pod="openshift-multus/multus-additional-cni-plugins-2p6r8" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.885775 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4wmg\" (UniqueName: \"kubernetes.io/projected/fd4e0810-eddb-47f5-a7dc-beed7b545112-kube-api-access-k4wmg\") pod \"ovnkube-node-hn77p\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" Jan 12 13:06:59 crc kubenswrapper[4580]: I0112 13:06:59.890197 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8ch98" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f20fb33-a98a-4b04-81b9-5ea16ae9f57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nbmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8ch98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 12 13:07:00 crc kubenswrapper[4580]: I0112 13:07:00.023196 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-nnz5s" Jan 12 13:07:00 crc kubenswrapper[4580]: I0112 13:07:00.032405 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-2p6r8" Jan 12 13:07:00 crc kubenswrapper[4580]: I0112 13:07:00.045411 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" Jan 12 13:07:00 crc kubenswrapper[4580]: W0112 13:07:00.129360 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8f39bcc_5a25_4746_988b_2251fd1be8c9.slice/crio-2df17b74abcc9866094cae4bbaf52f6edd16404af6039156a653ab1aaeb537d7 WatchSource:0}: Error finding container 2df17b74abcc9866094cae4bbaf52f6edd16404af6039156a653ab1aaeb537d7: Status 404 returned error can't find the container with id 2df17b74abcc9866094cae4bbaf52f6edd16404af6039156a653ab1aaeb537d7 Jan 12 13:07:00 crc kubenswrapper[4580]: W0112 13:07:00.131445 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2223aac_784e_4653_8939_fcbd18c70ba7.slice/crio-9eea6ffc615665e9a39139bf7ff2c524495e89fc686bcc97dd8a5d6a0cbf3b9d WatchSource:0}: Error finding container 9eea6ffc615665e9a39139bf7ff2c524495e89fc686bcc97dd8a5d6a0cbf3b9d: Status 404 returned error can't find the container with id 9eea6ffc615665e9a39139bf7ff2c524495e89fc686bcc97dd8a5d6a0cbf3b9d Jan 12 13:07:00 crc kubenswrapper[4580]: W0112 13:07:00.132322 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd4e0810_eddb_47f5_a7dc_beed7b545112.slice/crio-8d0f642878b6350ced97adc24d17817f9f071e1ad17e703a8ea84dfd802b7dd8 WatchSource:0}: Error finding container 8d0f642878b6350ced97adc24d17817f9f071e1ad17e703a8ea84dfd802b7dd8: Status 404 returned error can't find the container with id 8d0f642878b6350ced97adc24d17817f9f071e1ad17e703a8ea84dfd802b7dd8 Jan 12 13:07:00 crc kubenswrapper[4580]: I0112 13:07:00.281420 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 12 13:07:00 crc kubenswrapper[4580]: E0112 13:07:00.281551 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 12 13:07:00 crc kubenswrapper[4580]: I0112 13:07:00.377879 4580 generic.go:334] "Generic (PLEG): container finished" podID="fd4e0810-eddb-47f5-a7dc-beed7b545112" containerID="8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b" exitCode=0 Jan 12 13:07:00 crc kubenswrapper[4580]: I0112 13:07:00.377977 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" event={"ID":"fd4e0810-eddb-47f5-a7dc-beed7b545112","Type":"ContainerDied","Data":"8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b"} Jan 12 13:07:00 crc kubenswrapper[4580]: I0112 13:07:00.378072 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" event={"ID":"fd4e0810-eddb-47f5-a7dc-beed7b545112","Type":"ContainerStarted","Data":"8d0f642878b6350ced97adc24d17817f9f071e1ad17e703a8ea84dfd802b7dd8"} Jan 12 13:07:00 crc kubenswrapper[4580]: I0112 13:07:00.379746 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2p6r8" event={"ID":"d2223aac-784e-4653-8939-fcbd18c70ba7","Type":"ContainerStarted","Data":"6f1dc0fffc41810cdb9a5eeb53b19f6a23d70a8133c6e12b19df575f86a55d18"} Jan 12 13:07:00 crc kubenswrapper[4580]: I0112 13:07:00.379780 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2p6r8" 
event={"ID":"d2223aac-784e-4653-8939-fcbd18c70ba7","Type":"ContainerStarted","Data":"9eea6ffc615665e9a39139bf7ff2c524495e89fc686bcc97dd8a5d6a0cbf3b9d"} Jan 12 13:07:00 crc kubenswrapper[4580]: I0112 13:07:00.387287 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-nnz5s" event={"ID":"c8f39bcc-5a25-4746-988b-2251fd1be8c9","Type":"ContainerStarted","Data":"56aa8b2b49ab1c35203cc85f8e7cd333d538b5739be0e36db8a3fa8263c079ce"} Jan 12 13:07:00 crc kubenswrapper[4580]: I0112 13:07:00.387343 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-nnz5s" event={"ID":"c8f39bcc-5a25-4746-988b-2251fd1be8c9","Type":"ContainerStarted","Data":"2df17b74abcc9866094cae4bbaf52f6edd16404af6039156a653ab1aaeb537d7"} Jan 12 13:07:00 crc kubenswrapper[4580]: I0112 13:07:00.390443 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"88fb543f1489aa79642944188788308013ed9b6bacb720a3ee689b376cbc6a33"} Jan 12 13:07:00 crc kubenswrapper[4580]: I0112 13:07:00.390480 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"93c255ac7c03a72e970cea0fadc8dcafb7989822fe8e178635d13948467e5d0d"} Jan 12 13:07:00 crc kubenswrapper[4580]: I0112 13:07:00.395503 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8ch98" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f20fb33-a98a-4b04-81b9-5ea16ae9f57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nbmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8ch98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:00Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:00 crc kubenswrapper[4580]: I0112 13:07:00.397857 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-8ch98" event={"ID":"4f20fb33-a98a-4b04-81b9-5ea16ae9f57c","Type":"ContainerStarted","Data":"643e92b14688d35a567c7351e9231a8855ec7d9704cc97466c2d901c4525108a"} Jan 12 13:07:00 crc kubenswrapper[4580]: I0112 13:07:00.397888 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-8ch98" 
event={"ID":"4f20fb33-a98a-4b04-81b9-5ea16ae9f57c","Type":"ContainerStarted","Data":"bf29be493cd99f967c321ced1b647726e8b9e89f1b05836c30ab83574502cd6f"} Jan 12 13:07:00 crc kubenswrapper[4580]: I0112 13:07:00.403571 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"1aca7dc49351605463b9ee72e9f8703d16cc216478f5e5ccd70d4d1daef1df85"} Jan 12 13:07:00 crc kubenswrapper[4580]: I0112 13:07:00.409175 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"e120eaa6bd8e36a0bc509f7877252fbf4b0cebb89222dd193f75502e472fa7af"} Jan 12 13:07:00 crc kubenswrapper[4580]: I0112 13:07:00.409303 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"f05ca3c8a1887284f1162c44d1b917ad955eb8d77b816e830caddffdf0430383"} Jan 12 13:07:00 crc kubenswrapper[4580]: I0112 13:07:00.409365 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"777508ab0df5473805ec9cfa5a443509e9ac9fcf72869005313e1247f4b87ae7"} Jan 12 13:07:00 crc kubenswrapper[4580]: I0112 13:07:00.410023 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:00Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:00 crc kubenswrapper[4580]: I0112 13:07:00.414618 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 12 13:07:00 crc kubenswrapper[4580]: I0112 13:07:00.416929 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"05c5ad3ad752dde0d33f89e89540f22790aa2905185c704d407fe605655c8e28"} Jan 12 13:07:00 crc kubenswrapper[4580]: I0112 13:07:00.423474 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nnz5s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f39bcc-5a25-4746-988b-2251fd1be8c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m82m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nnz5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:00Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:00 crc kubenswrapper[4580]: I0112 13:07:00.439553 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd4e0810-eddb-47f5-a7dc-beed7b545112\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hn77p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:00Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:00 crc kubenswrapper[4580]: I0112 13:07:00.453204 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9730289-8e50-4a9a-b474-db6c268d5a30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2262814ad3b77a7aecef6dc39226a540c7d7839576606e11c4765c858e81834\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80ca0769a1431fd4c134322feb11db7e54dd85e8f6b18a0ea43da48fe9b05005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c620e4b41d6183e427d9b95acc0e6e20f24998d210c706d93d0e8b08def41b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0c7ac25add51f8a9be790b9d47bc39155d83c4da0f3b241897d1395686feb68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0c7ac25add51f8a9be790b9d47bc39155d83c4da0f3b241897d1395686feb68\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-12T13:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": 
net/http: TLS handshake timeout\\\\nI0112 13:06:53.362253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0112 13:06:53.363131 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2861103618/tls.crt::/tmp/serving-cert-2861103618/tls.key\\\\\\\"\\\\nI0112 13:06:58.635258 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0112 13:06:58.636943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0112 13:06:58.636960 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0112 13:06:58.636978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0112 13:06:58.636983 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0112 13:06:58.642885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0112 13:06:58.642904 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0112 13:06:58.642919 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642925 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0112 13:06:58.642931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0112 13:06:58.642934 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0112 13:06:58.642937 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0112 13:06:58.645379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eeac0b697ceba82e51d043f12dcf4c6f0028990416b1ee40c5181232d962192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:00Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:00 crc kubenswrapper[4580]: I0112 13:07:00.462399 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:00Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:00 crc kubenswrapper[4580]: I0112 13:07:00.474252 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hdz6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:00Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:00 crc kubenswrapper[4580]: I0112 13:07:00.486380 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2p6r8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2223aac-784e-4653-8939-fcbd18c70ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2p6r8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:00Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:00 crc kubenswrapper[4580]: I0112 13:07:00.496146 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:00Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:00 crc kubenswrapper[4580]: I0112 13:07:00.509052 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:00Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:00 crc kubenswrapper[4580]: I0112 13:07:00.520334 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:00Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:00 crc kubenswrapper[4580]: I0112 13:07:00.530751 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:00Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:00 crc kubenswrapper[4580]: I0112 13:07:00.540372 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:00Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:00 crc kubenswrapper[4580]: I0112 13:07:00.548194 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hdz6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:00Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:00 crc kubenswrapper[4580]: I0112 13:07:00.559469 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2p6r8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2223aac-784e-4653-8939-fcbd18c70ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1dc0fffc41810cdb9a5eeb53b19f6a23d70a8133c6e12b19df575f86a55d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2p6r8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:00Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:00 crc kubenswrapper[4580]: I0112 13:07:00.569015 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fb543f1489aa79642944188788308013ed9b6bacb720a3ee689b376cbc6a33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:00Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:00 crc kubenswrapper[4580]: I0112 13:07:00.576864 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e120eaa6bd8e36a0bc509f7877252fbf4b0cebb89222dd193f75502e472fa7af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://f05ca3c8a1887284f1162c44d1b917ad955eb8d77b816e830caddffdf0430383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:00Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:00 crc kubenswrapper[4580]: I0112 13:07:00.590664 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:00Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:00 crc kubenswrapper[4580]: I0112 13:07:00.599912 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:00Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:00 crc kubenswrapper[4580]: I0112 13:07:00.609247 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8ch98" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f20fb33-a98a-4b04-81b9-5ea16ae9f57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://643e92b14688d35a567c7351e9231a8855ec7d9704cc97466c2d901c4525108a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nbmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8ch98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:00Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:00 crc kubenswrapper[4580]: I0112 13:07:00.627168 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9730289-8e50-4a9a-b474-db6c268d5a30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2262814ad3b77a7aecef6dc39226a540c7d7839576606e11c4765c858e81834\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80ca0769a1431fd4c134322feb11db7e54dd85e8f6b18a0ea43da48fe9b05005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d3c620e4b41d6183e427d9b95acc0e6e20f24998d210c706d93d0e8b08def41b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05c5ad3ad752dde0d33f89e89540f22790aa2905185c704d407fe605655c8e28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0c7ac25add51f8a9be790b9d47bc39155d83c4da0f3b241897d1395686feb68\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-12T13:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0112 13:06:53.362253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0112 13:06:53.363131 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2861103618/tls.crt::/tmp/serving-cert-2861103618/tls.key\\\\\\\"\\\\nI0112 13:06:58.635258 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0112 13:06:58.636943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0112 13:06:58.636960 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0112 13:06:58.636978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0112 13:06:58.636983 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0112 13:06:58.642885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0112 13:06:58.642904 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0112 13:06:58.642919 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642925 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0112 13:06:58.642931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0112 13:06:58.642934 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0112 13:06:58.642937 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0112 13:06:58.645379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eeac0b697ceba82e51d043f12dcf4c6f0028990416b1ee40c5181232d962192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:00Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:00 crc kubenswrapper[4580]: I0112 13:07:00.643551 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:00Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:00 crc kubenswrapper[4580]: I0112 13:07:00.656862 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nnz5s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f39bcc-5a25-4746-988b-2251fd1be8c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56aa8b2b49ab1c35203cc85f8e7cd333d538b5739be0e36db8a3fa8263c079ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m82m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nnz5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:00Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:00 crc kubenswrapper[4580]: I0112 13:07:00.673425 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd4e0810-eddb-47f5-a7dc-beed7b545112\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hn77p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:00Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:00 crc kubenswrapper[4580]: I0112 13:07:00.689415 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 12 13:07:00 crc kubenswrapper[4580]: I0112 13:07:00.701011 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/aaecc77f-21ca-4f15-86e0-0dff03d2ab7b-proxy-tls\") pod \"machine-config-daemon-hdz6l\" (UID: \"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b\") " pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" Jan 12 13:07:00 crc kubenswrapper[4580]: I0112 13:07:00.873301 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 12 13:07:00 crc kubenswrapper[4580]: I0112 13:07:00.873393 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: 
\"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 12 13:07:00 crc kubenswrapper[4580]: E0112 13:07:00.873451 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-12 13:07:02.873426785 +0000 UTC m=+21.917645475 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:07:00 crc kubenswrapper[4580]: E0112 13:07:00.873501 4580 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 12 13:07:00 crc kubenswrapper[4580]: E0112 13:07:00.873517 4580 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 12 13:07:00 crc kubenswrapper[4580]: E0112 13:07:00.873527 4580 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 12 13:07:00 crc kubenswrapper[4580]: I0112 13:07:00.873537 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:07:00 crc kubenswrapper[4580]: E0112 13:07:00.873563 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-12 13:07:02.873551119 +0000 UTC m=+21.917769809 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 12 13:07:00 crc kubenswrapper[4580]: I0112 13:07:00.873582 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:07:00 crc kubenswrapper[4580]: I0112 13:07:00.873605 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 12 13:07:00 crc kubenswrapper[4580]: E0112 13:07:00.873604 
4580 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 12 13:07:00 crc kubenswrapper[4580]: E0112 13:07:00.873654 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-12 13:07:02.873645767 +0000 UTC m=+21.917864458 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 12 13:07:00 crc kubenswrapper[4580]: E0112 13:07:00.873670 4580 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 12 13:07:00 crc kubenswrapper[4580]: E0112 13:07:00.873682 4580 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 12 13:07:00 crc kubenswrapper[4580]: E0112 13:07:00.873692 4580 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 12 13:07:00 crc kubenswrapper[4580]: E0112 13:07:00.873691 4580 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 12 13:07:00 crc 
kubenswrapper[4580]: E0112 13:07:00.873736 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-12 13:07:02.873722271 +0000 UTC m=+21.917940961 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 12 13:07:00 crc kubenswrapper[4580]: E0112 13:07:00.873751 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-12 13:07:02.873745395 +0000 UTC m=+21.917964086 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 12 13:07:00 crc kubenswrapper[4580]: I0112 13:07:00.918299 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" Jan 12 13:07:00 crc kubenswrapper[4580]: W0112 13:07:00.929765 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaaecc77f_21ca_4f15_86e0_0dff03d2ab7b.slice/crio-4088f30d31fba9c8df1fa1f91357a35f7f163e848b24d60eeeca3e521ee2d416 WatchSource:0}: Error finding container 4088f30d31fba9c8df1fa1f91357a35f7f163e848b24d60eeeca3e521ee2d416: Status 404 returned error can't find the container with id 4088f30d31fba9c8df1fa1f91357a35f7f163e848b24d60eeeca3e521ee2d416 Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.161442 4580 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 12 13:07:01 crc kubenswrapper[4580]: W0112 13:07:01.162078 4580 reflector.go:484] object-"openshift-machine-config-operator"/"proxy-tls": watch of *v1.Secret ended with: very short watch: object-"openshift-machine-config-operator"/"proxy-tls": Unexpected watch close - watch lasted less than a second and no items received Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.234366 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.241931 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.244968 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9730289-8e50-4a9a-b474-db6c268d5a30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2262814ad3b77a7aecef6dc39226a540c7d7839576606e11c4765c858e81834\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80ca0769a1431fd4c134322feb11db7e54dd85e8f6b18a0ea43da48fe9b05005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c620e4b41d6183e427d9b95acc0e6e20f24998d210c706d93d0e8b08def41b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05c5ad3ad752dde0d33f89e89540f22790aa2905185c704d407fe605655c8e28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0c7ac25add51f8a9be790b9d47bc39155d83c4da0f3b241897d1395686feb68\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-12T13:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0112 13:06:53.362253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0112 13:06:53.363131 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2861103618/tls.crt::/tmp/serving-cert-2861103618/tls.key\\\\\\\"\\\\nI0112 13:06:58.635258 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0112 13:06:58.636943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0112 13:06:58.636960 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0112 13:06:58.636978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0112 13:06:58.636983 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0112 13:06:58.642885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0112 13:06:58.642904 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0112 13:06:58.642919 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642925 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0112 
13:06:58.642931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0112 13:06:58.642934 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0112 13:06:58.642937 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0112 13:06:58.645379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eeac0b697ceba82e51d043f12dcf4c6f0028990416b1ee40c5181232d962192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:01Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.258357 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:01Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.271620 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nnz5s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f39bcc-5a25-4746-988b-2251fd1be8c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56aa8b2b49ab1c35203cc85f8e7cd333d538b5739be0e36db8a3fa8263c079ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m82m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nnz5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:01Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.280969 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.281013 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:07:01 crc kubenswrapper[4580]: E0112 13:07:01.281068 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 12 13:07:01 crc kubenswrapper[4580]: E0112 13:07:01.281166 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.284211 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.284943 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.285446 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd4e0810-eddb-47f5-a7dc-beed7b545112\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa
41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\
\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,
\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hn77p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:01Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.293606 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:01Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.301358 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hdz6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:01Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.311761 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2p6r8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2223aac-784e-4653-8939-fcbd18c70ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1dc0fffc41810cdb9a5eeb53b19f6a23d70a8133c6e12b19df575f86a55d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2p6r8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:01Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.320947 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fb543f1489aa79642944188788308013ed9b6bacb720a3ee689b376cbc6a33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:01Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.329563 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e120eaa6bd8e36a0bc509f7877252fbf4b0cebb89222dd193f75502e472fa7af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://f05ca3c8a1887284f1162c44d1b917ad955eb8d77b816e830caddffdf0430383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:01Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.338710 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:01Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.348357 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:01Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.358383 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8ch98" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f20fb33-a98a-4b04-81b9-5ea16ae9f57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://643e92b14688d35a567c7351e9231a8855ec7d9704cc97466c2d901c4525108a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nbmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8ch98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:01Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.368946 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9730289-8e50-4a9a-b474-db6c268d5a30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2262814ad3b77a7aecef6dc39226a540c7d7839576606e11c4765c858e81834\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80ca0769a1431fd4c134322feb11db7e54dd85e8f6b18a0ea43da48fe9b05005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d3c620e4b41d6183e427d9b95acc0e6e20f24998d210c706d93d0e8b08def41b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05c5ad3ad752dde0d33f89e89540f22790aa2905185c704d407fe605655c8e28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0c7ac25add51f8a9be790b9d47bc39155d83c4da0f3b241897d1395686feb68\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-12T13:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0112 13:06:53.362253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0112 13:06:53.363131 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2861103618/tls.crt::/tmp/serving-cert-2861103618/tls.key\\\\\\\"\\\\nI0112 13:06:58.635258 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0112 13:06:58.636943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0112 13:06:58.636960 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0112 13:06:58.636978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0112 13:06:58.636983 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0112 13:06:58.642885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0112 13:06:58.642904 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0112 13:06:58.642919 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642925 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0112 13:06:58.642931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0112 13:06:58.642934 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0112 13:06:58.642937 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0112 13:06:58.645379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eeac0b697ceba82e51d043f12dcf4c6f0028990416b1ee40c5181232d962192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:01Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.383952 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14cae238-29c1-4657-b3f0-6a834484f48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b1b813e14b2e613be951c247a67eb9b5b29604c639ec2c8a26c652911e0a342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc8b55ba464a72a72e6361e6847c4e8c8b27f317e8eba5d95923fbaf62589880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://259d2e205fd4a46e432a91b0e09
646a58b44d6da55b06c6d4ac87010c85babc4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00bb60e0955774504f186a916e89495432d2ea6a6b01cadbbe0cc6871383a030\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod 
\"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:01Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.396977 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:01Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.407167 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nnz5s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f39bcc-5a25-4746-988b-2251fd1be8c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56aa8b2b49ab1c35203cc85f8e7cd333d538b5739be0e36db8a3fa8263c079ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m82m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nnz5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:01Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.423291 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" 
event={"ID":"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b","Type":"ContainerStarted","Data":"e3accce5d840e81a67e212ff934059ad73525c6ff3c73ed6ab4c6e2289a4d7bf"} Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.423352 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" event={"ID":"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b","Type":"ContainerStarted","Data":"60b7e67369583f18d56633483204d326449c0f7456afe4b4fd1e7134eff438cb"} Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.423368 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" event={"ID":"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b","Type":"ContainerStarted","Data":"4088f30d31fba9c8df1fa1f91357a35f7f163e848b24d60eeeca3e521ee2d416"} Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.423764 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd4e0810-eddb-47f5-a7dc-beed7b545112\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hn77p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:01Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.427138 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" event={"ID":"fd4e0810-eddb-47f5-a7dc-beed7b545112","Type":"ContainerStarted","Data":"fc26f2fe9c241fc3ede61426abd140792056fe45e03192531431303ac9669685"} Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.427166 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" event={"ID":"fd4e0810-eddb-47f5-a7dc-beed7b545112","Type":"ContainerStarted","Data":"381c313bb77deef21772fc32104aec4c0325e3493c641e2bf615bd897e58c71a"} Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.427180 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" 
event={"ID":"fd4e0810-eddb-47f5-a7dc-beed7b545112","Type":"ContainerStarted","Data":"57fdd89443f292661ae2a8f73016f4a7f2889c08ffebd55d67ada2590b4344db"} Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.427191 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" event={"ID":"fd4e0810-eddb-47f5-a7dc-beed7b545112","Type":"ContainerStarted","Data":"4fac5585e690495e9f154b99e6a05f94dd617a57d0826867644b56df00697b9a"} Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.427203 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" event={"ID":"fd4e0810-eddb-47f5-a7dc-beed7b545112","Type":"ContainerStarted","Data":"34ac8df759fbebae467ffd8c178ca19221cefd5f3c1aa999cd23e5d1e53a6187"} Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.427215 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" event={"ID":"fd4e0810-eddb-47f5-a7dc-beed7b545112","Type":"ContainerStarted","Data":"18b37c3b2535deee762ef305825de0a884e9088e57a34910ad2fcdaeb2d49d9a"} Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.431809 4580 generic.go:334] "Generic (PLEG): container finished" podID="d2223aac-784e-4653-8939-fcbd18c70ba7" containerID="6f1dc0fffc41810cdb9a5eeb53b19f6a23d70a8133c6e12b19df575f86a55d18" exitCode=0 Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.431900 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2p6r8" event={"ID":"d2223aac-784e-4653-8939-fcbd18c70ba7","Type":"ContainerDied","Data":"6f1dc0fffc41810cdb9a5eeb53b19f6a23d70a8133c6e12b19df575f86a55d18"} Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.432954 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.433967 4580 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:01Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.445527 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hdz6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:01Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.467793 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2p6r8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2223aac-784e-4653-8939-fcbd18c70ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1dc0fffc41810cdb9a5eeb53b19f6a23d70a8133c6e12b19df575f86a55d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2p6r8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:01Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.506945 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fb543f1489aa79642944188788308013ed9b6bacb720a3ee689b376cbc6a33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:01Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.546087 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e120eaa6bd8e36a0bc509f7877252fbf4b0cebb89222dd193f75502e472fa7af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://f05ca3c8a1887284f1162c44d1b917ad955eb8d77b816e830caddffdf0430383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:01Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.587236 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:01Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.624285 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:01Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.645718 4580 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.648704 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.648731 4580 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.648740 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.648891 4580 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.664194 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8ch98" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f20fb33-a98a-4b04-81b9-5ea16ae9f57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://643e92b14688d35a567c7351e9231a8855ec7d9704cc97466c2d901c4525108a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-nod
e-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nbmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8ch98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:01Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.718555 4580 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.718710 4580 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.719541 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.719561 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.719569 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.719579 4580 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.719588 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:01Z","lastTransitionTime":"2026-01-12T13:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:01 crc kubenswrapper[4580]: E0112 13:07:01.733391 4580 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b4cb507-f154-474c-bea1-057456e7be91\\\",\\\"systemUUID\\\":\\\"f50d9485-f990-498d-a5ee-4bb4dd1663df\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:01Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.737716 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.737772 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.737785 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.737807 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.737822 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:01Z","lastTransitionTime":"2026-01-12T13:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.744625 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fb543f1489aa79642944188788308013ed9b6bacb720a3ee689b376cbc6a33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:01Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:01 crc kubenswrapper[4580]: E0112 13:07:01.747078 4580 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b4cb507-f154-474c-bea1-057456e7be91\\\",\\\"systemUUID\\\":\\\"f50d9485-f990-498d-a5ee-4bb4dd1663df\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:01Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.750581 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.750635 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.750648 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.750665 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.750675 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:01Z","lastTransitionTime":"2026-01-12T13:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:01 crc kubenswrapper[4580]: E0112 13:07:01.759949 4580 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b4cb507-f154-474c-bea1-057456e7be91\\\",\\\"systemUUID\\\":\\\"f50d9485-f990-498d-a5ee-4bb4dd1663df\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:01Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.762747 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.762788 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.762798 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.762813 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.762829 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:01Z","lastTransitionTime":"2026-01-12T13:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:01 crc kubenswrapper[4580]: E0112 13:07:01.774351 4580 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b4cb507-f154-474c-bea1-057456e7be91\\\",\\\"systemUUID\\\":\\\"f50d9485-f990-498d-a5ee-4bb4dd1663df\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:01Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.777015 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.777053 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.777066 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.777085 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.777095 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:01Z","lastTransitionTime":"2026-01-12T13:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.784715 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e120eaa6bd8e36a0bc509f7877252fbf4b0cebb89222dd193f75502e472fa7af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f05ca3c8a1887284f1162c44d1b917ad955eb8d77b816e830caddffdf0430383\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:01Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:01 crc kubenswrapper[4580]: E0112 13:07:01.788571 4580 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b4cb507-f154-474c-bea1-057456e7be91\\\",\\\"systemUUID\\\":\\\"f50d9485-f990-498d-a5ee-4bb4dd1663df\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:01Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:01 crc kubenswrapper[4580]: E0112 13:07:01.788686 4580 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.789958 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.789987 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.789997 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.790011 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.790028 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:01Z","lastTransitionTime":"2026-01-12T13:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.825439 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:01Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.864988 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:01Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.892066 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.892130 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.892144 4580 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.892159 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.892170 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:01Z","lastTransitionTime":"2026-01-12T13:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.904870 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8ch98" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f20fb33-a98a-4b04-81b9-5ea16ae9f57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://643e92b14688d35a567c7351e9231a8855ec7d9
704cc97466c2d901c4525108a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nbmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8ch98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:01Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.952305 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9730289-8e50-4a9a-b474-db6c268d5a30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2262814ad3b77a7aecef6dc39226a540c7d7839576606e11c4765c858e81834\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80ca0769a1431fd4c134322feb11db7e54dd85e8f6b18a0ea43da48fe9b05005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c620e4b41d6183e427d9b95acc0e6e20f24998d210c706d93d0e8b08def41b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05c5ad3ad752dde0d33f89e89540f22790aa2905185c704d407fe605655c8e28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0c7ac25add51f8a9be790b9d47bc39155d83c4da0f3b241897d1395686feb68\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-12T13:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0112 13:06:53.362253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0112 13:06:53.363131 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2861103618/tls.crt::/tmp/serving-cert-2861103618/tls.key\\\\\\\"\\\\nI0112 13:06:58.635258 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0112 13:06:58.636943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0112 13:06:58.636960 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0112 13:06:58.636978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0112 13:06:58.636983 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0112 13:06:58.642885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0112 13:06:58.642904 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0112 13:06:58.642919 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642925 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0112 
13:06:58.642931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0112 13:06:58.642934 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0112 13:06:58.642937 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0112 13:06:58.645379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eeac0b697ceba82e51d043f12dcf4c6f0028990416b1ee40c5181232d962192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:01Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.993952 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.993981 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.993990 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.994004 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:01 crc kubenswrapper[4580]: I0112 13:07:01.994012 4580 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:01Z","lastTransitionTime":"2026-01-12T13:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.007428 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14cae238-29c1-4657-b3f0-6a834484f48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b1b813e14b2e613be951c247a67eb9b5b29604c639ec2c8a26c652911e0a342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc8b55ba464a72a72e6361e6847c4e8c8b27f317e8eba5d95923fbaf62589880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://259d2e205fd4a46e432a91b0e09
646a58b44d6da55b06c6d4ac87010c85babc4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00bb60e0955774504f186a916e89495432d2ea6a6b01cadbbe0cc6871383a030\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod 
\"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:02Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.027264 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:02Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.066996 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nnz5s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f39bcc-5a25-4746-988b-2251fd1be8c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56aa8b2b49ab1c35203cc85f8e7cd333d538b5739be0e36db8a3fa8263c079ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m82m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nnz5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:02Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.096083 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:02 crc 
kubenswrapper[4580]: I0112 13:07:02.096126 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.096135 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.096150 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.096158 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:02Z","lastTransitionTime":"2026-01-12T13:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.108935 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd4e0810-eddb-47f5-a7dc-beed7b545112\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\
\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hn77p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:02Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.144162 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:02Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.184077 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3accce5d840e81a67e212ff934059ad73525c6ff3c73ed6ab4c6e2289a4d7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60b7e6736958
3f18d56633483204d326449c0f7456afe4b4fd1e7134eff438cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hdz6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:02Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.198803 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.198828 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.198838 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 
12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.198853 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.198863 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:02Z","lastTransitionTime":"2026-01-12T13:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.226211 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2p6r8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2223aac-784e-4653-8939-fcbd18c70ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1dc0fffc41810cdb9a5eeb53b19f6a23d70a8133c6e12b19df575f86a55d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1dc0fffc41810cdb9a5eeb53b19f6a23d70a8133c6e12b19df575f86a55d18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/b
in\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2p6r8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:02Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.280946 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 12 13:07:02 crc kubenswrapper[4580]: E0112 13:07:02.281055 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.300964 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.300994 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.301003 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.301016 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.301027 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:02Z","lastTransitionTime":"2026-01-12T13:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.403477 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.403534 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.403544 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.403559 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.403570 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:02Z","lastTransitionTime":"2026-01-12T13:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.438754 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"a82c47afb3ec7afc7fa35ff0e1e85e288f9e1a908459024005a16c0c8f3b0050"} Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.441867 4580 generic.go:334] "Generic (PLEG): container finished" podID="d2223aac-784e-4653-8939-fcbd18c70ba7" containerID="ab60600011f08831d514dad04b97fb6b587736b18b55b1bff9a33143b9a92997" exitCode=0 Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.441923 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2p6r8" event={"ID":"d2223aac-784e-4653-8939-fcbd18c70ba7","Type":"ContainerDied","Data":"ab60600011f08831d514dad04b97fb6b587736b18b55b1bff9a33143b9a92997"} Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.452371 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:02Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.466147 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nnz5s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f39bcc-5a25-4746-988b-2251fd1be8c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56aa8b2b49ab1c35203cc85f8e7cd333d538b5739be0e36db8a3fa8263c079ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m82m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nnz5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:02Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.484580 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd4e0810-eddb-47f5-a7dc-beed7b545112\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hn77p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:02Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.495818 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9730289-8e50-4a9a-b474-db6c268d5a30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2262814ad3b77a7aecef6dc39226a540c7d7839576606e11c4765c858e81834\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80ca0769a1431fd4c134322feb11db7e54dd85e8f6b18a0ea43da48fe9b05005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d3c620e4b41d6183e427d9b95acc0e6e20f24998d210c706d93d0e8b08def41b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05c5ad3ad752dde0d33f89e89540f22790aa2905185c704d407fe605655c8e28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0c7ac25add51f8a9be790b9d47bc39155d83c4da0f3b241897d1395686feb68\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-12T13:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0112 13:06:53.362253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0112 13:06:53.363131 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2861103618/tls.crt::/tmp/serving-cert-2861103618/tls.key\\\\\\\"\\\\nI0112 13:06:58.635258 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0112 13:06:58.636943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0112 13:06:58.636960 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0112 13:06:58.636978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0112 13:06:58.636983 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0112 13:06:58.642885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0112 13:06:58.642904 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0112 13:06:58.642919 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642925 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0112 13:06:58.642931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0112 13:06:58.642934 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0112 13:06:58.642937 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0112 13:06:58.645379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eeac0b697ceba82e51d043f12dcf4c6f0028990416b1ee40c5181232d962192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:02Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.505812 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.505842 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.505852 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.505869 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.505878 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:02Z","lastTransitionTime":"2026-01-12T13:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.505972 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14cae238-29c1-4657-b3f0-6a834484f48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b1b813e14b2e613be951c247a67eb9b5b29604c639ec2c8a26c652911e0a342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc8b55ba464a72a72e6361e6847c4e8c8b27f317e8eba5d95923fbaf62589880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://259d2e205fd4a46e432a91b0e09
646a58b44d6da55b06c6d4ac87010c85babc4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00bb60e0955774504f186a916e89495432d2ea6a6b01cadbbe0cc6871383a030\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod 
\"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:02Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.515525 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3accce5d840e81a67e212ff934059ad73525c6ff3c73ed6ab4c6e2289a4d7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60b7e67369583f18d56633483204d326449c0f7456afe4b4fd1e7134eff438cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hdz6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-12T13:07:02Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.526453 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2p6r8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2223aac-784e-4653-8939-fcbd18c70ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1dc0fffc41810cdb9a5eeb53b19f6a23d70a8133c6e12b19df575f86a55d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1dc0fffc41810cdb9a5eeb53b19f6a23d70a8133c6e12b19df575f86a55d18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2p6r8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:02Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.544277 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a82c47afb3ec7afc7fa35ff0e1e85e288f9e1a908459024005a16c0c8f3b0050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-12T13:07:02Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.585798 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e120eaa6bd8e36a0bc509f7877252fbf4b0cebb89222dd193f75502e472fa7af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f05ca3c8a188728
4f1162c44d1b917ad955eb8d77b816e830caddffdf0430383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:02Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.607560 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.607643 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.607663 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.607678 4580 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.607688 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:02Z","lastTransitionTime":"2026-01-12T13:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.624559 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when 
the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:02Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.664327 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:02Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.705303 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fb543f1489aa79642944188788308013ed9b6bacb720a3ee689b376cbc6a33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:02Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.710148 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.710190 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.710201 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.710218 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.710231 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:02Z","lastTransitionTime":"2026-01-12T13:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.742903 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8ch98" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f20fb33-a98a-4b04-81b9-5ea16ae9f57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://643e92b14688d35a567c7351e9231a8855ec7d9704cc97466c2d901c4525108a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nbmf\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8ch98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:02Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.782955 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8ch98" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f20fb33-a98a-4b04-81b9-5ea16ae9f57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://643e92b14688d35a567c7351e9231a8855ec7d9704cc97466c2d901c4525108a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nbmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8ch98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:02Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.812752 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.812784 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.812795 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.812813 4580 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.812823 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:02Z","lastTransitionTime":"2026-01-12T13:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.825133 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:02Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.866149 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nnz5s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f39bcc-5a25-4746-988b-2251fd1be8c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56aa8b2b49ab1c35203cc85f8e7cd333d538b5739be0e36db8a3fa8263c079ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m82m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nnz5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:02Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.890374 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.890480 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.890517 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:07:02 crc kubenswrapper[4580]: E0112 13:07:02.890545 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-12 13:07:06.890521205 +0000 UTC m=+25.934739895 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.890583 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:07:02 crc kubenswrapper[4580]: E0112 13:07:02.890610 4580 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.890627 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 12 13:07:02 crc kubenswrapper[4580]: E0112 13:07:02.890657 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-12 13:07:06.890645981 +0000 UTC m=+25.934864670 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 12 13:07:02 crc kubenswrapper[4580]: E0112 13:07:02.890757 4580 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 12 13:07:02 crc kubenswrapper[4580]: E0112 13:07:02.890771 4580 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 12 13:07:02 crc kubenswrapper[4580]: E0112 13:07:02.890784 4580 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 12 13:07:02 crc kubenswrapper[4580]: E0112 13:07:02.890813 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-12 13:07:06.890805601 +0000 UTC m=+25.935024290 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 12 13:07:02 crc kubenswrapper[4580]: E0112 13:07:02.890845 4580 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 12 13:07:02 crc kubenswrapper[4580]: E0112 13:07:02.890865 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-12 13:07:06.890859102 +0000 UTC m=+25.935077792 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 12 13:07:02 crc kubenswrapper[4580]: E0112 13:07:02.890920 4580 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 12 13:07:02 crc kubenswrapper[4580]: E0112 13:07:02.890934 4580 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 12 13:07:02 crc kubenswrapper[4580]: E0112 13:07:02.890946 4580 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 12 13:07:02 crc kubenswrapper[4580]: E0112 13:07:02.890970 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-12 13:07:06.890963318 +0000 UTC m=+25.935182008 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.908059 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd4e0810-eddb-47f5-a7dc-beed7b545112\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hn77p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:02Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.914214 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.914242 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.914252 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.914272 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.914281 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:02Z","lastTransitionTime":"2026-01-12T13:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.945831 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9730289-8e50-4a9a-b474-db6c268d5a30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2262814ad3b77a7aecef6dc39226a540c7d7839576606e11c4765c858e81834\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80ca0769a1431fd4c134322feb11db7e54dd85e8f6b18a0ea43da48fe9b05005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d3c620e4b41d6183e427d9b95acc0e6e20f24998d210c706d93d0e8b08def41b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05c5ad3ad752dde0d33f89e89540f22790aa2905185c704d407fe605655c8e28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0c7ac25add51f8a9be790b9d47bc39155d83c4da0f3b241897d1395686feb68\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-12T13:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0112 13:06:53.362253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0112 13:06:53.363131 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2861103618/tls.crt::/tmp/serving-cert-2861103618/tls.key\\\\\\\"\\\\nI0112 13:06:58.635258 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0112 13:06:58.636943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0112 13:06:58.636960 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0112 13:06:58.636978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0112 13:06:58.636983 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0112 13:06:58.642885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0112 13:06:58.642904 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0112 13:06:58.642919 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642925 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0112 13:06:58.642931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0112 13:06:58.642934 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0112 13:06:58.642937 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0112 13:06:58.645379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eeac0b697ceba82e51d043f12dcf4c6f0028990416b1ee40c5181232d962192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:02Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:02 crc kubenswrapper[4580]: I0112 13:07:02.985151 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14cae238-29c1-4657-b3f0-6a834484f48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b1b813e14b2e613be951c247a67eb9b5b29604c639ec2c8a26c652911e0a342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc8b55ba464a72a72e6361e6847c4e8c8b27f317e8eba5d95923fbaf62589880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://259d2e205fd4a46e432a91b0e09
646a58b44d6da55b06c6d4ac87010c85babc4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00bb60e0955774504f186a916e89495432d2ea6a6b01cadbbe0cc6871383a030\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod 
\"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:02Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.017500 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.018517 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.018535 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.018556 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.018570 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:03Z","lastTransitionTime":"2026-01-12T13:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.025070 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3accce5d840e81a67e212ff934059ad73525c6ff3c73ed6ab4c6e2289a4d7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60b7e67369583f18d56633483204d326449c0f7456afe4b4fd1e7134eff438cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hdz6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:03Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.066398 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2p6r8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2223aac-784e-4653-8939-fcbd18c70ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1dc0fffc41810cdb9a5eeb53b19f6a23d70a8133c6e12b19df575f86a55d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1dc0fffc41810cdb9a5eeb53b19f6a23d70a8133c6e12b19df575f86a55d18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab60600011f08831d514dad04b97fb6b587736b18b55b1bff9a33143b9a92997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab60600011f08831d514dad04b97fb6b587736b18b55b1bff9a33143b9a92997\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2p6r8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:03Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.105770 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a82c47afb3ec7afc7fa35ff0e1e85e288f9e1a908459024005a16c0c8f3b0050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-12T13:07:03Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.120404 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.120440 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.120450 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.120475 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.120485 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:03Z","lastTransitionTime":"2026-01-12T13:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.145402 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e120eaa6bd8e36a0bc509f7877252fbf4b0cebb89222dd193f75502e472fa7af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f05ca3c8a1887284f1162c44d1b917ad955eb8d77b816e830caddffdf0430383\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:03Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.184007 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:03Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.222548 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.222580 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.222589 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.222601 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.222609 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:03Z","lastTransitionTime":"2026-01-12T13:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.226026 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:03Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.264857 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fb543f1489aa79642944188788308013ed9b6bacb720a3ee689b376cbc6a33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:03Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.281283 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.281338 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:07:03 crc kubenswrapper[4580]: E0112 13:07:03.281416 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 12 13:07:03 crc kubenswrapper[4580]: E0112 13:07:03.281528 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.324011 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.324043 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.324052 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.324065 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.324074 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:03Z","lastTransitionTime":"2026-01-12T13:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.426150 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.426185 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.426195 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.426209 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.426220 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:03Z","lastTransitionTime":"2026-01-12T13:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.449545 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" event={"ID":"fd4e0810-eddb-47f5-a7dc-beed7b545112","Type":"ContainerStarted","Data":"00ff7f6b5ad3d1798e88f127c9bf71095fcbdfcf8f4338afa385717f1564ebf5"} Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.451267 4580 generic.go:334] "Generic (PLEG): container finished" podID="d2223aac-784e-4653-8939-fcbd18c70ba7" containerID="ff2709a93c305db448fb509fbbdf606c297b26f1ae08e6b9b05933c155f59416" exitCode=0 Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.451304 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2p6r8" event={"ID":"d2223aac-784e-4653-8939-fcbd18c70ba7","Type":"ContainerDied","Data":"ff2709a93c305db448fb509fbbdf606c297b26f1ae08e6b9b05933c155f59416"} Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.465170 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fb543f1489aa79642944188788308013ed9b6bacb720a3ee689b376cbc6a33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:03Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.478828 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e120eaa6bd8e36a0bc509f7877252fbf4b0cebb89222dd193f75502e472fa7af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://f05ca3c8a1887284f1162c44d1b917ad955eb8d77b816e830caddffdf0430383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:03Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.487528 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:03Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.496884 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:03Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.504540 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8ch98" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f20fb33-a98a-4b04-81b9-5ea16ae9f57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://643e92b14688d35a567c7351e9231a8855ec7d9704cc97466c2d901c4525108a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nbmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8ch98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:03Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.514309 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9730289-8e50-4a9a-b474-db6c268d5a30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2262814ad3b77a7aecef6dc39226a540c7d7839576606e11c4765c858e81834\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80ca0769a1431fd4c134322feb11db7e54dd85e8f6b18a0ea43da48fe9b05005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d3c620e4b41d6183e427d9b95acc0e6e20f24998d210c706d93d0e8b08def41b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05c5ad3ad752dde0d33f89e89540f22790aa2905185c704d407fe605655c8e28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0c7ac25add51f8a9be790b9d47bc39155d83c4da0f3b241897d1395686feb68\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-12T13:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0112 13:06:53.362253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0112 13:06:53.363131 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2861103618/tls.crt::/tmp/serving-cert-2861103618/tls.key\\\\\\\"\\\\nI0112 13:06:58.635258 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0112 13:06:58.636943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0112 13:06:58.636960 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0112 13:06:58.636978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0112 13:06:58.636983 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0112 13:06:58.642885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0112 13:06:58.642904 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0112 13:06:58.642919 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642925 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0112 13:06:58.642931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0112 13:06:58.642934 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0112 13:06:58.642937 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0112 13:06:58.645379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eeac0b697ceba82e51d043f12dcf4c6f0028990416b1ee40c5181232d962192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:03Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.528098 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.528149 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.528158 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.528172 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.528182 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:03Z","lastTransitionTime":"2026-01-12T13:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.546895 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14cae238-29c1-4657-b3f0-6a834484f48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b1b813e14b2e613be951c247a67eb9b5b29604c639ec2c8a26c652911e0a342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc8b55ba464a72a72e6361e6847c4e8c8b27f317e8eba5d95923fbaf62589880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://259d2e205fd4a46e432a91b0e09
646a58b44d6da55b06c6d4ac87010c85babc4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00bb60e0955774504f186a916e89495432d2ea6a6b01cadbbe0cc6871383a030\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod 
\"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:03Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.585312 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:03Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.627658 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nnz5s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f39bcc-5a25-4746-988b-2251fd1be8c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56aa8b2b49ab1c35203cc85f8e7cd333d538b5739be0e36db8a3fa8263c079ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m82m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nnz5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:03Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.630477 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:03 crc 
kubenswrapper[4580]: I0112 13:07:03.630543 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.630559 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.630580 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.630596 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:03Z","lastTransitionTime":"2026-01-12T13:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.631318 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.640704 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.674263 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.688183 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd4e0810-eddb-47f5-a7dc-beed7b545112\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hn77p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:03Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.723213 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a82c47afb3ec7afc7fa35ff0e1e85e288f9e1a908459024005a16c0c8f3b0050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:03Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.732923 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.733012 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.733091 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.733181 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.733238 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:03Z","lastTransitionTime":"2026-01-12T13:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.764872 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3accce5d840e81a67e212ff934059ad73525c6ff3c73ed6ab4c6e2289a4d7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60b7e67369583f18d56633483204d326449c0f7456afe4b4fd1e7134eff438cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hdz6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:03Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.804558 4580 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-2p6r8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2223aac-784e-4653-8939-fcbd18c70ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1dc0fffc41810cdb9a5eeb53b19f6a23d70a8133c6e12b19df575f86a55d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1dc0fffc41810cdb9a5eeb53b19f6a23d70a8133c6e12b19df575f86a55d18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab60600011f08831d514dad04b97fb6b587736b18b55b1bff9a33143b9a92997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab60600011f08831d514dad04b97fb6b587736b18b55b1bff9a33143b9a92997\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2709a93c305db448fb509fbbdf606c297b26f1ae08e6b9b05933c155f59416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff2709a93c305db448fb509fbbdf606c297b26f1ae08e6b9b05933c155f59416\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2p6r8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:03Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 
13:07:03.835780 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.835817 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.835826 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.835841 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.835851 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:03Z","lastTransitionTime":"2026-01-12T13:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.844838 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2p6r8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2223aac-784e-4653-8939-fcbd18c70ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1dc0fffc41810cdb9a5eeb53b19f6a23d70a8133c6e12b19df575f86a55d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1dc0fffc41810cdb9a5eeb53b19f6a23d70a8133c6e12b19df575f86a55d18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab60600011f08831d514dad04b97fb6b587736b18b55b1bff9a33143b9a92997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab60600011f08831d514dad04b97fb6b587736b18b55b1bff9a33143b9a92997\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2709a93c305db448fb509fbbdf606c297b26f1ae08e6b9b05933c155f59416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff2709a93c305db448fb509fbbdf606c297b26f1ae08e6b9b05933c155f59416\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2p6r8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:03Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 
13:07:03.882948 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a82c47afb3ec7afc7fa35ff0e1e85e288f9e1a908459024005a16c0c8f3b0050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:03Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.922482 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3accce5d840e81a67e212ff934059ad73525c6ff3c73ed6ab4c6e2289a4d7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"sta
rtedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60b7e67369583f18d56633483204d326449c0f7456afe4b4fd1e7134eff438cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hdz6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:03Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 
13:07:03.938269 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.938359 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.938428 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.938503 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.938586 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:03Z","lastTransitionTime":"2026-01-12T13:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:03 crc kubenswrapper[4580]: I0112 13:07:03.963019 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:03Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.002865 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:04Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.041260 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.041288 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.041296 4580 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.041310 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.041320 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:04Z","lastTransitionTime":"2026-01-12T13:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.054309 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35b1ac8c-9d11-4c54-98ab-fa848030f05e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1088ffa1a5bf02ca8606518a6f8c9cbeba544651dfafbb34e8860c2a12ffc1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c98177e2b081aadb6fd03620e308bb5d9ff403f1498eb875f7cf6d836dd23aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cea93cd026e7a60c22105833d2c3ada192fc16d45f46e5c9ce2652e94f92fab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1
847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c811167080fb15b5c19b8b57f76f4b8c5b2ed87d43d1b320ad024683ab58b65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14411e27d1e7de0627ca0d6f0ecbca70787ef8e9311ff3ffbb923da942e47955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://200ede5d7f69bb74d8e7d1b5081850d73057f7aef07049cab7a4dd1382de0cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://200ede5d7f69bb74d8e7d1b5081850d73057f7aef07049cab7a4dd1382de0cfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04470dc724661e24dc43e182f9c5dc106623e8dfb269280e6dc0fc0710f6a4a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04470dc724661e24dc43e182f9c5dc106623e8dfb269280e6dc0fc0710f6a4a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da31e
fcbced890b1046b1f058c1c00e4d2788162749c1da32d87c8b59360aa58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da31efcbced890b1046b1f058c1c00e4d2788162749c1da32d87c8b59360aa58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:04Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.086368 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fb543f1489aa79642944188788308013ed9b6bacb720a3ee689b376cbc6a33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:04Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.124935 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e120eaa6bd8e36a0bc509f7877252fbf4b0cebb89222dd193f75502e472fa7af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://f05ca3c8a1887284f1162c44d1b917ad955eb8d77b816e830caddffdf0430383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:04Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.143373 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.143422 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.143434 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.143456 4580 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.143466 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:04Z","lastTransitionTime":"2026-01-12T13:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.158944 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.182992 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8ch98" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f20fb33-a98a-4b04-81b9-5ea16ae9f57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://643e92b14688d35a567c7351e9231a88
55ec7d9704cc97466c2d901c4525108a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nbmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8ch98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:04Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.226832 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nnz5s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f39bcc-5a25-4746-988b-2251fd1be8c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56aa8b2b49ab1c35203cc85f8e7cd333d538b5739be0e36db8a3fa8263c079ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m82m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nnz5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:04Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.245660 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:04 crc 
kubenswrapper[4580]: I0112 13:07:04.245695 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.245704 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.245718 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.245727 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:04Z","lastTransitionTime":"2026-01-12T13:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.269197 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd4e0810-eddb-47f5-a7dc-beed7b545112\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\
\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hn77p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:04Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.281289 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 12 13:07:04 crc kubenswrapper[4580]: E0112 13:07:04.281384 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.305012 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9730289-8e50-4a9a-b474-db6c268d5a30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2262814ad3b77a7aecef6dc39226a540c7d7839576606e11c4765c858e81834\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80ca0769a1431fd4c134322feb11db7e54dd85e8f6b18a0ea43da48fe9b05005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d3c620e4b41d6183e427d9b95acc0e6e20f24998d210c706d93d0e8b08def41b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05c5ad3ad752dde0d33f89e89540f22790aa2905185c704d407fe605655c8e28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0c7ac25add51f8a9be790b9d47bc39155d83c4da0f3b241897d1395686feb68\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-12T13:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0112 13:06:53.362253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0112 13:06:53.363131 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2861103618/tls.crt::/tmp/serving-cert-2861103618/tls.key\\\\\\\"\\\\nI0112 13:06:58.635258 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0112 13:06:58.636943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0112 13:06:58.636960 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0112 13:06:58.636978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0112 13:06:58.636983 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0112 13:06:58.642885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0112 13:06:58.642904 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0112 13:06:58.642919 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642925 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0112 13:06:58.642931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0112 13:06:58.642934 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0112 13:06:58.642937 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0112 13:06:58.645379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eeac0b697ceba82e51d043f12dcf4c6f0028990416b1ee40c5181232d962192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:04Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.345471 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14cae238-29c1-4657-b3f0-6a834484f48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b1b813e14b2e613be951c247a67eb9b5b29604c639ec2c8a26c652911e0a342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc8b55ba464a72a72e6361e6847c4e8c8b27f317e8eba5d95923fbaf62589880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://259d2e205fd4a46e432a91b0e09
646a58b44d6da55b06c6d4ac87010c85babc4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00bb60e0955774504f186a916e89495432d2ea6a6b01cadbbe0cc6871383a030\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod 
\"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:04Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.347983 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.348021 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.348034 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.348055 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.348070 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:04Z","lastTransitionTime":"2026-01-12T13:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.383723 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:04Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.450366 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.450403 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.450412 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.450429 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.450444 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:04Z","lastTransitionTime":"2026-01-12T13:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.456332 4580 generic.go:334] "Generic (PLEG): container finished" podID="d2223aac-784e-4653-8939-fcbd18c70ba7" containerID="88f8708217fbcbf532b977d30ab903955722d04a00ba29ded44ce09610140e27" exitCode=0 Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.457039 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2p6r8" event={"ID":"d2223aac-784e-4653-8939-fcbd18c70ba7","Type":"ContainerDied","Data":"88f8708217fbcbf532b977d30ab903955722d04a00ba29ded44ce09610140e27"} Jan 12 13:07:04 crc kubenswrapper[4580]: E0112 13:07:04.462229 4580 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-crc\" already exists" pod="openshift-etcd/etcd-crc" Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.466966 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a82c47afb3ec7afc7fa35ff0e1e85e288f9e1a908459024005a16c0c8f3b0050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-12T13:07:04Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.487575 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3accce5d840e81a67e212ff934059ad73525c6ff3c73ed6ab4c6e2289a4d7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60b7e67369583f18d56633483204d326449c0f7456afe4b4fd1e7134eff438cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hdz6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:04Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.526174 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2p6r8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2223aac-784e-4653-8939-fcbd18c70ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1dc0fffc41810cdb9a5eeb53b19f6a23d70a8133c6e12b19df575f86a55d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1dc0fffc41810cdb9a5eeb53b19f6a23d70a8133c6e12b19df575f86a55d18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab60600011f08831d514dad04b97fb6b587736b18b55b1bff9a33143b9a92997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab60600011f08831d514dad04b97fb6b587736b18b55b1bff9a33143b9a92997\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2709a93c305db448fb509fbbdf606c297b26f1ae08e6b9b05933c155f59416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff2709a93c305db448fb509fbbdf606c297b26f1ae08e6b9b05933c155f59416\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88f8708217fbcbf532b977d30ab903955722d04a00ba29ded44ce09610140e27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88f8708217fbcbf532b977d30ab903955722d04a00ba29ded44ce09610140e27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-2p6r8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:04Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.551935 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.551983 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.551995 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.552018 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.552030 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:04Z","lastTransitionTime":"2026-01-12T13:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.569201 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35b1ac8c-9d11-4c54-98ab-fa848030f05e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1088ffa1a5bf02ca8606518a6f8c9cbeba544651dfafbb34e8860c2a12ffc1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c98177e2b081aadb6fd03620e308bb5d9ff403f1498eb875f7cf6d836dd23aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cea93cd026e7a60c22105833d2c3ada192fc16d45f46e5c9ce2652e94f92fab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c811167080fb15b5c19b8b57f76f4b8c5b2ed87d43d1b320ad024683ab58b65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14411e27d1e7de0627ca0d6f0ecbca70787ef8e9311ff3ffbb923da942e47955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://200ede5d7f69bb74d8e7d1b5081850d73057f7aef07049cab7a4dd1382de0cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://200ede5d7f69bb74d8e7d1b5081850d73057f7aef07049cab7a4dd1382de0cfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04470dc724661e24dc43e182f9c5dc106623e8dfb269280e6dc0fc0710f6a4a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04470dc724661e24dc43e182f9c5dc106623e8dfb269280e6dc0fc0710f6a4a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da31efcbced890b1046b1f058c1c00e4d2788162749c1da32d87c8b59360aa58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da31efcbced890b1046b1f058c1c00e4d2788162749c1da32d87c8b59360aa58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-12T13:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:04Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.604366 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fb543f1489aa79642944188788308013ed9b6bacb720a3ee689b376cbc6a33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:04Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.644717 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e120eaa6bd8e36a0bc509f7877252fbf4b0cebb89222dd193f75502e472fa7af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://f05ca3c8a1887284f1162c44d1b917ad955eb8d77b816e830caddffdf0430383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:04Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.654335 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.654372 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.654385 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.654405 4580 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.654423 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:04Z","lastTransitionTime":"2026-01-12T13:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.686823 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:04Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.724581 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:04Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.757302 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.757364 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.757375 4580 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.757424 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.757436 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:04Z","lastTransitionTime":"2026-01-12T13:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.763622 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8ch98" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f20fb33-a98a-4b04-81b9-5ea16ae9f57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://643e92b14688d35a567c7351e9231a8855ec7d9
704cc97466c2d901c4525108a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nbmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8ch98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:04Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.804114 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9730289-8e50-4a9a-b474-db6c268d5a30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2262814ad3b77a7aecef6dc39226a540c7d7839576606e11c4765c858e81834\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80ca0769a1431fd4c134322feb11db7e54dd85e8f6b18a0ea43da48fe9b05005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c620e4b41d6183e427d9b95acc0e6e20f24998d210c706d93d0e8b08def41b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05c5ad3ad752dde0d33f89e89540f22790aa2905185c704d407fe605655c8e28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0c7ac25add51f8a9be790b9d47bc39155d83c4da0f3b241897d1395686feb68\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-12T13:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0112 13:06:53.362253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0112 13:06:53.363131 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2861103618/tls.crt::/tmp/serving-cert-2861103618/tls.key\\\\\\\"\\\\nI0112 13:06:58.635258 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0112 13:06:58.636943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0112 13:06:58.636960 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0112 13:06:58.636978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0112 13:06:58.636983 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0112 13:06:58.642885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0112 13:06:58.642904 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0112 13:06:58.642919 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642925 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0112 
13:06:58.642931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0112 13:06:58.642934 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0112 13:06:58.642937 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0112 13:06:58.645379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eeac0b697ceba82e51d043f12dcf4c6f0028990416b1ee40c5181232d962192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:04Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.844928 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14cae238-29c1-4657-b3f0-6a834484f48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b1b813e14b2e613be951c247a67eb9b5b29604c639ec2c8a26c652911e0a342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc8b55ba464a72a72e6361e6847c4e8c8b27f317e8eba5d95923fbaf62589880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://259d2e205fd4a46e432a91b0e09646a58b44d6da55b06c6d4ac87010c85babc4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00bb60e0955774504f186a916e89495432d2ea6a6b01cadbbe0cc6871383a030\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:04Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.860030 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.860068 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.860081 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.860096 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.860128 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:04Z","lastTransitionTime":"2026-01-12T13:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.884137 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:04Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.926128 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nnz5s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f39bcc-5a25-4746-988b-2251fd1be8c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56aa8b2b49ab1c35203cc85f8e7cd333d538b5739be0e36db8a3fa8263c079ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m82m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nnz5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:04Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.962842 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:04 crc 
kubenswrapper[4580]: I0112 13:07:04.962880 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.962890 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.962908 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.962918 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:04Z","lastTransitionTime":"2026-01-12T13:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:04 crc kubenswrapper[4580]: I0112 13:07:04.968084 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd4e0810-eddb-47f5-a7dc-beed7b545112\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\
\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hn77p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:04Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.064832 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.065195 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.065206 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.065226 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.065239 4580 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:05Z","lastTransitionTime":"2026-01-12T13:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.127569 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-thp2h"] Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.128057 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-thp2h" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.129676 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.129768 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.131534 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.133817 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.133917 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.136959 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.144019 4580 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35b1ac8c-9d11-4c54-98ab-fa848030f05e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1088ffa1a5bf02ca8606518a6f8c9cbeba544651dfafbb34e8860c2a12ffc1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c98177e2b081aadb6fd03620e308bb5d9ff403f1498eb875f7cf6d836dd23aa
\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cea93cd026e7a60c22105833d2c3ada192fc16d45f46e5c9ce2652e94f92fab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c811167080fb15b5c19b8b57f76f4b8c5b2ed87d43d1b320ad024683ab58b65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14411e27d1e7de0627ca0d6f0ecbca70787ef8e9311ff3ffbb923da942e47955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://200ede5d7f69bb74d8e7d1b5081850d73057f7aef07049cab7a4dd1382de0cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://200ede5d7f69bb74d8e7d1b5081850d73057f7ae
f07049cab7a4dd1382de0cfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04470dc724661e24dc43e182f9c5dc106623e8dfb269280e6dc0fc0710f6a4a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04470dc724661e24dc43e182f9c5dc106623e8dfb269280e6dc0fc0710f6a4a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da31efcbced890b1046b1f058c1c00e4d2788162749c1da32d87c8b59360aa58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da31efcbced890b1046b1f058c1c00e4d2788162749c1da32d87c8b59360aa58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mount
Path\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:05Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.153666 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fb543f1489aa79642944188788308013ed9b6bacb720a3ee689b376cbc6a33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:05Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.164657 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e120eaa6bd8e36a0bc509f7877252fbf4b0cebb89222dd193f75502e472fa7af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f05ca3c8a1887284f1162c44d1b917ad955eb8d77b816e830caddffdf0430383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:05Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.167019 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.167052 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.167062 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.167074 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.167084 4580 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:05Z","lastTransitionTime":"2026-01-12T13:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.205669 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:05Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.211961 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0adac83c-1303-404f-85a1-c7b477da2226-host\") pod \"node-ca-thp2h\" (UID: \"0adac83c-1303-404f-85a1-c7b477da2226\") " pod="openshift-image-registry/node-ca-thp2h" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.212027 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/0adac83c-1303-404f-85a1-c7b477da2226-serviceca\") pod \"node-ca-thp2h\" (UID: \"0adac83c-1303-404f-85a1-c7b477da2226\") " pod="openshift-image-registry/node-ca-thp2h" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.212054 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfhs8\" (UniqueName: \"kubernetes.io/projected/0adac83c-1303-404f-85a1-c7b477da2226-kube-api-access-cfhs8\") pod 
\"node-ca-thp2h\" (UID: \"0adac83c-1303-404f-85a1-c7b477da2226\") " pod="openshift-image-registry/node-ca-thp2h" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.243749 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:05Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.268723 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.268764 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.268774 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.268790 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.268802 4580 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:05Z","lastTransitionTime":"2026-01-12T13:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.281177 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.281193 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:07:05 crc kubenswrapper[4580]: E0112 13:07:05.281261 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 12 13:07:05 crc kubenswrapper[4580]: E0112 13:07:05.281316 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.283449 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8ch98" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f20fb33-a98a-4b04-81b9-5ea16ae9f57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://643e92b14688d35a567c7351e9231a8855ec7d9704cc97466c2d901c4525108a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secret
s/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nbmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8ch98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:05Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.312654 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfhs8\" (UniqueName: \"kubernetes.io/projected/0adac83c-1303-404f-85a1-c7b477da2226-kube-api-access-cfhs8\") pod \"node-ca-thp2h\" (UID: \"0adac83c-1303-404f-85a1-c7b477da2226\") " pod="openshift-image-registry/node-ca-thp2h" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.312692 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0adac83c-1303-404f-85a1-c7b477da2226-host\") pod \"node-ca-thp2h\" (UID: \"0adac83c-1303-404f-85a1-c7b477da2226\") " pod="openshift-image-registry/node-ca-thp2h" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.312742 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/0adac83c-1303-404f-85a1-c7b477da2226-serviceca\") pod \"node-ca-thp2h\" (UID: \"0adac83c-1303-404f-85a1-c7b477da2226\") " pod="openshift-image-registry/node-ca-thp2h" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.312826 4580 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0adac83c-1303-404f-85a1-c7b477da2226-host\") pod \"node-ca-thp2h\" (UID: \"0adac83c-1303-404f-85a1-c7b477da2226\") " pod="openshift-image-registry/node-ca-thp2h" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.313876 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/0adac83c-1303-404f-85a1-c7b477da2226-serviceca\") pod \"node-ca-thp2h\" (UID: \"0adac83c-1303-404f-85a1-c7b477da2226\") " pod="openshift-image-registry/node-ca-thp2h" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.323773 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9730289-8e50-4a9a-b474-db6c268d5a30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2262814ad3b77a7aecef6dc39226a540c7d7839576606e11c4765c858e81834\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80ca0769a1431fd4c134322feb11db7e54dd85e8f6b18a0ea43da48fe9b05005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d3c620e4b41d6183e427d9b95acc0e6e20f24998d210c706d93d0e8b08def41b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05c5ad3ad752dde0d33f89e89540f22790aa2905185c704d407fe605655c8e28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0c7ac25add51f8a9be790b9d47bc39155d83c4da0f3b241897d1395686feb68\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-12T13:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0112 13:06:53.362253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0112 13:06:53.363131 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2861103618/tls.crt::/tmp/serving-cert-2861103618/tls.key\\\\\\\"\\\\nI0112 13:06:58.635258 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0112 13:06:58.636943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0112 13:06:58.636960 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0112 13:06:58.636978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0112 13:06:58.636983 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0112 13:06:58.642885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0112 13:06:58.642904 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0112 13:06:58.642919 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642925 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0112 13:06:58.642931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0112 13:06:58.642934 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0112 13:06:58.642937 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0112 13:06:58.645379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eeac0b697ceba82e51d043f12dcf4c6f0028990416b1ee40c5181232d962192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:05Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.349948 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfhs8\" (UniqueName: \"kubernetes.io/projected/0adac83c-1303-404f-85a1-c7b477da2226-kube-api-access-cfhs8\") pod \"node-ca-thp2h\" (UID: \"0adac83c-1303-404f-85a1-c7b477da2226\") " pod="openshift-image-registry/node-ca-thp2h" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.370812 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.370843 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.370852 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.370866 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.370875 4580 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:05Z","lastTransitionTime":"2026-01-12T13:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.384379 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14cae238-29c1-4657-b3f0-6a834484f48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b1b813e14b2e613be951c247a67eb9b5b29604c639ec2c8a26c652911e0a342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc8b55ba464a72a72e6361e6847c4e8c8b27f317e8eba5d95923fbaf62589880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://259d2e205fd4a46e432a91b0e09
646a58b44d6da55b06c6d4ac87010c85babc4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00bb60e0955774504f186a916e89495432d2ea6a6b01cadbbe0cc6871383a030\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod 
\"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:05Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.425402 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:05Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.437862 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-thp2h" Jan 12 13:07:05 crc kubenswrapper[4580]: W0112 13:07:05.447501 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0adac83c_1303_404f_85a1_c7b477da2226.slice/crio-b56457c7f0ed84eff8640d8dfe5df2abeb3587de2ed7292e01a48d8df3d1d01a WatchSource:0}: Error finding container b56457c7f0ed84eff8640d8dfe5df2abeb3587de2ed7292e01a48d8df3d1d01a: Status 404 returned error can't find the container with id b56457c7f0ed84eff8640d8dfe5df2abeb3587de2ed7292e01a48d8df3d1d01a Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.462072 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" event={"ID":"fd4e0810-eddb-47f5-a7dc-beed7b545112","Type":"ContainerStarted","Data":"f7d21777404eb2fca4761fa8c2ddf26398b56de55ee8361a3e37d8fb3fd8c515"} Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.462308 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.465209 4580 generic.go:334] "Generic (PLEG): container finished" podID="d2223aac-784e-4653-8939-fcbd18c70ba7" containerID="7e5844c48078cc7d6868f4ff81ac1a2bb878892529b11823ecabd49fad4aed60" exitCode=0 Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.465255 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2p6r8" event={"ID":"d2223aac-784e-4653-8939-fcbd18c70ba7","Type":"ContainerDied","Data":"7e5844c48078cc7d6868f4ff81ac1a2bb878892529b11823ecabd49fad4aed60"} Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.467058 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-thp2h" 
event={"ID":"0adac83c-1303-404f-85a1-c7b477da2226","Type":"ContainerStarted","Data":"b56457c7f0ed84eff8640d8dfe5df2abeb3587de2ed7292e01a48d8df3d1d01a"} Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.467356 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nnz5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f39bcc-5a25-4746-988b-2251fd1be8c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56aa8b2b49ab1c35203cc85f8e7cd333d538b5739be0e36db8a3fa8263c079ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountP
ath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m82m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nnz5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:05Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.473938 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.473979 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.474001 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.474016 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.474026 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:05Z","lastTransitionTime":"2026-01-12T13:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.482751 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.506850 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd4e0810-eddb-47f5-a7dc-beed7b545112\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hn77p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:05Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.542893 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thp2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0adac83c-1303-404f-85a1-c7b477da2226\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfhs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:07:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thp2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:05Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.576369 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.576395 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.576403 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 
13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.576418 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.576427 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:05Z","lastTransitionTime":"2026-01-12T13:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.584452 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a82c47afb3ec7afc7fa35ff0e1e85e288f9e1a908459024005a16c0c8f3b0050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:05Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.622563 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3accce5d840e81a67e212ff934059ad73525c6ff3c73ed6ab4c6e2289a4d7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60b7e67369583f18d56633483204d326449c0f74
56afe4b4fd1e7134eff438cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hdz6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:05Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.665973 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2p6r8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2223aac-784e-4653-8939-fcbd18c70ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1dc0fffc41810cdb9a5eeb53b19f6a23d70a8133c6e12b19df575f86a55d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1dc0fffc41810cdb9a5eeb53b19f6a23d70a8133c6e12b19df575f86a55d18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab60600011f08831d514dad04b97fb6b587736b18b55b1bff9a33143b9a92997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab60600011f08831d514dad04b97fb6b587736b18b55b1bff9a33143b9a92997\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2709a93c305db448fb509fbbdf606c297b26f1ae08e6b9b05933c155f59416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff2709a93c305db448fb509fbbdf606c297b26f1ae08e6b9b05933c155f59416\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88f8708217fbcbf532b977d30ab903955722d04a00ba29ded44ce09610140e27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88f8708217fbcbf532b977d30ab903955722d04a00ba29ded44ce09610140e27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-2p6r8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:05Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.678441 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.678468 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.678477 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.678589 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.678611 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:05Z","lastTransitionTime":"2026-01-12T13:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.704696 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e120eaa6bd8e36a0bc509f7877252fbf4b0cebb89222dd193f75502e472fa7af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f05ca3c8a1887284f1162c44d1b917ad955eb8d77b816e830caddffdf0430383\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:05Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.743490 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:05Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.781118 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.781141 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.781151 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.781163 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.781171 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:05Z","lastTransitionTime":"2026-01-12T13:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.784137 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:05Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.827994 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35b1ac8c-9d11-4c54-98ab-fa848030f05e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1088ffa1a5bf02ca8606518a6f8c9cbeba544651dfafbb34e8860c2a12ffc1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c98177e2b081aadb6fd03620e308bb5d9ff403f1498eb875f7cf6d836dd23aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cea93cd026e7a60c22105833d2c3ada192fc16d45f46e5c9ce2652e94f92fab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c811167080fb15b5c19b8b57f76f4b8c5b2ed87d43d1b320ad024683ab58b65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14411e27d1e7de0627ca0d6f0ecbca70787ef8e9311ff3ffbb923da942e47955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://200ede5d7f69bb74d8e7d1b5081850d73057f7aef07049cab7a4dd1382de0cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://200ede5d7f69bb74d8e7d1b5081850d73057f7aef07049cab7a4dd1382de0cfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04470dc724661e24dc43e182f9c5dc106623e8dfb269280e6dc0fc0710f6a4a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04470dc724661e24dc43e182f9c5dc106623e8dfb269280e6dc0fc0710f6a4a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da31efcbced890b1046b1f058c1c00e4d2788162749c1da32d87c8b59360aa58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da31efcbced890b1046b1f058c1c00e4d2788162749c1da32d87c8b59360aa58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:05Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.865179 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fb543f1489aa79642944188788308013ed9b6bacb720a3ee689b376cbc6a33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:05Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.883574 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.883611 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.883624 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.883640 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.883651 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:05Z","lastTransitionTime":"2026-01-12T13:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.903608 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8ch98" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f20fb33-a98a-4b04-81b9-5ea16ae9f57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://643e92b14688d35a567c7351e9231a8855ec7d9704cc97466c2d901c4525108a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"
},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nbmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8ch98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:05Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.942830 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:05Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.983819 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nnz5s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f39bcc-5a25-4746-988b-2251fd1be8c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56aa8b2b49ab1c35203cc85f8e7cd333d538b5739be0e36db8a3fa8263c079ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m82m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nnz5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:05Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.984782 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:05 crc 
kubenswrapper[4580]: I0112 13:07:05.984813 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.984824 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.984839 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:05 crc kubenswrapper[4580]: I0112 13:07:05.984848 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:05Z","lastTransitionTime":"2026-01-12T13:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.027749 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd4e0810-eddb-47f5-a7dc-beed7b545112\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fac5585e690495e9f154b99e6a05f94dd617a57d0826867644b56df00697b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57fdd89443f292661ae2a8f73016f4a7f2889c08ffebd55d67ada2590b4344db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\"
:{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc26f2fe9c241fc3ede61426abd140792056fe45e03192531431303ac9669685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://381c313bb77deef21772fc32104aec4c0325e3493c641e2bf615bd897e58c71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ac8df759fbebae467ffd8c178ca19221cefd5f3c1aa999cd23e5d1e53a6187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b37c3b2535deee762ef305825de0a884e9088e57a34910ad2fcdaeb2d49d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174
f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d21777404eb2fca4761fa8c2ddf26398b56de55ee8361a3e37d8fb3fd8c515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\
\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00ff7f6b5ad3d1798e88f127c9bf71095fcbdfcf8f4338afa385717f1564ebf5\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hn77p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:06Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.062365 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thp2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0adac83c-1303-404f-85a1-c7b477da2226\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfhs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:07:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thp2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:06Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.086467 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.086496 4580 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.086506 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.086528 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.086539 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:06Z","lastTransitionTime":"2026-01-12T13:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.104342 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9730289-8e50-4a9a-b474-db6c268d5a30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2262814ad3b77a7aecef6dc39226a540c7d7839576606e11c4765c858e81834\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80ca0769a1431fd4c134322feb11db7e54dd85e8f6b18a0ea43da48fe9b05005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d3c620e4b41d6183e427d9b95acc0e6e20f24998d210c706d93d0e8b08def41b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05c5ad3ad752dde0d33f89e89540f22790aa2905185c704d407fe605655c8e28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0c7ac25add51f8a9be790b9d47bc39155d83c4da0f3b241897d1395686feb68\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-12T13:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0112 13:06:53.362253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0112 13:06:53.363131 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2861103618/tls.crt::/tmp/serving-cert-2861103618/tls.key\\\\\\\"\\\\nI0112 13:06:58.635258 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0112 13:06:58.636943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0112 13:06:58.636960 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0112 13:06:58.636978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0112 13:06:58.636983 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0112 13:06:58.642885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0112 13:06:58.642904 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0112 13:06:58.642919 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642925 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0112 13:06:58.642931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0112 13:06:58.642934 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0112 13:06:58.642937 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0112 13:06:58.645379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eeac0b697ceba82e51d043f12dcf4c6f0028990416b1ee40c5181232d962192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:06Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.144220 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14cae238-29c1-4657-b3f0-6a834484f48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b1b813e14b2e613be951c247a67eb9b5b29604c639ec2c8a26c652911e0a342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc8b55ba464a72a72e6361e6847c4e8c8b27f317e8eba5d95923fbaf62589880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://259d2e205fd4a46e432a91b0e09646a58b44d6da55b06c6d4ac87010c85babc4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00bb60e0955774504f186a916e89495432d2ea6a6b01cadbbe0cc6871383a030\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:06Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.182721 4580 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3accce5d840e81a67e212ff934059ad73525c6ff3c73ed6ab4c6e2289a4d7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60b7e67369583f18d56633483204d326449c0f7456afe4b4fd1e7134eff438cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hdz6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:06Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.187950 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.187978 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.187988 4580 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.188001 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.188012 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:06Z","lastTransitionTime":"2026-01-12T13:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.225664 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2p6r8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2223aac-784e-4653-8939-fcbd18c70ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1dc0fffc41810cdb9a5eeb53b19f6a23d70a8133c6e12b19df575f86a55d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1dc0fffc41810cdb9a5eeb53b19f6a23d70a8133c6e12b19df575f86a55d18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab60600011f08831d514dad04b97fb6b587736b18b55b1bff9a33143b9a92997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab60600011f08831d514dad04b97fb6b587736b18b55b1bff9a33143b9a92997\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2709a93c305db448fb509fbbdf606c297b2
6f1ae08e6b9b05933c155f59416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff2709a93c305db448fb509fbbdf606c297b26f1ae08e6b9b05933c155f59416\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88f8708217fbcbf532b977d30ab903955722d04a00ba29ded44ce09610140e27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88f8708217fbcbf532b977d30ab903955722d04a00ba29ded44ce09610140e27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
1-12T13:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5844c48078cc7d6868f4ff81ac1a2bb878892529b11823ecabd49fad4aed60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e5844c48078cc7d6868f4ff81ac1a2bb878892529b11823ecabd49fad4aed60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2p6r8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:06Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.263015 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a82c47afb3ec7afc7fa35ff0e1e85e288f9e1a908459024005a16c0c8f3b0050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-12T13:07:06Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.280617 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 12 13:07:06 crc kubenswrapper[4580]: E0112 13:07:06.280708 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.289187 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.289217 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.289226 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.289240 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.289250 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:06Z","lastTransitionTime":"2026-01-12T13:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.391601 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.391630 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.391640 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.391651 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.391659 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:06Z","lastTransitionTime":"2026-01-12T13:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.472973 4580 generic.go:334] "Generic (PLEG): container finished" podID="d2223aac-784e-4653-8939-fcbd18c70ba7" containerID="7d2e02e66890bca8171c7112c74521a43c3458f07890228426f04c2bdfad4599" exitCode=0 Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.473050 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2p6r8" event={"ID":"d2223aac-784e-4653-8939-fcbd18c70ba7","Type":"ContainerDied","Data":"7d2e02e66890bca8171c7112c74521a43c3458f07890228426f04c2bdfad4599"} Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.482070 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-thp2h" event={"ID":"0adac83c-1303-404f-85a1-c7b477da2226","Type":"ContainerStarted","Data":"0a871f86fe29e275615cf2f7f0130151c5ed56d410a0f18f5267adf08be33f84"} Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.482127 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.482207 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.493682 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.493705 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.493715 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.493726 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:06 crc 
kubenswrapper[4580]: I0112 13:07:06.493735 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:06Z","lastTransitionTime":"2026-01-12T13:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.495860 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35b1ac8c-9d11-4c54-98ab-fa848030f05e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1088ffa1a5bf02ca8606518a6f8c9cbeba544651dfafbb34e8860c2a12ffc1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c98177e2b081aadb6fd03620e308bb5d9ff403f1498eb875f7cf6d836dd23aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cea93cd026e7a60c22105833d2c3ada192fc16d45f46e5c9ce2652e94f92fab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\
\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c811167080fb15b5c19b8b57f76f4b8c5b2ed87d43d1b320ad024683ab58b65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14411e27d1e7de0627ca0d6f0ecbca70787ef8e9311ff3ffbb923da942e47955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://200ede5d7f69bb74d8e7d1b5081850d73057f7aef07049cab7a4dd1382de0cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://200ede5d7f69bb74d8e7d1b5081850d73057f7aef07049cab7a4dd1382de0cfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04470dc724661e24dc43e182f9c5dc106623e8dfb269280e6dc0fc0710f6a4a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04470dc724661e24dc43e182f9c5dc106623e8dfb269280e6dc0fc0710f6a4a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da31efcbced890b1046b1f058c1c00e4d2788162749c1da32d87c8b59360aa58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68
77441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da31efcbced890b1046b1f058c1c00e4d2788162749c1da32d87c8b59360aa58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:06Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.507839 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.508499 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fb543f1489aa79642944188788308013ed9b6bacb720a3ee689b376cbc6a33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:06Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.525381 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e120eaa6bd8e36a0bc509f7877252fbf4b0cebb89222dd193f75502e472fa7af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://f05ca3c8a1887284f1162c44d1b917ad955eb8d77b816e830caddffdf0430383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:06Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.539533 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:06Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.551240 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:06Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.559461 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8ch98" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f20fb33-a98a-4b04-81b9-5ea16ae9f57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://643e92b14688d35a567c7351e9231a8855ec7d9704cc97466c2d901c4525108a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nbmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8ch98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:06Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.568058 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thp2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0adac83c-1303-404f-85a1-c7b477da2226\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfhs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:07:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thp2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:06Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.584275 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9730289-8e50-4a9a-b474-db6c268d5a30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2262814ad3b77a7aecef6dc39226a540c7d7839576606e11c4765c858e81834\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80ca0769a1431fd4c134322feb11db7e54dd85e8f6b18a0ea43da48fe9b05005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c620e4b41d6183e427d9b95acc0e6e20f24998d210c706d93d0e8b08def41b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05c5ad3ad752dde0d33f89e89540f22790aa2905185c704d407fe605655c8e28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0c7ac25add51f8a9be790b9d47bc39155d83c4da0f3b241897d1395686feb68\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-12T13:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0112 13:06:53.362253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0112 13:06:53.363131 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2861103618/tls.crt::/tmp/serving-cert-2861103618/tls.key\\\\\\\"\\\\nI0112 13:06:58.635258 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0112 13:06:58.636943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0112 13:06:58.636960 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0112 13:06:58.636978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0112 13:06:58.636983 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0112 13:06:58.642885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0112 13:06:58.642904 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0112 13:06:58.642919 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642925 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0112 
13:06:58.642931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0112 13:06:58.642934 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0112 13:06:58.642937 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0112 13:06:58.645379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eeac0b697ceba82e51d043f12dcf4c6f0028990416b1ee40c5181232d962192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:06Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.596378 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.596406 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.596416 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.596430 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.596441 4580 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:06Z","lastTransitionTime":"2026-01-12T13:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.624643 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14cae238-29c1-4657-b3f0-6a834484f48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b1b813e14b2e613be951c247a67eb9b5b29604c639ec2c8a26c652911e0a342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc8b55ba464a72a72e6361e6847c4e8c8b27f317e8eba5d95923fbaf62589880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://259d2e205fd4a46e432a91b0e09646a58b44d6da55b06c6d4ac87010c85babc4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://00bb60e0955774504f186a916e89495432d2ea6a6b01cadbbe0cc6871383a030\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:06Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.664047 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:06Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.698310 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.698355 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.698371 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.698394 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.698405 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:06Z","lastTransitionTime":"2026-01-12T13:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.705883 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nnz5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f39bcc-5a25-4746-988b-2251fd1be8c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56aa8b2b49ab1c35203cc85f8e7cd333d538b5739be0e36db8a3fa8263c079ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m82m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nnz5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:06Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.747844 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd4e0810-eddb-47f5-a7dc-beed7b545112\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fac5585e690495e9f154b99e6a05f94dd617a57d0826867644b56df00697b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57fdd89443f292661ae2a8f73016f4a7f2889c08ffebd55d67ada2590b4344db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc26f2fe9c241fc3ede61426abd140792056fe45e03192531431303ac9669685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://381c313bb77deef21772fc32104aec4c0325e3493c641e2bf615bd897e58c71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ac8df759fbebae467ffd8c178ca19221cefd5f3c1aa999cd23e5d1e53a6187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b37c3b2535deee762ef305825de0a884e9088e57a34910ad2fcdaeb2d49d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d21777404eb2fca4761fa8c2ddf26398b56de55ee8361a3e37d8fb3fd8c515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00ff7f6b5ad3d1798e88f127c9bf71095fcbdfcf8f4338afa385717f1564ebf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hn77p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:06Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.784345 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a82c47afb3ec7afc7fa35ff0e1e85e288f9e1a908459024005a16c0c8f3b0050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:06Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.801091 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.801139 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.801150 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.801166 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.801178 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:06Z","lastTransitionTime":"2026-01-12T13:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.823780 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3accce5d840e81a67e212ff934059ad73525c6ff3c73ed6ab4c6e2289a4d7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60b7e67369583f18d56633483204d326449c0f7456afe4b4fd1e7134eff438cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hdz6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:06Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.865921 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2p6r8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2223aac-784e-4653-8939-fcbd18c70ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1dc0fffc41810cdb9a5eeb53b19f6a23d70a8133c6e12b19df575f86a55d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1dc0fffc41810cdb9a5eeb53b19f6a23d70a8133c6e12b19df575f86a55d18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab60600011f08831d514dad04b97fb6b587736b18b55b1bff9a33143b9a92997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab60600011f08831d514dad04b97fb6b587736b18b55b1bff9a33143b9a92997\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2709a93c305db448fb509fbbdf606c297b26f1ae08e6b9b05933c155f59416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff2709a93c305db448fb509fbbdf606c297b26f1ae08e6b9b05933c155f59416\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88f8708217fbcbf532b977d30ab903955722d04a00ba29ded44ce09610140e27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88f8708217fbcbf532b977d30ab903955722d04a00ba29ded44ce09610140e27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5844c48078cc7d6868f4ff81ac1a2bb878892529b11823ecabd49fad4aed60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e5844c48078cc7d6868f4ff81ac1a2bb878892529b11823ecabd49fad4aed60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2e02e66890bca8171c7112c74521a43c3458f07890228426f04c2bdfad4599\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d2e02e66890bca8171c7112c74521a43c3458f07890228426f04c2bdfad4599\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2p6r8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:06Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.903029 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.903062 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.903072 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.903087 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.903097 4580 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:06Z","lastTransitionTime":"2026-01-12T13:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.906358 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a82c47afb3ec7afc7fa35ff0e1e85e288f9e1a908459024005a16c0c8f3b0050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"
iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:06Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.924225 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.924366 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 12 13:07:06 crc kubenswrapper[4580]: E0112 13:07:06.924401 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-12 13:07:14.924373158 +0000 UTC m=+33.968591848 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.924443 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.924479 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.924527 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 12 13:07:06 crc kubenswrapper[4580]: E0112 13:07:06.924514 4580 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 12 
13:07:06 crc kubenswrapper[4580]: E0112 13:07:06.924604 4580 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 12 13:07:06 crc kubenswrapper[4580]: E0112 13:07:06.924604 4580 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 12 13:07:06 crc kubenswrapper[4580]: E0112 13:07:06.924619 4580 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 12 13:07:06 crc kubenswrapper[4580]: E0112 13:07:06.924620 4580 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 12 13:07:06 crc kubenswrapper[4580]: E0112 13:07:06.924639 4580 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 12 13:07:06 crc kubenswrapper[4580]: E0112 13:07:06.924578 4580 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 12 13:07:06 crc kubenswrapper[4580]: E0112 13:07:06.924651 4580 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 12 13:07:06 crc kubenswrapper[4580]: 
E0112 13:07:06.924655 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-12 13:07:14.924643136 +0000 UTC m=+33.968861826 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 12 13:07:06 crc kubenswrapper[4580]: E0112 13:07:06.924699 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-12 13:07:14.924689322 +0000 UTC m=+33.968908013 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 12 13:07:06 crc kubenswrapper[4580]: E0112 13:07:06.924711 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-12 13:07:14.924704933 +0000 UTC m=+33.968923622 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 12 13:07:06 crc kubenswrapper[4580]: E0112 13:07:06.924724 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-12 13:07:14.924718147 +0000 UTC m=+33.968936837 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.944570 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3accce5d840e81a67e212ff934059ad73525c6ff3c73ed6ab4c6e2289a4d7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60b7e67369583f18d56633483204d326449c0f74
56afe4b4fd1e7134eff438cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hdz6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:06Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:06 crc kubenswrapper[4580]: I0112 13:07:06.985461 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2p6r8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2223aac-784e-4653-8939-fcbd18c70ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1dc0fffc41810cdb9a5eeb53b19f6a23d70a8133c6e12b19df575f86a55d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1dc0fffc41810cdb9a5eeb53b19f6a23d70a8133c6e12b19df575f86a55d18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab60600011f08831d514dad04b97fb6b587736b18b55b1bff9a33143b9a92997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab60600011f08831d514dad04b97fb6b587736b18b55b1bff9a33143b9a92997\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2709a93c305db448fb509fbbdf606c297b26f1ae08e6b9b05933c155f59416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff2709a93c305db448fb509fbbdf606c297b26f1ae08e6b9b05933c155f59416\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88f8708217fbcbf532b977d30ab903955722d04a00ba29ded44ce09610140e27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88f8708217fbcbf532b977d30ab903955722d04a00ba29ded44ce09610140e27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5844c48078cc7d6868f4ff81ac1a2bb878892529b11823ecabd49fad4aed60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e5844c48078cc7d6868f4ff81ac1a2bb878892529b11823ecabd49fad4aed60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2e02e66890bca8171c7112c74521a43c3458f07890228426f04c2bdfad4599\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d2e02e66890bca8171c7112c74521a43c3458f07890228426f04c2bdfad4599\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2p6r8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:06Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.005593 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.005631 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.005643 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.005661 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.005674 4580 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:07Z","lastTransitionTime":"2026-01-12T13:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.029415 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35b1ac8c-9d11-4c54-98ab-fa848030f05e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1088ffa1a5bf02ca8606518a6f8c9cbeba544651dfafbb34e8860c2a12ffc1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c98177e2b081aadb6fd03620e308bb5d9ff403f1498eb875f7cf6d836dd23aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cea93cd026e7a60c22105833d2c3ada192fc16d45f46e5c9ce2652e94f92fab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-
dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c811167080fb15b5c19b8b57f76f4b8c5b2ed87d43d1b320ad024683ab58b65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14411e27d1e7de0627ca0d6f0ecbca70787ef8e9311ff3ffbb923da942e47955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://200ede5d7f69bb74
d8e7d1b5081850d73057f7aef07049cab7a4dd1382de0cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://200ede5d7f69bb74d8e7d1b5081850d73057f7aef07049cab7a4dd1382de0cfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04470dc724661e24dc43e182f9c5dc106623e8dfb269280e6dc0fc0710f6a4a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04470dc724661e24dc43e182f9c5dc106623e8dfb269280e6dc0fc0710f6a4a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da31efcbced890b1046b1f058c1c00e4d2788162749c1da32d87c8b59360aa58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6
7314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da31efcbced890b1046b1f058c1c00e4d2788162749c1da32d87c8b59360aa58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:07Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.064387 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fb543f1489aa79642944188788308013ed9b6bacb720a3ee689b376cbc6a33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:07Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.104237 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e120eaa6bd8e36a0bc509f7877252fbf4b0cebb89222dd193f75502e472fa7af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://f05ca3c8a1887284f1162c44d1b917ad955eb8d77b816e830caddffdf0430383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:07Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.107399 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.107432 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.107442 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.107455 4580 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.107466 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:07Z","lastTransitionTime":"2026-01-12T13:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.144190 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:07Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.184414 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:07Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.209455 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.209485 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.209494 4580 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.209509 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.209518 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:07Z","lastTransitionTime":"2026-01-12T13:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.222572 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8ch98" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f20fb33-a98a-4b04-81b9-5ea16ae9f57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://643e92b14688d35a567c7351e9231a8855ec7d9
704cc97466c2d901c4525108a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nbmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8ch98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:07Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.263141 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thp2h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0adac83c-1303-404f-85a1-c7b477da2226\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a871f86fe29e275615cf2f7f0130151c5ed56d410a0f18f5267adf08be33f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfhs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:07:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thp2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:07Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.281440 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:07:07 crc kubenswrapper[4580]: E0112 13:07:07.281548 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.281632 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 12 13:07:07 crc kubenswrapper[4580]: E0112 13:07:07.281779 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.304055 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9730289-8e50-4a9a-b474-db6c268d5a30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2262814ad3b77a7aecef6dc39226a540c7d7839576606e11c4765c858e81834\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80ca0769a1431fd4c134322feb11db7e54dd85e8f6b18a0ea43da48fe9b05005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d3c620e4b41d6183e427d9b95acc0e6e20f24998d210c706d93d0e8b08def41b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05c5ad3ad752dde0d33f89e89540f22790aa2905185c704d407fe605655c8e28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0c7ac25add51f8a9be790b9d47bc39155d83c4da0f3b241897d1395686feb68\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-12T13:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0112 13:06:53.362253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0112 13:06:53.363131 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2861103618/tls.crt::/tmp/serving-cert-2861103618/tls.key\\\\\\\"\\\\nI0112 13:06:58.635258 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0112 13:06:58.636943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0112 13:06:58.636960 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0112 13:06:58.636978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0112 13:06:58.636983 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0112 13:06:58.642885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0112 13:06:58.642904 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0112 13:06:58.642919 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642925 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0112 13:06:58.642931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0112 13:06:58.642934 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0112 13:06:58.642937 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0112 13:06:58.645379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eeac0b697ceba82e51d043f12dcf4c6f0028990416b1ee40c5181232d962192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:07Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.311288 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.311325 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.311336 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.311350 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.311360 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:07Z","lastTransitionTime":"2026-01-12T13:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.343918 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14cae238-29c1-4657-b3f0-6a834484f48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b1b813e14b2e613be951c247a67eb9b5b29604c639ec2c8a26c652911e0a342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc8b55ba464
a72a72e6361e6847c4e8c8b27f317e8eba5d95923fbaf62589880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://259d2e205fd4a46e432a91b0e09646a58b44d6da55b06c6d4ac87010c85babc4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00bb60e0955774504f186a916e89495432d2ea6a6b01cadbbe0cc6871383a030\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:07Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.383445 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:07Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.413140 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.413181 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.413191 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.413207 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.413217 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:07Z","lastTransitionTime":"2026-01-12T13:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.424207 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nnz5s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f39bcc-5a25-4746-988b-2251fd1be8c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56aa8b2b49ab1c35203cc85f8e7cd333d538b5739be0e36db8a3fa8263c079ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m82m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nnz5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:07Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.467418 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd4e0810-eddb-47f5-a7dc-beed7b545112\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fac5585e690495e9f154b99e6a05f94dd617a57d0826867644b56df00697b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57fdd89443f292661ae2a8f73016f4a7f2889c08ffebd55d67ada2590b4344db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc26f2fe9c241fc3ede61426abd140792056fe45e03192531431303ac9669685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://381c313bb77deef21772fc32104aec4c0325e3493c641e2bf615bd897e58c71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ac8df759fbebae467ffd8c178ca19221cefd5f3c1aa999cd23e5d1e53a6187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b37c3b2535deee762ef305825de0a884e9088e57a34910ad2fcdaeb2d49d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d21777404eb2fca4761fa8c2ddf26398b56de55ee8361a3e37d8fb3fd8c515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00ff7f6b5ad3d1798e88f127c9bf71095fcbdfcf8f4338afa385717f1564ebf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hn77p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:07Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.486060 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hn77p_fd4e0810-eddb-47f5-a7dc-beed7b545112/ovnkube-controller/0.log" Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.489050 4580 generic.go:334] "Generic (PLEG): container finished" podID="fd4e0810-eddb-47f5-a7dc-beed7b545112" containerID="f7d21777404eb2fca4761fa8c2ddf26398b56de55ee8361a3e37d8fb3fd8c515" exitCode=1 Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.489140 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" event={"ID":"fd4e0810-eddb-47f5-a7dc-beed7b545112","Type":"ContainerDied","Data":"f7d21777404eb2fca4761fa8c2ddf26398b56de55ee8361a3e37d8fb3fd8c515"} Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.489653 4580 scope.go:117] "RemoveContainer" containerID="f7d21777404eb2fca4761fa8c2ddf26398b56de55ee8361a3e37d8fb3fd8c515" Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.493013 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2p6r8" event={"ID":"d2223aac-784e-4653-8939-fcbd18c70ba7","Type":"ContainerStarted","Data":"81fbec7b59dcc9c80a97b122e2b0e738fbbfb3eafca1bf9989fe743f28573191"} Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.503048 4580 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a82c47afb3ec7afc7fa35ff0e1e85e288f9e1a908459024005a16c0c8f3b0050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:07Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.515281 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.515311 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.515320 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.515334 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.515344 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:07Z","lastTransitionTime":"2026-01-12T13:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.543596 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3accce5d840e81a67e212ff934059ad73525c6ff3c73ed6ab4c6e2289a4d7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60b7e67369583f18d56633483204d326449c0f7456afe4b4fd1e7134eff438cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hdz6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:07Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.585226 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2p6r8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2223aac-784e-4653-8939-fcbd18c70ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1dc0fffc41810cdb9a5eeb53b19f6a23d70a8133c6e12b19df575f86a55d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1dc0fffc41810cdb9a5eeb53b19f6a23d70a8133c6e12b19df575f86a55d18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab60600011f08831d514dad04b97fb6b587736b18b55b1bff9a33143b9a92997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab60600011f08831d514dad04b97fb6b587736b18b55b1bff9a33143b9a92997\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2709a93c305db448fb509fbbdf606c297b26f1ae08e6b9b05933c155f59416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff2709a93c305db448fb509fbbdf606c297b26f1ae08e6b9b05933c155f59416\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88f8708217fbcbf532b977d30ab903955722d04a00ba29ded44ce09610140e27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88f8708217fbcbf532b977d30ab903955722d04a00ba29ded44ce09610140e27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5844c48078cc7d6868f4ff81ac1a2bb878892529b11823ecabd49fad4aed60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e5844c48078cc7d6868f4ff81ac1a2bb878892529b11823ecabd49fad4aed60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2e02e66890bca8171c7112c74521a43c3458f07890228426f04c2bdfad4599\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d2e02e66890bca8171c7112c74521a43c3458f07890228426f04c2bdfad4599\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2p6r8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:07Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.617181 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.617212 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.617221 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.617236 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.617245 4580 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:07Z","lastTransitionTime":"2026-01-12T13:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.626670 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:07Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.669172 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35b1ac8c-9d11-4c54-98ab-fa848030f05e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1088ffa1a5bf02ca8606518a6f8c9cbeba544651dfafbb34e8860c2a12ffc1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c98177e2b081aadb6fd03620e308bb5d9ff403f1498eb875f7cf6d836dd23aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cea93cd026e7a60c22105833d2c3ada192fc16d45f46e5c9ce2652e94f92fab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c811167080fb15b5c19b8b57f76f4b8c5b2ed87d43d1b320ad024683ab58b65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14411e27d1e7de0627ca0d6f0ecbca70787ef8e9311ff3ffbb923da942e47955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://200ede5d7f69bb74d8e7d1b5081850d73057f7aef07049cab7a4dd1382de0cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://200ede5d7f69bb74d8e7d1b5081850d73057f7aef07049cab7a4dd1382de0cfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04470dc724661e24dc43e182f9c5dc106623e8dfb269280e6dc0fc0710f6a4a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04470dc724661e24dc43e182f9c5dc106623e8dfb269280e6dc0fc0710f6a4a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da31efcbced890b1046b1f058c1c00e4d2788162749c1da32d87c8b59360aa58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da31efcbced890b1046b1f058c1c00e4d2788162749c1da32d87c8b59360aa58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:07Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.704916 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fb543f1489aa79642944188788308013ed9b6bacb720a3ee689b376cbc6a33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:07Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.720071 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.720124 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.720134 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.720152 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.720161 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:07Z","lastTransitionTime":"2026-01-12T13:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.749035 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e120eaa6bd8e36a0bc509f7877252fbf4b0cebb89222dd193f75502e472fa7af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\
\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f05ca3c8a1887284f1162c44d1b917ad955eb8d77b816e830caddffdf0430383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:07Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.786236 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:07Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.822595 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.822634 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.822644 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.822659 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.822669 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:07Z","lastTransitionTime":"2026-01-12T13:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.825042 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8ch98" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f20fb33-a98a-4b04-81b9-5ea16ae9f57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://643e92b14688d35a567c7351e9231a8855ec7d9704cc97466c2d901c4525108a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nbmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8ch98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:07Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.867964 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd4e0810-eddb-47f5-a7dc-beed7b545112\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fac5585e690495e9f154b99e6a05f94dd617a57d0826867644b56df00697b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57fdd89443f292661ae2a8f73016f4a7f2889c08ffebd55d67ada2590b4344db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc26f2fe9c241fc3ede61426abd140792056fe45e03192531431303ac9669685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://381c313bb77deef21772fc32104aec4c0325e3493c641e2bf615bd897e58c71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ac8df759fbebae467ffd8c178ca19221cefd5f3c1aa999cd23e5d1e53a6187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b37c3b2535deee762ef305825de0a884e9088e57a34910ad2fcdaeb2d49d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d21777404eb2fca4761fa8c2ddf26398b56de55ee8361a3e37d8fb3fd8c515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7d21777404eb2fca4761fa8c2ddf26398b56de55ee8361a3e37d8fb3fd8c515\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-12T13
:07:06Z\\\",\\\"message\\\":\\\"where column _uuid == {5e50827b-d271-442b-b8a7-7f33b2cd6b11}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0112 13:07:06.941890 5796 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/packageserver-service]} name:Service_openshift-operator-lifecycle-manager/packageserver-service_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.153:5443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5e50827b-d271-442b-b8a7-7f33b2cd6b11}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0112 13:07:06.941775 5796 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0112 13:07:06.942021 5796 factory.go:656] Stopping watch factory\\\\nI0112 13:07:06.942022 5796 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0112 13:07:06.942062 5796 ovnkube.go:599] Stopped ovnkube\\\\nI0112 13:07:06.942071 5796 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0112 13:07:06.942118 5796 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0112 13:07:06.942248 5796 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00ff7f6b5ad3d1798e88f127c9bf71095fcbdfcf8f4338afa385717f1564ebf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16
eb0c447b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hn77p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:07Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.903671 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thp2h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0adac83c-1303-404f-85a1-c7b477da2226\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a871f86fe29e275615cf2f7f0130151c5ed56d410a0f18f5267adf08be33f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfhs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:07:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thp2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:07Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.925361 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.925606 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.925616 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.925632 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.925642 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:07Z","lastTransitionTime":"2026-01-12T13:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.945182 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9730289-8e50-4a9a-b474-db6c268d5a30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2262814ad3b77a7aecef6dc39226a540c7d7839576606e11c4765c858e81834\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80ca0769a1431fd4c134322feb11db7e54dd85e8f6b18a0ea43da48fe9b05005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d3c620e4b41d6183e427d9b95acc0e6e20f24998d210c706d93d0e8b08def41b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05c5ad3ad752dde0d33f89e89540f22790aa2905185c704d407fe605655c8e28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0c7ac25add51f8a9be790b9d47bc39155d83c4da0f3b241897d1395686feb68\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-12T13:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0112 13:06:53.362253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0112 13:06:53.363131 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2861103618/tls.crt::/tmp/serving-cert-2861103618/tls.key\\\\\\\"\\\\nI0112 13:06:58.635258 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0112 13:06:58.636943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0112 13:06:58.636960 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0112 13:06:58.636978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0112 13:06:58.636983 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0112 13:06:58.642885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0112 13:06:58.642904 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0112 13:06:58.642919 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642925 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0112 13:06:58.642931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0112 13:06:58.642934 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0112 13:06:58.642937 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0112 13:06:58.645379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eeac0b697ceba82e51d043f12dcf4c6f0028990416b1ee40c5181232d962192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:07Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:07 crc kubenswrapper[4580]: I0112 13:07:07.985570 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14cae238-29c1-4657-b3f0-6a834484f48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b1b813e14b2e613be951c247a67eb9b5b29604c639ec2c8a26c652911e0a342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc8b55ba464a72a72e6361e6847c4e8c8b27f317e8eba5d95923fbaf62589880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://259d2e205fd4a46e432a91b0e09646a58b44d6da55b06c6d4ac87010c85babc4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00bb60e0955774504f186a916e89495432d2ea6a6b01cadbbe0cc6871383a030\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:07Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.023736 4580 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:08Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.027585 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.027625 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.027635 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.027651 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.027660 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:08Z","lastTransitionTime":"2026-01-12T13:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.064017 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nnz5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f39bcc-5a25-4746-988b-2251fd1be8c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56aa8b2b49ab1c35203cc85f8e7cd333d538b5739be0e36db8a3fa8263c079ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m82m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nnz5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:08Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.104857 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3accce5d840e81a67e212ff934059ad73525c6ff3c73ed6ab4c6e2289a4d7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"}
,{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60b7e67369583f18d56633483204d326449c0f7456afe4b4fd1e7134eff438cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hdz6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:08Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.129975 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:08 crc 
kubenswrapper[4580]: I0112 13:07:08.130003 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.130015 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.130029 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.130040 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:08Z","lastTransitionTime":"2026-01-12T13:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.145626 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2p6r8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2223aac-784e-4653-8939-fcbd18c70ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81fbec7b59dcc9c80a97b122e2b0e738fbbfb3eafca1bf9989fe743f28573191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1dc0fffc41810cdb9a5eeb53b19f6a23d70a8133c6e12b19df575f86a55d18\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1dc0fffc41810cdb9a5eeb53b19f6a23d70a8133c6e12b19df575f86a55d18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab60600011f08831d514dad04b97fb6b587736b18b55b1bff9a33143b9a92997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab60600011f08831d514dad04b97fb6b587736b18b55b1bff9a33143b9a92997\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2709a93c305db448fb509fbbdf606c297b26f1ae08e6b9b05933c155f59416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff2709a93c305db448fb509fbbdf606c297b26f1ae08e6b9b05933c155f59416\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88f87
08217fbcbf532b977d30ab903955722d04a00ba29ded44ce09610140e27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88f8708217fbcbf532b977d30ab903955722d04a00ba29ded44ce09610140e27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5844c48078cc7d6868f4ff81ac1a2bb878892529b11823ecabd49fad4aed60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e5844c48078cc7d6868f4ff81ac1a2bb878892529b11823ecabd49fad4aed60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:04Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2e02e66890bca8171c7112c74521a43c3458f07890228426f04c2bdfad4599\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d2e02e66890bca8171c7112c74521a43c3458f07890228426f04c2bdfad4599\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2p6r8\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:08Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.183593 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a82c47afb3ec7afc7fa35ff0e1e85e288f9e1a908459024005a16c0c8f3b0050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:08Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.224514 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e120eaa6bd8e36a0bc509f7877252fbf4b0cebb89222dd193f75502e472fa7af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f05ca3c8a1887284f1162c44d1b917ad955eb8d77b816e830caddffdf0430383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:08Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.231829 4580 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.231872 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.231882 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.231896 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.231906 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:08Z","lastTransitionTime":"2026-01-12T13:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.263461 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:08Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.280787 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 12 13:07:08 crc kubenswrapper[4580]: E0112 13:07:08.280878 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.304241 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:08Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.333678 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.333706 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.333715 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.333731 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.333740 4580 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:08Z","lastTransitionTime":"2026-01-12T13:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.349587 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35b1ac8c-9d11-4c54-98ab-fa848030f05e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1088ffa1a5bf02ca8606518a6f8c9cbeba544651dfafbb34e8860c2a12ffc1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c98177e2b081aadb6fd03620e308bb5d9ff403f1498eb875f7cf6d836dd23aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cea93cd026e7a60c22105833d2c3ada192fc16d45f46e5c9ce2652e94f92fab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c811167080fb15b5c19b8b57f76f4b8c5b2ed87d43d1b320ad024683ab58b65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14411e27d1e7de0627ca0d6f0ecbca70787ef8e9311ff3ffbb923da942e47955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://200ede5d7f69bb74d8e7d1b5081850d73057f7aef07049cab7a4dd1382de0cfe\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://200ede5d7f69bb74d8e7d1b5081850d73057f7aef07049cab7a4dd1382de0cfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04470dc724661e24dc43e182f9c5dc106623e8dfb269280e6dc0fc0710f6a4a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04470dc724661e24dc43e182f9c5dc106623e8dfb269280e6dc0fc0710f6a4a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da31efcbced890b1046b1f058c1c00e4d2788162749c1da32d87c8b59360aa58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da31efcbced890b1046b1f058c1c00e4d2788162749c1da32d87c8b59360aa58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:08Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.391816 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fb543f1489aa79642944188788308013ed9b6bacb720a3ee689b376cbc6a33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:08Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.423710 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8ch98" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f20fb33-a98a-4b04-81b9-5ea16ae9f57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://643e92b14688d35a567c7351e9231a8855ec7d9704cc97466c2d901c4525108a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nbmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8ch98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:08Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.436486 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.436522 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.436539 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.436554 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.436563 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:08Z","lastTransitionTime":"2026-01-12T13:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.465078 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:08Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.497552 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hn77p_fd4e0810-eddb-47f5-a7dc-beed7b545112/ovnkube-controller/1.log" Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.498082 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hn77p_fd4e0810-eddb-47f5-a7dc-beed7b545112/ovnkube-controller/0.log" Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.500319 4580 generic.go:334] "Generic (PLEG): container finished" podID="fd4e0810-eddb-47f5-a7dc-beed7b545112" containerID="68af564f5ecc4ca0c06683f7ec46ae5ffc5e2c4a9def47ed4048db3ba923f575" exitCode=1 Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.500356 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" event={"ID":"fd4e0810-eddb-47f5-a7dc-beed7b545112","Type":"ContainerDied","Data":"68af564f5ecc4ca0c06683f7ec46ae5ffc5e2c4a9def47ed4048db3ba923f575"} Jan 12 13:07:08 
crc kubenswrapper[4580]: I0112 13:07:08.500412 4580 scope.go:117] "RemoveContainer" containerID="f7d21777404eb2fca4761fa8c2ddf26398b56de55ee8361a3e37d8fb3fd8c515" Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.501024 4580 scope.go:117] "RemoveContainer" containerID="68af564f5ecc4ca0c06683f7ec46ae5ffc5e2c4a9def47ed4048db3ba923f575" Jan 12 13:07:08 crc kubenswrapper[4580]: E0112 13:07:08.501237 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-hn77p_openshift-ovn-kubernetes(fd4e0810-eddb-47f5-a7dc-beed7b545112)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" podUID="fd4e0810-eddb-47f5-a7dc-beed7b545112" Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.508167 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nnz5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f39bcc-5a25-4746-988b-2251fd1be8c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56aa8b2b49ab1c35203cc
85f8e7cd333d538b5739be0e36db8a3fa8263c079ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m82m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nnz5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:08Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.537878 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.537923 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.537932 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.537946 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.537957 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:08Z","lastTransitionTime":"2026-01-12T13:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.547718 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd4e0810-eddb-47f5-a7dc-beed7b545112\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fac5585e690495e9f154b99e6a05f94dd617a57d0826867644b56df00697b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57fdd89443f292661ae2a8f73016f4a7f2889c08ffebd55d67ada2590b4344db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc26f2fe9c241fc3ede61426abd140792056fe45e03192531431303ac9669685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://381c313bb77deef21772fc32104aec4c0325e3493c641e2bf615bd897e58c71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ac8df759fbebae467ffd8c178ca19221cefd5f3c1aa999cd23e5d1e53a6187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b37c3b2535deee762ef305825de0a884e9088e57a34910ad2fcdaeb2d49d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d21777404eb2fca4761fa8c2ddf26398b56de55ee8361a3e37d8fb3fd8c515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7d21777404eb2fca4761fa8c2ddf26398b56de55ee8361a3e37d8fb3fd8c515\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"message\\\":\\\"where column _uuid == {5e50827b-d271-442b-b8a7-7f33b2cd6b11}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0112 13:07:06.941890 5796 transact.go:42] 
Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/packageserver-service]} name:Service_openshift-operator-lifecycle-manager/packageserver-service_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.153:5443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5e50827b-d271-442b-b8a7-7f33b2cd6b11}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0112 13:07:06.941775 5796 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0112 13:07:06.942021 5796 factory.go:656] Stopping watch factory\\\\nI0112 13:07:06.942022 5796 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0112 13:07:06.942062 5796 ovnkube.go:599] Stopped ovnkube\\\\nI0112 13:07:06.942071 5796 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0112 13:07:06.942118 5796 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0112 13:07:06.942248 5796 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00ff7f6b5ad3d1798e88f127c9bf71095fcbdfcf8f4338afa385717f1564ebf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16
eb0c447b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hn77p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:08Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.582721 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thp2h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0adac83c-1303-404f-85a1-c7b477da2226\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a871f86fe29e275615cf2f7f0130151c5ed56d410a0f18f5267adf08be33f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfhs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:07:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thp2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:08Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.626047 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9730289-8e50-4a9a-b474-db6c268d5a30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2262814ad3b77a7aecef6dc39226a540c7d7839576606e11c4765c858e81834\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80ca0769a1431fd4c134322feb11db7e54dd85e8f6b18a0ea43da48fe9b05005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d3c620e4b41d6183e427d9b95acc0e6e20f24998d210c706d93d0e8b08def41b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05c5ad3ad752dde0d33f89e89540f22790aa2905185c704d407fe605655c8e28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0c7ac25add51f8a9be790b9d47bc39155d83c4da0f3b241897d1395686feb68\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-12T13:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0112 13:06:53.362253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0112 13:06:53.363131 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2861103618/tls.crt::/tmp/serving-cert-2861103618/tls.key\\\\\\\"\\\\nI0112 13:06:58.635258 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0112 13:06:58.636943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0112 13:06:58.636960 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0112 13:06:58.636978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0112 13:06:58.636983 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0112 13:06:58.642885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0112 13:06:58.642904 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0112 13:06:58.642919 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642925 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0112 13:06:58.642931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0112 13:06:58.642934 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0112 13:06:58.642937 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0112 13:06:58.645379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eeac0b697ceba82e51d043f12dcf4c6f0028990416b1ee40c5181232d962192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:08Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.639283 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.639309 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.639336 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.639349 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.639358 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:08Z","lastTransitionTime":"2026-01-12T13:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.664059 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14cae238-29c1-4657-b3f0-6a834484f48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b1b813e14b2e613be951c247a67eb9b5b29604c639ec2c8a26c652911e0a342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc8b55ba464
a72a72e6361e6847c4e8c8b27f317e8eba5d95923fbaf62589880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://259d2e205fd4a46e432a91b0e09646a58b44d6da55b06c6d4ac87010c85babc4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00bb60e0955774504f186a916e89495432d2ea6a6b01cadbbe0cc6871383a030\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:08Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.703794 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a82c47afb3ec7afc7fa35ff0e1e85e288f9e1a908459024005a16c0c8f3b0050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-12T13:07:08Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.741168 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.741209 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.741219 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.741233 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.741242 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:08Z","lastTransitionTime":"2026-01-12T13:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.743084 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3accce5d840e81a67e212ff934059ad73525c6ff3c73ed6ab4c6e2289a4d7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60b7e67369583f18d56633483204d326449c0f7456afe4b4fd1e7134eff438cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hdz6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:08Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.784973 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2p6r8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2223aac-784e-4653-8939-fcbd18c70ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81fbec7b59dcc9c80a97b122e2b0e738fbbfb3eafca1bf9989fe743f28573191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1dc0fffc41810cdb9a5eeb53b19f6a23d70a8133c6e12b19df575f86a55d18\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1dc0fffc41810cdb9a5eeb53b19f6a23d70a8133c6e12b19df575f86a55d18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab60600011f08831d514dad04b97fb6b587736b18b55b1bff9a33143b9a92997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab60600011f08831d514dad04b97fb6b587736b18b55b1bff9a33143b9a92997\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2709a93c305db448fb509fbbdf606c297b26f1ae08e6b9b05933c155f59416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff2709a93c305db448fb509fbbdf606c297b26f1ae08e6b9b05933c155f59416\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88f87
08217fbcbf532b977d30ab903955722d04a00ba29ded44ce09610140e27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88f8708217fbcbf532b977d30ab903955722d04a00ba29ded44ce09610140e27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5844c48078cc7d6868f4ff81ac1a2bb878892529b11823ecabd49fad4aed60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e5844c48078cc7d6868f4ff81ac1a2bb878892529b11823ecabd49fad4aed60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:04Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2e02e66890bca8171c7112c74521a43c3458f07890228426f04c2bdfad4599\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d2e02e66890bca8171c7112c74521a43c3458f07890228426f04c2bdfad4599\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2p6r8\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:08Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.824082 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fb543f1489aa79642944188788308013ed9b6bacb720a3ee689b376cbc6a33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\
\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:08Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.843098 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.843136 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.843145 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.843158 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.843166 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:08Z","lastTransitionTime":"2026-01-12T13:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.863832 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e120eaa6bd8e36a0bc509f7877252fbf4b0cebb89222dd193f75502e472fa7af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f05ca3c8a1887284f1162c44d1b917ad955eb8d77b816e830caddffdf0430383\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:08Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.903626 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:08Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.944719 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:08Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.945337 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.945367 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.945377 4580 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.945389 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.945397 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:08Z","lastTransitionTime":"2026-01-12T13:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:08 crc kubenswrapper[4580]: I0112 13:07:08.987944 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35b1ac8c-9d11-4c54-98ab-fa848030f05e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1088ffa1a5bf02ca8606518a6f8c9cbeba544651dfafbb34e8860c2a12ffc1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c98177e2b081aadb6fd03620e308bb5d9ff403f1498eb875f7cf6d836dd23aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cea93cd026e7a60c22105833d2c3ada192fc16d45f46e5c9ce2652e94f92fab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1
847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c811167080fb15b5c19b8b57f76f4b8c5b2ed87d43d1b320ad024683ab58b65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14411e27d1e7de0627ca0d6f0ecbca70787ef8e9311ff3ffbb923da942e47955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://200ede5d7f69bb74d8e7d1b5081850d73057f7aef07049cab7a4dd1382de0cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://200ede5d7f69bb74d8e7d1b5081850d73057f7aef07049cab7a4dd1382de0cfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04470dc724661e24dc43e182f9c5dc106623e8dfb269280e6dc0fc0710f6a4a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04470dc724661e24dc43e182f9c5dc106623e8dfb269280e6dc0fc0710f6a4a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da31e
fcbced890b1046b1f058c1c00e4d2788162749c1da32d87c8b59360aa58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da31efcbced890b1046b1f058c1c00e4d2788162749c1da32d87c8b59360aa58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:08Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.022251 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8ch98" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f20fb33-a98a-4b04-81b9-5ea16ae9f57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://643e92b14688d35a567c7351e9231a8855ec7d9704cc97466c2d901c4525108a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nbmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8ch98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:09Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.047551 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.047897 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.047907 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.047919 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.047928 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:09Z","lastTransitionTime":"2026-01-12T13:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.063335 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14cae238-29c1-4657-b3f0-6a834484f48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b1b813e14b2e613be951c247a67eb9b5b29604c639ec2c8a26c652911e0a342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc8b55ba464
a72a72e6361e6847c4e8c8b27f317e8eba5d95923fbaf62589880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://259d2e205fd4a46e432a91b0e09646a58b44d6da55b06c6d4ac87010c85babc4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00bb60e0955774504f186a916e89495432d2ea6a6b01cadbbe0cc6871383a030\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:09Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.104456 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:09Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.143863 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nnz5s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f39bcc-5a25-4746-988b-2251fd1be8c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56aa8b2b49ab1c35203cc85f8e7cd333d538b5739be0e36db8a3fa8263c079ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m82m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nnz5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:09Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.150047 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:09 crc 
kubenswrapper[4580]: I0112 13:07:09.150075 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.150087 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.150099 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.150122 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:09Z","lastTransitionTime":"2026-01-12T13:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.187450 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd4e0810-eddb-47f5-a7dc-beed7b545112\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fac5585e690495e9f154b99e6a05f94dd617a57d0826867644b56df00697b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57fdd89443f292661ae2a8f73016f4a7f2889c08ffebd55d67ada2590b4344db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc26f2fe9c241fc3ede61426abd140792056fe45e03192531431303ac9669685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://381c313bb77deef21772fc32104aec4c0325e3493c641e2bf615bd897e58c71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ac8df759fbebae467ffd8c178ca19221cefd5f3c1aa999cd23e5d1e53a6187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b37c3b2535deee762ef305825de0a884e9088e57a34910ad2fcdaeb2d49d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68af564f5ecc4ca0c06683f7ec46ae5ffc5e2c4a9def47ed4048db3ba923f575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7d21777404eb2fca4761fa8c2ddf26398b56de55ee8361a3e37d8fb3fd8c515\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"message\\\":\\\"where column _uuid == {5e50827b-d271-442b-b8a7-7f33b2cd6b11}] Until: 
Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0112 13:07:06.941890 5796 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/packageserver-service]} name:Service_openshift-operator-lifecycle-manager/packageserver-service_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.153:5443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5e50827b-d271-442b-b8a7-7f33b2cd6b11}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0112 13:07:06.941775 5796 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0112 13:07:06.942021 5796 factory.go:656] Stopping watch factory\\\\nI0112 13:07:06.942022 5796 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0112 13:07:06.942062 5796 ovnkube.go:599] Stopped ovnkube\\\\nI0112 13:07:06.942071 5796 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0112 13:07:06.942118 5796 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0112 13:07:06.942248 5796 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68af564f5ecc4ca0c06683f7ec46ae5ffc5e2c4a9def47ed4048db3ba923f575\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-12T13:07:08Z\\\",\\\"message\\\":\\\"nkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add 
Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:08Z is after 2025-08-24T17:21:41Z]\\\\nI0112 13:07:08.093732 5988 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0112 13:07:08.093731 5988 services_controller.go:443] Built service openshift-kube-apiserver/apiserver LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.93\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, 
hasNodePort:false\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00ff7f6b5ad3d1798e88f127c9bf71095fcbdfcf8f4338afa385717f1564ebf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1
e13e16eb0c447b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hn77p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:09Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.222701 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thp2h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0adac83c-1303-404f-85a1-c7b477da2226\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a871f86fe29e275615cf2f7f0130151c5ed56d410a0f18f5267adf08be33f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfhs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:07:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thp2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:09Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.251935 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.251965 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.251974 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.251987 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.251996 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:09Z","lastTransitionTime":"2026-01-12T13:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.265003 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9730289-8e50-4a9a-b474-db6c268d5a30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2262814ad3b77a7aecef6dc39226a540c7d7839576606e11c4765c858e81834\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80ca0769a1431fd4c134322feb11db7e54dd85e8f6b18a0ea43da48fe9b05005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d3c620e4b41d6183e427d9b95acc0e6e20f24998d210c706d93d0e8b08def41b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05c5ad3ad752dde0d33f89e89540f22790aa2905185c704d407fe605655c8e28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0c7ac25add51f8a9be790b9d47bc39155d83c4da0f3b241897d1395686feb68\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-12T13:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0112 13:06:53.362253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0112 13:06:53.363131 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2861103618/tls.crt::/tmp/serving-cert-2861103618/tls.key\\\\\\\"\\\\nI0112 13:06:58.635258 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0112 13:06:58.636943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0112 13:06:58.636960 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0112 13:06:58.636978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0112 13:06:58.636983 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0112 13:06:58.642885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0112 13:06:58.642904 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0112 13:06:58.642919 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642925 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0112 13:06:58.642931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0112 13:06:58.642934 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0112 13:06:58.642937 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0112 13:06:58.645379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eeac0b697ceba82e51d043f12dcf4c6f0028990416b1ee40c5181232d962192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:09Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.281287 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.281346 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:07:09 crc kubenswrapper[4580]: E0112 13:07:09.281390 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 12 13:07:09 crc kubenswrapper[4580]: E0112 13:07:09.281437 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.353926 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.353973 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.353981 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.353995 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.354004 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:09Z","lastTransitionTime":"2026-01-12T13:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.455991 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.456027 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.456036 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.456050 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.456058 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:09Z","lastTransitionTime":"2026-01-12T13:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.504210 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hn77p_fd4e0810-eddb-47f5-a7dc-beed7b545112/ovnkube-controller/1.log" Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.506646 4580 scope.go:117] "RemoveContainer" containerID="68af564f5ecc4ca0c06683f7ec46ae5ffc5e2c4a9def47ed4048db3ba923f575" Jan 12 13:07:09 crc kubenswrapper[4580]: E0112 13:07:09.506774 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-hn77p_openshift-ovn-kubernetes(fd4e0810-eddb-47f5-a7dc-beed7b545112)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" podUID="fd4e0810-eddb-47f5-a7dc-beed7b545112" Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.516914 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nnz5s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f39bcc-5a25-4746-988b-2251fd1be8c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56aa8b2b49ab1c35203cc85f8e7cd333d538b5739be0e36db8a3fa8263c079ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m82m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nnz5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:09Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.531236 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd4e0810-eddb-47f5-a7dc-beed7b545112\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fac5585e690495e9f154b99e6a05f94dd617a57d0826867644b56df00697b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57fdd89443f292661ae2a8f73016f4a7f2889c08ffebd55d67ada2590b4344db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc26f2fe9c241fc3ede61426abd140792056fe45e03192531431303ac9669685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://381c313bb77deef21772fc32104aec4c0325e3493c641e2bf615bd897e58c71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ac8df759fbebae467ffd8c178ca19221cefd5f3c1aa999cd23e5d1e53a6187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b37c3b2535deee762ef305825de0a884e9088e57a34910ad2fcdaeb2d49d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68af564f5ecc4ca0c06683f7ec46ae5ffc5e2c4a9def47ed4048db3ba923f575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68af564f5ecc4ca0c06683f7ec46ae5ffc5e2c4a9def47ed4048db3ba923f575\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-12T13:07:08Z\\\",\\\"message\\\":\\\"nkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, 
failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:08Z is after 2025-08-24T17:21:41Z]\\\\nI0112 13:07:08.093732 5988 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0112 13:07:08.093731 5988 services_controller.go:443] Built service openshift-kube-apiserver/apiserver LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.93\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hn77p_openshift-ovn-kubernetes(fd4e0810-eddb-47f5-a7dc-beed7b545112)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00ff7f6b5ad3d1798e88f127c9bf71095fcbdfcf8f4338afa385717f1564ebf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ea8f8c492e0c30d17
1b9b05aa00966402c80f973de31557a1e13e16eb0c447b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hn77p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:09Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.537901 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thp2h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0adac83c-1303-404f-85a1-c7b477da2226\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a871f86fe29e275615cf2f7f0130151c5ed56d410a0f18f5267adf08be33f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfhs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:07:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thp2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:09Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.548478 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9730289-8e50-4a9a-b474-db6c268d5a30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2262814ad3b77a7aecef6dc39226a540c7d7839576606e11c4765c858e81834\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80ca0769a1431fd4c134322feb11db7e54dd85e8f6b18a0ea43da48fe9b05005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d3c620e4b41d6183e427d9b95acc0e6e20f24998d210c706d93d0e8b08def41b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05c5ad3ad752dde0d33f89e89540f22790aa2905185c704d407fe605655c8e28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0c7ac25add51f8a9be790b9d47bc39155d83c4da0f3b241897d1395686feb68\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-12T13:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0112 13:06:53.362253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0112 13:06:53.363131 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2861103618/tls.crt::/tmp/serving-cert-2861103618/tls.key\\\\\\\"\\\\nI0112 13:06:58.635258 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0112 13:06:58.636943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0112 13:06:58.636960 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0112 13:06:58.636978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0112 13:06:58.636983 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0112 13:06:58.642885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0112 13:06:58.642904 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0112 13:06:58.642919 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642925 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0112 13:06:58.642931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0112 13:06:58.642934 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0112 13:06:58.642937 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0112 13:06:58.645379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eeac0b697ceba82e51d043f12dcf4c6f0028990416b1ee40c5181232d962192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:09Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.557198 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14cae238-29c1-4657-b3f0-6a834484f48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b1b813e14b2e613be951c247a67eb9b5b29604c639ec2c8a26c652911e0a342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc8b55ba464a72a72e6361e6847c4e8c8b27f317e8eba5d95923fbaf62589880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://259d2e205fd4a46e432a91b0e09646a58b44d6da55b06c6d4ac87010c85babc4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00bb60e0955774504f186a916e89495432d2ea6a6b01cadbbe0cc6871383a030\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:09Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.558071 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.558128 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.558137 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.558148 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.558159 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:09Z","lastTransitionTime":"2026-01-12T13:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.566018 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:09Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.576627 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2p6r8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2223aac-784e-4653-8939-fcbd18c70ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81fbec7b59dcc9c80a97b122e2b0e738fbbfb3eafca1bf9989fe743f28573191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1dc0fffc41810cdb9a5eeb53b19f6a23d70a8133c6e12b19df575f86a55d18\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1dc0fffc41810cdb9a5eeb53b19f6a23d70a8133c6e12b19df575f86a55d18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab60600011f08831d514dad04b97fb6b587736b18b55b1bff9a33143b9a92997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab60600011f08831d514dad04b97fb6b587736b18b55b1bff9a33143b9a92997\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2709a93c305db448fb509fbbdf606c297b26f1ae08e6b9b05933c155f59416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff2709a93c305db448fb509fbbdf606c297b26f1ae08e6b9b05933c155f59416\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88f87
08217fbcbf532b977d30ab903955722d04a00ba29ded44ce09610140e27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88f8708217fbcbf532b977d30ab903955722d04a00ba29ded44ce09610140e27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5844c48078cc7d6868f4ff81ac1a2bb878892529b11823ecabd49fad4aed60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e5844c48078cc7d6868f4ff81ac1a2bb878892529b11823ecabd49fad4aed60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:04Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2e02e66890bca8171c7112c74521a43c3458f07890228426f04c2bdfad4599\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d2e02e66890bca8171c7112c74521a43c3458f07890228426f04c2bdfad4599\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2p6r8\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:09Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.585116 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a82c47afb3ec7afc7fa35ff0e1e85e288f9e1a908459024005a16c0c8f3b0050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:09Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.623810 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3accce5d840e81a67e212ff934059ad73525c6ff3c73ed6ab4c6e2289a4d7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08a
af09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60b7e67369583f18d56633483204d326449c0f7456afe4b4fd1e7134eff438cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hdz6l\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:09Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.663116 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.663166 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.663189 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.663226 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.663257 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:09Z","lastTransitionTime":"2026-01-12T13:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.665997 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:09Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.704768 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:09Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.748408 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35b1ac8c-9d11-4c54-98ab-fa848030f05e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1088ffa1a5bf02ca8606518a6f8c9cbeba544651dfafbb34e8860c2a12ffc1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c98177e2b081aadb6fd03620e308bb5d9ff403f1498eb875f7cf6d836dd23aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cea93cd026e7a60c22105833d2c3ada192fc16d45f46e5c9ce2652e94f92fab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c811167080fb15b5c19b8b57f76f4b8c5b2ed87d43d1b320ad024683ab58b65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14411e27d1e7de0627ca0d6f0ecbca70787ef8e9311ff3ffbb923da942e47955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://200ede5d7f69bb74d8e7d1b5081850d73057f7aef07049cab7a4dd1382de0cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://200ede5d7f69bb74d8e7d1b5081850d73057f7aef07049cab7a4dd1382de0cfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04470dc724661e24dc43e182f9c5dc106623e8dfb269280e6dc0fc0710f6a4a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04470dc724661e24dc43e182f9c5dc106623e8dfb269280e6dc0fc0710f6a4a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da31efcbced890b1046b1f058c1c00e4d2788162749c1da32d87c8b59360aa58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da31efcbced890b1046b1f058c1c00e4d2788162749c1da32d87c8b59360aa58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:09Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.765765 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.765791 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.765801 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.765816 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.765825 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:09Z","lastTransitionTime":"2026-01-12T13:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.784091 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fb543f1489aa79642944188788308013ed9b6bacb720a3ee689b376cbc6a33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:09Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.823065 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e120eaa6bd8e36a0bc509f7877252fbf4b0cebb89222dd193f75502e472fa7af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-id
entity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f05ca3c8a1887284f1162c44d1b917ad955eb8d77b816e830caddffdf0430383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:09Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.862643 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8ch98" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f20fb33-a98a-4b04-81b9-5ea16ae9f57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://643e92b14688d35a567c7351e9231a8855ec7d9704cc97466c2d901c4525108a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nbmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8ch98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:09Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.867928 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.867960 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.867969 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.867984 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.867995 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:09Z","lastTransitionTime":"2026-01-12T13:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.969674 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.969774 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.969839 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.969921 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:09 crc kubenswrapper[4580]: I0112 13:07:09.969974 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:09Z","lastTransitionTime":"2026-01-12T13:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:10 crc kubenswrapper[4580]: I0112 13:07:10.071941 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:10 crc kubenswrapper[4580]: I0112 13:07:10.071972 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:10 crc kubenswrapper[4580]: I0112 13:07:10.071981 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:10 crc kubenswrapper[4580]: I0112 13:07:10.072000 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:10 crc kubenswrapper[4580]: I0112 13:07:10.072009 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:10Z","lastTransitionTime":"2026-01-12T13:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:10 crc kubenswrapper[4580]: I0112 13:07:10.173958 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:10 crc kubenswrapper[4580]: I0112 13:07:10.174050 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:10 crc kubenswrapper[4580]: I0112 13:07:10.174122 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:10 crc kubenswrapper[4580]: I0112 13:07:10.174187 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:10 crc kubenswrapper[4580]: I0112 13:07:10.174246 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:10Z","lastTransitionTime":"2026-01-12T13:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:10 crc kubenswrapper[4580]: I0112 13:07:10.276598 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:10 crc kubenswrapper[4580]: I0112 13:07:10.276981 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:10 crc kubenswrapper[4580]: I0112 13:07:10.277040 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:10 crc kubenswrapper[4580]: I0112 13:07:10.277121 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:10 crc kubenswrapper[4580]: I0112 13:07:10.277176 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:10Z","lastTransitionTime":"2026-01-12T13:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:10 crc kubenswrapper[4580]: I0112 13:07:10.280908 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 12 13:07:10 crc kubenswrapper[4580]: E0112 13:07:10.281015 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 12 13:07:10 crc kubenswrapper[4580]: I0112 13:07:10.378439 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:10 crc kubenswrapper[4580]: I0112 13:07:10.378474 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:10 crc kubenswrapper[4580]: I0112 13:07:10.378482 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:10 crc kubenswrapper[4580]: I0112 13:07:10.378493 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:10 crc kubenswrapper[4580]: I0112 13:07:10.378502 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:10Z","lastTransitionTime":"2026-01-12T13:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:10 crc kubenswrapper[4580]: I0112 13:07:10.480190 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:10 crc kubenswrapper[4580]: I0112 13:07:10.480221 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:10 crc kubenswrapper[4580]: I0112 13:07:10.480229 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:10 crc kubenswrapper[4580]: I0112 13:07:10.480240 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:10 crc kubenswrapper[4580]: I0112 13:07:10.480250 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:10Z","lastTransitionTime":"2026-01-12T13:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:10 crc kubenswrapper[4580]: I0112 13:07:10.582378 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:10 crc kubenswrapper[4580]: I0112 13:07:10.582413 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:10 crc kubenswrapper[4580]: I0112 13:07:10.582422 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:10 crc kubenswrapper[4580]: I0112 13:07:10.582434 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:10 crc kubenswrapper[4580]: I0112 13:07:10.582447 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:10Z","lastTransitionTime":"2026-01-12T13:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:10 crc kubenswrapper[4580]: I0112 13:07:10.684041 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:10 crc kubenswrapper[4580]: I0112 13:07:10.684059 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:10 crc kubenswrapper[4580]: I0112 13:07:10.684066 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:10 crc kubenswrapper[4580]: I0112 13:07:10.684076 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:10 crc kubenswrapper[4580]: I0112 13:07:10.684082 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:10Z","lastTransitionTime":"2026-01-12T13:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:10 crc kubenswrapper[4580]: I0112 13:07:10.786398 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:10 crc kubenswrapper[4580]: I0112 13:07:10.786425 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:10 crc kubenswrapper[4580]: I0112 13:07:10.786435 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:10 crc kubenswrapper[4580]: I0112 13:07:10.786448 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:10 crc kubenswrapper[4580]: I0112 13:07:10.786460 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:10Z","lastTransitionTime":"2026-01-12T13:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:10 crc kubenswrapper[4580]: I0112 13:07:10.887948 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:10 crc kubenswrapper[4580]: I0112 13:07:10.887973 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:10 crc kubenswrapper[4580]: I0112 13:07:10.887981 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:10 crc kubenswrapper[4580]: I0112 13:07:10.887992 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:10 crc kubenswrapper[4580]: I0112 13:07:10.888000 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:10Z","lastTransitionTime":"2026-01-12T13:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:10 crc kubenswrapper[4580]: I0112 13:07:10.989958 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:10 crc kubenswrapper[4580]: I0112 13:07:10.989999 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:10 crc kubenswrapper[4580]: I0112 13:07:10.990007 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:10 crc kubenswrapper[4580]: I0112 13:07:10.990028 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:10 crc kubenswrapper[4580]: I0112 13:07:10.990037 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:10Z","lastTransitionTime":"2026-01-12T13:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.091375 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.091492 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.091572 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.091644 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.091710 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:11Z","lastTransitionTime":"2026-01-12T13:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.193550 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.193608 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.193642 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.193661 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.193674 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:11Z","lastTransitionTime":"2026-01-12T13:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.281024 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.281048 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 12 13:07:11 crc kubenswrapper[4580]: E0112 13:07:11.281163 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 12 13:07:11 crc kubenswrapper[4580]: E0112 13:07:11.281312 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.295200 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.295232 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.295251 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.295265 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.295275 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:11Z","lastTransitionTime":"2026-01-12T13:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.297576 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35b1ac8c-9d11-4c54-98ab-fa848030f05e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1088ffa1a5bf02ca8606518a6f8c9cbeba544651dfafbb34e8860c2a12ffc1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c98177e2b081aadb6fd03620e308bb5d9ff403f1498eb875f7cf6d836dd23aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cea93cd026e7a60c22105833d2c3ada192fc16d45f46e5c9ce2652e94f92fab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c811167080fb15b5c19b8b57f76f4b8c5b2ed87d43d1b320ad024683ab58b65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14411e27d1e7de0627ca0d6f0ecbca70787ef8e9311ff3ffbb923da942e47955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://200ede5d7f69bb74d8e7d1b5081850d73057f7aef07049cab7a4dd1382de0cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://200ede5d7f69bb74d8e7d1b5081850d73057f7aef07049cab7a4dd1382de0cfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04470dc724661e24dc43e182f9c5dc106623e8dfb269280e6dc0fc0710f6a4a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04470dc724661e24dc43e182f9c5dc106623e8dfb269280e6dc0fc0710f6a4a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da31efcbced890b1046b1f058c1c00e4d2788162749c1da32d87c8b59360aa58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da31efcbced890b1046b1f058c1c00e4d2788162749c1da32d87c8b59360aa58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-12T13:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:11Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.306497 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fb543f1489aa79642944188788308013ed9b6bacb720a3ee689b376cbc6a33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:11Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.319928 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e120eaa6bd8e36a0bc509f7877252fbf4b0cebb89222dd193f75502e472fa7af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://f05ca3c8a1887284f1162c44d1b917ad955eb8d77b816e830caddffdf0430383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:11Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.329069 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:11Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.337367 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:11Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.344727 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8ch98" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f20fb33-a98a-4b04-81b9-5ea16ae9f57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://643e92b14688d35a567c7351e9231a8855ec7d9704cc97466c2d901c4525108a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nbmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8ch98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:11Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.352790 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thp2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0adac83c-1303-404f-85a1-c7b477da2226\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a871f86fe29e275615cf2f7f0130151c5ed56d410a0f18f5267adf08be33f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfhs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:07:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thp2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:11Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.362090 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9730289-8e50-4a9a-b474-db6c268d5a30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2262814ad3b77a7aecef6dc39226a540c7d7839576606e11c4765c858e81834\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80ca0769a1431fd4c134322feb11db7e54dd85e8f6b18a0ea43da48fe9b05005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c620e4b41d6183e427d9b95acc0e6e20f24998d210c706d93d0e8b08def41b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05c5ad3ad752dde0d33f89e89540f22790aa2905185c704d407fe605655c8e28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0c7ac25add51f8a9be790b9d47bc39155d83c4da0f3b241897d1395686feb68\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-12T13:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0112 13:06:53.362253 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0112 13:06:53.363131 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2861103618/tls.crt::/tmp/serving-cert-2861103618/tls.key\\\\\\\"\\\\nI0112 13:06:58.635258 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0112 13:06:58.636943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0112 13:06:58.636960 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0112 13:06:58.636978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0112 13:06:58.636983 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0112 13:06:58.642885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0112 13:06:58.642904 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0112 13:06:58.642919 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642925 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0112 13:06:58.642931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0112 13:06:58.642934 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0112 13:06:58.642937 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0112 13:06:58.645379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eeac0b697ceba82e51d043f12dcf4c6f0028990416b1ee40c5181232d962192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:11Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.370318 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14cae238-29c1-4657-b3f0-6a834484f48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b1b813e14b2e613be951c247a67eb9b5b29604c639ec2c8a26c652911e0a342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc8b55ba464a72a72e6361e6847c4e8c8b27f317e8eba5d95923fbaf62589880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://259d2e205fd4a46e432a91b0e09646a58b44d6da55b06c6d4ac87010c85babc4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00bb60e0955774504f186a916e89495432d2ea6a6b01cadbbe0cc6871383a030\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:11Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.377656 4580 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:11Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.385491 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nnz5s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f39bcc-5a25-4746-988b-2251fd1be8c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56aa8b2b49ab1c35203cc85f8e7cd333d538b5739be0e36db8a3fa8263c079ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m82m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nnz5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:11Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.396470 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:11 crc 
kubenswrapper[4580]: I0112 13:07:11.396494 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.396507 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.396519 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.396529 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:11Z","lastTransitionTime":"2026-01-12T13:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.398340 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd4e0810-eddb-47f5-a7dc-beed7b545112\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fac5585e690495e9f154b99e6a05f94dd617a57d0826867644b56df00697b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57fdd89443f292661ae2a8f73016f4a7f2889c08ffebd55d67ada2590b4344db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc26f2fe9c241fc3ede61426abd140792056fe45e03192531431303ac9669685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://381c313bb77deef21772fc32104aec4c0325e3493c641e2bf615bd897e58c71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ac8df759fbebae467ffd8c178ca19221cefd5f3c1aa999cd23e5d1e53a6187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b37c3b2535deee762ef305825de0a884e9088e57a34910ad2fcdaeb2d49d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68af564f5ecc4ca0c06683f7ec46ae5ffc5e2c4a9def47ed4048db3ba923f575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68af564f5ecc4ca0c06683f7ec46ae5ffc5e2c4a9def47ed4048db3ba923f575\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-12T13:07:08Z\\\",\\\"message\\\":\\\"nkube: [failed to start network controller: failed to start default network controller: 
unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:08Z is after 2025-08-24T17:21:41Z]\\\\nI0112 13:07:08.093732 5988 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0112 13:07:08.093731 5988 services_controller.go:443] Built service openshift-kube-apiserver/apiserver LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.93\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hn77p_openshift-ovn-kubernetes(fd4e0810-eddb-47f5-a7dc-beed7b545112)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00ff7f6b5ad3d1798e88f127c9bf71095fcbdfcf8f4338afa385717f1564ebf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ea8f8c492e0c30d17
1b9b05aa00966402c80f973de31557a1e13e16eb0c447b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hn77p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:11Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.406279 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a82c47afb3ec7afc7fa35ff0e1e85e288f9e1a908459024005a16c0c8f3b0050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-12T13:07:11Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.413738 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3accce5d840e81a67e212ff934059ad73525c6ff3c73ed6ab4c6e2289a4d7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60b7e67369583f18d56633483204d326449c0f7456afe4b4fd1e7134eff438cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hdz6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:11Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.424348 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2p6r8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2223aac-784e-4653-8939-fcbd18c70ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81fbec7b59dcc9c80a97b122e2b0e738fbbfb3eafca1bf9989fe743f28573191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1dc0fffc41810cdb9a5eeb53b19f6a23d70a8133c6e12b19df575f86a55d18\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1dc0fffc41810cdb9a5eeb53b19f6a23d70a8133c6e12b19df575f86a55d18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab60600011f08831d514dad04b97fb6b587736b18b55b1bff9a33143b9a92997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab60600011f08831d514dad04b97fb6b587736b18b55b1bff9a33143b9a92997\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2709a93c305db448fb509fbbdf606c297b26f1ae08e6b9b05933c155f59416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff2709a93c305db448fb509fbbdf606c297b26f1ae08e6b9b05933c155f59416\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88f87
08217fbcbf532b977d30ab903955722d04a00ba29ded44ce09610140e27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88f8708217fbcbf532b977d30ab903955722d04a00ba29ded44ce09610140e27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5844c48078cc7d6868f4ff81ac1a2bb878892529b11823ecabd49fad4aed60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e5844c48078cc7d6868f4ff81ac1a2bb878892529b11823ecabd49fad4aed60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:04Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2e02e66890bca8171c7112c74521a43c3458f07890228426f04c2bdfad4599\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d2e02e66890bca8171c7112c74521a43c3458f07890228426f04c2bdfad4599\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2p6r8\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:11Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.497886 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.497928 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.497941 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.497960 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.497973 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:11Z","lastTransitionTime":"2026-01-12T13:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.524798 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vmmdr"] Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.525323 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vmmdr" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.526782 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.528420 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.534995 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:11Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.547614 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35b1ac8c-9d11-4c54-98ab-fa848030f05e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1088ffa1a5bf02ca8606518a6f8c9cbeba544651dfafbb34e8860c2a12ffc1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c98177e2b081aadb6fd03620e308bb5d9ff403f1498eb875f7cf6d836dd23aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cea93cd026e7a60c22105833d2c3ada192fc16d45f46e5c9ce2652e94f92fab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c811167080fb15b5c19b8b57f76f4b8c5b2ed87d43d1b320ad024683ab58b65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14411e27d1e7de0627ca0d6f0ecbca70787ef8e9311ff3ffbb923da942e47955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://200ede5d7f69bb74d8e7d1b5081850d73057f7aef07049cab7a4dd1382de0cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://200ede5d7f69bb74d8e7d1b5081850d73057f7aef07049cab7a4dd1382de0cfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04470dc724661e24dc43e182f9c5dc106623e8dfb269280e6dc0fc0710f6a4a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04470dc724661e24dc43e182f9c5dc106623e8dfb269280e6dc0fc0710f6a4a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da31efcbced890b1046b1f058c1c00e4d2788162749c1da32d87c8b59360aa58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da31efcbced890b1046b1f058c1c00e4d2788162749c1da32d87c8b59360aa58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:11Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.557250 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fb543f1489aa79642944188788308013ed9b6bacb720a3ee689b376cbc6a33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:11Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.561860 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsss4\" (UniqueName: \"kubernetes.io/projected/61051313-b754-4528-ade6-ffacbebafb8e-kube-api-access-fsss4\") pod \"ovnkube-control-plane-749d76644c-vmmdr\" (UID: \"61051313-b754-4528-ade6-ffacbebafb8e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vmmdr" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.561932 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/61051313-b754-4528-ade6-ffacbebafb8e-env-overrides\") pod \"ovnkube-control-plane-749d76644c-vmmdr\" (UID: \"61051313-b754-4528-ade6-ffacbebafb8e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vmmdr" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.561965 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/61051313-b754-4528-ade6-ffacbebafb8e-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-vmmdr\" (UID: \"61051313-b754-4528-ade6-ffacbebafb8e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vmmdr" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.561987 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/61051313-b754-4528-ade6-ffacbebafb8e-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-vmmdr\" (UID: \"61051313-b754-4528-ade6-ffacbebafb8e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vmmdr" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.566221 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e120eaa6bd8e36a0bc509f7877252fbf4b0cebb89222dd193f75502e472fa7af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47
ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f05ca3c8a1887284f1162c44d1b917ad955eb8d77b816e830caddffdf0430383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-12T13:07:11Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.574167 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:11Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.580897 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8ch98" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f20fb33-a98a-4b04-81b9-5ea16ae9f57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://643e92b14688d35a567c7351e9231a8855ec7d9704cc97466c2d901c4525108a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nbmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8ch98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:11Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.592680 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd4e0810-eddb-47f5-a7dc-beed7b545112\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fac5585e690495e9f154b99e6a05f94dd617a57d0826867644b56df00697b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57fdd89443f292661ae2a8f73016f4a7f2889c08ffebd55d67ada2590b4344db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc26f2fe9c241fc3ede61426abd140792056fe45e03192531431303ac9669685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://381c313bb77deef21772fc32104aec4c0325e3493c641e2bf615bd897e58c71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ac8df759fbebae467ffd8c178ca19221cefd5f3c1aa999cd23e5d1e53a6187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b37c3b2535deee762ef305825de0a884e9088e57a34910ad2fcdaeb2d49d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68af564f5ecc4ca0c06683f7ec46ae5ffc5e2c4a9def47ed4048db3ba923f575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68af564f5ecc4ca0c06683f7ec46ae5ffc5e2c4a9def47ed4048db3ba923f575\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-12T13:07:08Z\\\",\\\"message\\\":\\\"nkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, 
failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:08Z is after 2025-08-24T17:21:41Z]\\\\nI0112 13:07:08.093732 5988 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0112 13:07:08.093731 5988 services_controller.go:443] Built service openshift-kube-apiserver/apiserver LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.93\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hn77p_openshift-ovn-kubernetes(fd4e0810-eddb-47f5-a7dc-beed7b545112)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00ff7f6b5ad3d1798e88f127c9bf71095fcbdfcf8f4338afa385717f1564ebf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ea8f8c492e0c30d17
1b9b05aa00966402c80f973de31557a1e13e16eb0c447b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hn77p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:11Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.599920 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.599969 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.600001 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.600016 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.600034 4580 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:11Z","lastTransitionTime":"2026-01-12T13:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.603891 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thp2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0adac83c-1303-404f-85a1-c7b477da2226\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a871f86fe29e275615cf2f7f0130151c5ed56d410a0f18f5267adf08be33f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfhs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:07:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thp2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:11Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.613770 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9730289-8e50-4a9a-b474-db6c268d5a30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2262814ad3b77a7aecef6dc39226a540c7d7839576606e11c4765c858e81834\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80ca0769a1431fd4c134322feb11db7e54dd85e8f6b18a0ea43da48fe9b05005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c620e4b41d6183e427d9b95acc0e6e20f24998d210c706d93d0e8b08def41b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05c5ad3ad752dde0d33f89e89540f22790aa2905185c704d407fe605655c8e28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0c7ac25add51f8a9be790b9d47bc39155d83c4da0f3b241897d1395686feb68\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-12T13:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0112 13:06:53.362253 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0112 13:06:53.363131 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2861103618/tls.crt::/tmp/serving-cert-2861103618/tls.key\\\\\\\"\\\\nI0112 13:06:58.635258 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0112 13:06:58.636943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0112 13:06:58.636960 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0112 13:06:58.636978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0112 13:06:58.636983 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0112 13:06:58.642885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0112 13:06:58.642904 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0112 13:06:58.642919 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642925 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0112 13:06:58.642931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0112 13:06:58.642934 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0112 13:06:58.642937 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0112 13:06:58.645379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eeac0b697ceba82e51d043f12dcf4c6f0028990416b1ee40c5181232d962192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:11Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.622006 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14cae238-29c1-4657-b3f0-6a834484f48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b1b813e14b2e613be951c247a67eb9b5b29604c639ec2c8a26c652911e0a342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc8b55ba464a72a72e6361e6847c4e8c8b27f317e8eba5d95923fbaf62589880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://259d2e205fd4a46e432a91b0e09646a58b44d6da55b06c6d4ac87010c85babc4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00bb60e0955774504f186a916e89495432d2ea6a6b01cadbbe0cc6871383a030\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:11Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.630360 4580 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:11Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.638997 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nnz5s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f39bcc-5a25-4746-988b-2251fd1be8c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56aa8b2b49ab1c35203cc85f8e7cd333d538b5739be0e36db8a3fa8263c079ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m82m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nnz5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:11Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.645773 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vmmdr" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61051313-b754-4528-ade6-ffacbebafb8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsss4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsss4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vmmdr\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:11Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.653084 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a82c47afb3ec7afc7fa35ff0e1e85e288f9e1a908459024005a16c0c8f3b0050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:11Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.660093 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3accce5d840e81a67e212ff934059ad73525c6ff3c73ed6ab4c6e2289a4d7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"
,\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60b7e67369583f18d56633483204d326449c0f7456afe4b4fd1e7134eff438cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hdz6l\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:11Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.662341 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/61051313-b754-4528-ade6-ffacbebafb8e-env-overrides\") pod \"ovnkube-control-plane-749d76644c-vmmdr\" (UID: \"61051313-b754-4528-ade6-ffacbebafb8e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vmmdr" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.662375 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/61051313-b754-4528-ade6-ffacbebafb8e-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-vmmdr\" (UID: \"61051313-b754-4528-ade6-ffacbebafb8e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vmmdr" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.662416 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/61051313-b754-4528-ade6-ffacbebafb8e-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-vmmdr\" (UID: \"61051313-b754-4528-ade6-ffacbebafb8e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vmmdr" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.662444 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsss4\" (UniqueName: \"kubernetes.io/projected/61051313-b754-4528-ade6-ffacbebafb8e-kube-api-access-fsss4\") pod \"ovnkube-control-plane-749d76644c-vmmdr\" (UID: \"61051313-b754-4528-ade6-ffacbebafb8e\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vmmdr" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.663087 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/61051313-b754-4528-ade6-ffacbebafb8e-env-overrides\") pod \"ovnkube-control-plane-749d76644c-vmmdr\" (UID: \"61051313-b754-4528-ade6-ffacbebafb8e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vmmdr" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.663237 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/61051313-b754-4528-ade6-ffacbebafb8e-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-vmmdr\" (UID: \"61051313-b754-4528-ade6-ffacbebafb8e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vmmdr" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.669817 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/61051313-b754-4528-ade6-ffacbebafb8e-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-vmmdr\" (UID: \"61051313-b754-4528-ade6-ffacbebafb8e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vmmdr" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.670803 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2p6r8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2223aac-784e-4653-8939-fcbd18c70ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81fbec7b59dcc9c80a97b122e2b0e738fbbfb3eafca1bf9989fe743f28573191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1dc0fffc41810cdb9a5eeb53b19f6a23d70a8133c6e12b19df575f86a55d18\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1dc0fffc41810cdb9a5eeb53b19f6a23d70a8133c6e12b19df575f86a55d18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab60600011f08831d514dad04b97fb6b587736b18b55b1bff9a33143b9a92997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab60600011f08831d514dad04b97fb6b587736b18b55b1bff9a33143b9a92997\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2709a93c305db448fb509fbbdf606c297b26f1ae08e6b9b05933c155f59416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff2709a93c305db448fb509fbbdf606c297b26f1ae08e6b9b05933c155f59416\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88f87
08217fbcbf532b977d30ab903955722d04a00ba29ded44ce09610140e27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88f8708217fbcbf532b977d30ab903955722d04a00ba29ded44ce09610140e27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5844c48078cc7d6868f4ff81ac1a2bb878892529b11823ecabd49fad4aed60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e5844c48078cc7d6868f4ff81ac1a2bb878892529b11823ecabd49fad4aed60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:04Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2e02e66890bca8171c7112c74521a43c3458f07890228426f04c2bdfad4599\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d2e02e66890bca8171c7112c74521a43c3458f07890228426f04c2bdfad4599\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2p6r8\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:11Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.675347 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsss4\" (UniqueName: \"kubernetes.io/projected/61051313-b754-4528-ade6-ffacbebafb8e-kube-api-access-fsss4\") pod \"ovnkube-control-plane-749d76644c-vmmdr\" (UID: \"61051313-b754-4528-ade6-ffacbebafb8e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vmmdr" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.702635 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.702737 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.702797 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.702865 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.702919 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:11Z","lastTransitionTime":"2026-01-12T13:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.804836 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.804895 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.804905 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.804918 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.804926 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:11Z","lastTransitionTime":"2026-01-12T13:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.835320 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vmmdr" Jan 12 13:07:11 crc kubenswrapper[4580]: W0112 13:07:11.845422 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61051313_b754_4528_ade6_ffacbebafb8e.slice/crio-c1eadb829d2c5d5378e067330fd8eb081b1e070b617024118f8482da7f02febb WatchSource:0}: Error finding container c1eadb829d2c5d5378e067330fd8eb081b1e070b617024118f8482da7f02febb: Status 404 returned error can't find the container with id c1eadb829d2c5d5378e067330fd8eb081b1e070b617024118f8482da7f02febb Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.866401 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.866430 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.866442 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.866458 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.866470 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:11Z","lastTransitionTime":"2026-01-12T13:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:11 crc kubenswrapper[4580]: E0112 13:07:11.874890 4580 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b4cb507-f154-474c-bea1-057456e7be91\\\",\\\"systemUUID\\\":\\\"f50d9485-f990-498d-a5ee-4bb4dd1663df\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:11Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.877387 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.877407 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.877471 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.877485 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.877492 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:11Z","lastTransitionTime":"2026-01-12T13:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:11 crc kubenswrapper[4580]: E0112 13:07:11.885346 4580 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b4cb507-f154-474c-bea1-057456e7be91\\\",\\\"systemUUID\\\":\\\"f50d9485-f990-498d-a5ee-4bb4dd1663df\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:11Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.887770 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.887801 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.887812 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.887827 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.887838 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:11Z","lastTransitionTime":"2026-01-12T13:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:11 crc kubenswrapper[4580]: E0112 13:07:11.896157 4580 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b4cb507-f154-474c-bea1-057456e7be91\\\",\\\"systemUUID\\\":\\\"f50d9485-f990-498d-a5ee-4bb4dd1663df\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:11Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.898280 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.898311 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.898321 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.898334 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.898344 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:11Z","lastTransitionTime":"2026-01-12T13:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b4cb507-f154-474c-bea1-057456e7be91\\\",\\\"systemUUID\\\":\\\"f50d9485-f990-498d-a5ee-4bb4dd1663df\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:11Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:11 crc kubenswrapper[4580]: E0112 13:07:11.916153 4580 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.917150 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.917176 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.917186 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.917199 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:11 crc kubenswrapper[4580]: I0112 13:07:11.917208 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:11Z","lastTransitionTime":"2026-01-12T13:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.019219 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.019259 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.019269 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.019284 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.019293 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:12Z","lastTransitionTime":"2026-01-12T13:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.122172 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.122211 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.122225 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.122244 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.122258 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:12Z","lastTransitionTime":"2026-01-12T13:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.224454 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.224497 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.224507 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.224522 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.224532 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:12Z","lastTransitionTime":"2026-01-12T13:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.281204 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 12 13:07:12 crc kubenswrapper[4580]: E0112 13:07:12.281403 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.326753 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.326792 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.326804 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.326822 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.326835 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:12Z","lastTransitionTime":"2026-01-12T13:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.429586 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.429627 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.429641 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.429654 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.429667 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:12Z","lastTransitionTime":"2026-01-12T13:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.517432 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vmmdr" event={"ID":"61051313-b754-4528-ade6-ffacbebafb8e","Type":"ContainerStarted","Data":"14a151ee487ef6c2e5141ec5a25b8b7e468c224b262fd09538db0e939b8cf95a"} Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.517484 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vmmdr" event={"ID":"61051313-b754-4528-ade6-ffacbebafb8e","Type":"ContainerStarted","Data":"a321f1ea1e9a558494aa66641fd251a100e0bdceddf5b2034bfa067c23555138"} Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.517497 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vmmdr" event={"ID":"61051313-b754-4528-ade6-ffacbebafb8e","Type":"ContainerStarted","Data":"c1eadb829d2c5d5378e067330fd8eb081b1e070b617024118f8482da7f02febb"} Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.525901 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8ch98" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f20fb33-a98a-4b04-81b9-5ea16ae9f57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://643e92b14688d35a567c7351e9231a8855ec7d9704cc97466c2d901c4525108a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nbmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8ch98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:12Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.531673 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.531729 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.531741 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.531753 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.531764 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:12Z","lastTransitionTime":"2026-01-12T13:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.533140 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thp2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0adac83c-1303-404f-85a1-c7b477da2226\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a871f86fe29e275615cf2f7f0130151c5ed56d410a0f18f5267adf08be33f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfhs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:07:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thp2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:12Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.543258 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9730289-8e50-4a9a-b474-db6c268d5a30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2262814ad3b77a7aecef6dc39226a540c7d7839576606e11c4765c858e81834\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80ca0769a1431fd4c134322feb11db7e54dd85e8f6b18a0ea43da48fe9b05005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d3c620e4b41d6183e427d9b95acc0e6e20f24998d210c706d93d0e8b08def41b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05c5ad3ad752dde0d33f89e89540f22790aa2905185c704d407fe605655c8e28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0c7ac25add51f8a9be790b9d47bc39155d83c4da0f3b241897d1395686feb68\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-12T13:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0112 13:06:53.362253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0112 13:06:53.363131 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2861103618/tls.crt::/tmp/serving-cert-2861103618/tls.key\\\\\\\"\\\\nI0112 13:06:58.635258 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0112 13:06:58.636943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0112 13:06:58.636960 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0112 13:06:58.636978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0112 13:06:58.636983 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0112 13:06:58.642885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0112 13:06:58.642904 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0112 13:06:58.642919 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642925 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0112 13:06:58.642931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0112 13:06:58.642934 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0112 13:06:58.642937 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0112 13:06:58.645379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eeac0b697ceba82e51d043f12dcf4c6f0028990416b1ee40c5181232d962192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:12Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.551428 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14cae238-29c1-4657-b3f0-6a834484f48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b1b813e14b2e613be951c247a67eb9b5b29604c639ec2c8a26c652911e0a342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc8b55ba464a72a72e6361e6847c4e8c8b27f317e8eba5d95923fbaf62589880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://259d2e205fd4a46e432a91b0e09646a58b44d6da55b06c6d4ac87010c85babc4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00bb60e0955774504f186a916e89495432d2ea6a6b01cadbbe0cc6871383a030\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:12Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.559006 4580 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:12Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.567516 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nnz5s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f39bcc-5a25-4746-988b-2251fd1be8c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56aa8b2b49ab1c35203cc85f8e7cd333d538b5739be0e36db8a3fa8263c079ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m82m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nnz5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:12Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.579952 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd4e0810-eddb-47f5-a7dc-beed7b545112\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fac5585e690495e9f154b99e6a05f94dd617a57d0826867644b56df00697b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57fdd89443f292661ae2a8f73016f4a7f2889c08ffebd55d67ada2590b4344db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc26f2fe9c241fc3ede61426abd140792056fe45e03192531431303ac9669685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://381c313bb77deef21772fc32104aec4c0325e3493c641e2bf615bd897e58c71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ac8df759fbebae467ffd8c178ca19221cefd5f3c1aa999cd23e5d1e53a6187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b37c3b2535deee762ef305825de0a884e9088e57a34910ad2fcdaeb2d49d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68af564f5ecc4ca0c06683f7ec46ae5ffc5e2c4a9def47ed4048db3ba923f575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68af564f5ecc4ca0c06683f7ec46ae5ffc5e2c4a9def47ed4048db3ba923f575\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-12T13:07:08Z\\\",\\\"message\\\":\\\"nkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, 
failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:08Z is after 2025-08-24T17:21:41Z]\\\\nI0112 13:07:08.093732 5988 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0112 13:07:08.093731 5988 services_controller.go:443] Built service openshift-kube-apiserver/apiserver LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.93\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hn77p_openshift-ovn-kubernetes(fd4e0810-eddb-47f5-a7dc-beed7b545112)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00ff7f6b5ad3d1798e88f127c9bf71095fcbdfcf8f4338afa385717f1564ebf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ea8f8c492e0c30d17
1b9b05aa00966402c80f973de31557a1e13e16eb0c447b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hn77p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:12Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.588809 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a82c47afb3ec7afc7fa35ff0e1e85e288f9e1a908459024005a16c0c8f3b0050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-12T13:07:12Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.597328 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3accce5d840e81a67e212ff934059ad73525c6ff3c73ed6ab4c6e2289a4d7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60b7e67369583f18d56633483204d326449c0f7456afe4b4fd1e7134eff438cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hdz6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:12Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.606669 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2p6r8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2223aac-784e-4653-8939-fcbd18c70ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81fbec7b59dcc9c80a97b122e2b0e738fbbfb3eafca1bf9989fe743f28573191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1dc0fffc41810cdb9a5eeb53b19f6a23d70a8133c6e12b19df575f86a55d18\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1dc0fffc41810cdb9a5eeb53b19f6a23d70a8133c6e12b19df575f86a55d18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab60600011f08831d514dad04b97fb6b587736b18b55b1bff9a33143b9a92997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab60600011f08831d514dad04b97fb6b587736b18b55b1bff9a33143b9a92997\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2709a93c305db448fb509fbbdf606c297b26f1ae08e6b9b05933c155f59416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff2709a93c305db448fb509fbbdf606c297b26f1ae08e6b9b05933c155f59416\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88f87
08217fbcbf532b977d30ab903955722d04a00ba29ded44ce09610140e27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88f8708217fbcbf532b977d30ab903955722d04a00ba29ded44ce09610140e27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5844c48078cc7d6868f4ff81ac1a2bb878892529b11823ecabd49fad4aed60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e5844c48078cc7d6868f4ff81ac1a2bb878892529b11823ecabd49fad4aed60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:04Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2e02e66890bca8171c7112c74521a43c3458f07890228426f04c2bdfad4599\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d2e02e66890bca8171c7112c74521a43c3458f07890228426f04c2bdfad4599\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2p6r8\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:12Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.613708 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vmmdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61051313-b754-4528-ade6-ffacbebafb8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a321f1ea1e9a558494aa66641fd251a100e0bdceddf5b2034bfa067c23555138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsss4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14a151ee487ef6c2e5141ec5a25b8b7e468c224b262fd09538db0e939b8cf95a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsss4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vmmdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-12T13:07:12Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.626324 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35b1ac8c-9d11-4c54-98ab-fa848030f05e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1088ffa1a5bf02ca8606518a6f8c9cbeba544651dfafbb34e8860c2a12ffc1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c98177e2b081aadb6fd03620e308bb5d9ff403f1498eb875f7cf6d836dd23aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cea93cd026e7a60c22105833d2c3ada192fc16d45f46e5c9ce2652e94f92fab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c811167080fb15b5c19b8b57f76f4b8c5b2ed87d43d1b320ad024683ab58b65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14411e27d1e7de0627ca0d6f0ecbca70787ef8e9311ff3ffbb923da942e47955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://200ede5d7f69bb74d8e7d1b5081850d73057f7aef07049cab7a4dd1382de0cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setu
p\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://200ede5d7f69bb74d8e7d1b5081850d73057f7aef07049cab7a4dd1382de0cfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04470dc724661e24dc43e182f9c5dc106623e8dfb269280e6dc0fc0710f6a4a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04470dc724661e24dc43e182f9c5dc106623e8dfb269280e6dc0fc0710f6a4a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da31efcbced890b1046b1f058c1c00e4d2788162749c1da32d87c8b59360aa58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da31efcbced890b1046b1f058c1c00e4d2788162749c1da32d87c8b59360aa58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\
"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:12Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.633642 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.633695 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.633706 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.633723 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.633736 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:12Z","lastTransitionTime":"2026-01-12T13:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.636817 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fb543f1489aa79642944188788308013ed9b6bacb720a3ee689b376cbc6a33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\
" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:12Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.645519 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e120eaa6bd8e36a0bc509f7877252fbf4b0cebb89222dd193f75502e472fa7af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/
ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f05ca3c8a1887284f1162c44d1b917ad955eb8d77b816e830caddffdf0430383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:12Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.653486 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:12Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.661307 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:12Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.735353 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.735379 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.735388 4580 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.735397 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.735406 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:12Z","lastTransitionTime":"2026-01-12T13:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.837518 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.837580 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.837592 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.837611 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.837624 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:12Z","lastTransitionTime":"2026-01-12T13:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.940066 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.940117 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.940131 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.940147 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.940156 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:12Z","lastTransitionTime":"2026-01-12T13:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.959857 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-jw27h"] Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.960545 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jw27h" Jan 12 13:07:12 crc kubenswrapper[4580]: E0112 13:07:12.960634 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jw27h" podUID="5066d8fa-2cee-4764-a817-b819d3876638" Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.968683 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8ch98" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f20fb33-a98a-4b04-81b9-5ea16ae9f57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://643e92b14688d35a567c7351e9231a8855ec7d9704cc97466c2d901c4525108a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/service
account\\\",\\\"name\\\":\\\"kube-api-access-5nbmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8ch98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:12Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.975888 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jw27h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5066d8fa-2cee-4764-a817-b819d3876638\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbqm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbqm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:07:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jw27h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:12Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.985035 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9730289-8e50-4a9a-b474-db6c268d5a30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2262814ad3b77a7aecef6dc39226a540c7d7839576606e11c4765c858e81834\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80ca0769a1431fd4c134322feb11db7e54dd85e8f6b18a0ea43da48fe9b05005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d3c620e4b41d6183e427d9b95acc0e6e20f24998d210c706d93d0e8b08def41b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05c5ad3ad752dde0d33f89e89540f22790aa2905185c704d407fe605655c8e28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0c7ac25add51f8a9be790b9d47bc39155d83c4da0f3b241897d1395686feb68\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-12T13:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0112 13:06:53.362253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0112 13:06:53.363131 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2861103618/tls.crt::/tmp/serving-cert-2861103618/tls.key\\\\\\\"\\\\nI0112 13:06:58.635258 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0112 13:06:58.636943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0112 13:06:58.636960 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0112 13:06:58.636978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0112 13:06:58.636983 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0112 13:06:58.642885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0112 13:06:58.642904 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0112 13:06:58.642919 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642925 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0112 13:06:58.642931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0112 13:06:58.642934 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0112 13:06:58.642937 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0112 13:06:58.645379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eeac0b697ceba82e51d043f12dcf4c6f0028990416b1ee40c5181232d962192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:12Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:12 crc kubenswrapper[4580]: I0112 13:07:12.993405 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14cae238-29c1-4657-b3f0-6a834484f48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b1b813e14b2e613be951c247a67eb9b5b29604c639ec2c8a26c652911e0a342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc8b55ba464a72a72e6361e6847c4e8c8b27f317e8eba5d95923fbaf62589880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://259d2e205fd4a46e432a91b0e09646a58b44d6da55b06c6d4ac87010c85babc4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00bb60e0955774504f186a916e89495432d2ea6a6b01cadbbe0cc6871383a030\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:12Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.001036 4580 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:13Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.008738 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nnz5s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f39bcc-5a25-4746-988b-2251fd1be8c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56aa8b2b49ab1c35203cc85f8e7cd333d538b5739be0e36db8a3fa8263c079ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m82m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nnz5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:13Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.020967 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd4e0810-eddb-47f5-a7dc-beed7b545112\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fac5585e690495e9f154b99e6a05f94dd617a57d0826867644b56df00697b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57fdd89443f292661ae2a8f73016f4a7f2889c08ffebd55d67ada2590b4344db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc26f2fe9c241fc3ede61426abd140792056fe45e03192531431303ac9669685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://381c313bb77deef21772fc32104aec4c0325e3493c641e2bf615bd897e58c71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ac8df759fbebae467ffd8c178ca19221cefd5f3c1aa999cd23e5d1e53a6187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b37c3b2535deee762ef305825de0a884e9088e57a34910ad2fcdaeb2d49d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68af564f5ecc4ca0c06683f7ec46ae5ffc5e2c4a9def47ed4048db3ba923f575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68af564f5ecc4ca0c06683f7ec46ae5ffc5e2c4a9def47ed4048db3ba923f575\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-12T13:07:08Z\\\",\\\"message\\\":\\\"nkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, 
failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:08Z is after 2025-08-24T17:21:41Z]\\\\nI0112 13:07:08.093732 5988 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0112 13:07:08.093731 5988 services_controller.go:443] Built service openshift-kube-apiserver/apiserver LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.93\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hn77p_openshift-ovn-kubernetes(fd4e0810-eddb-47f5-a7dc-beed7b545112)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00ff7f6b5ad3d1798e88f127c9bf71095fcbdfcf8f4338afa385717f1564ebf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ea8f8c492e0c30d17
1b9b05aa00966402c80f973de31557a1e13e16eb0c447b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hn77p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:13Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.027884 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thp2h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0adac83c-1303-404f-85a1-c7b477da2226\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a871f86fe29e275615cf2f7f0130151c5ed56d410a0f18f5267adf08be33f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfhs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:07:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thp2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:13Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.035204 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a82c47afb3ec7afc7fa35ff0e1e85e288f9e1a908459024005a16c0c8f3b0050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T1
3:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:13Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.042437 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.042470 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.042480 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.042495 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.042509 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:13Z","lastTransitionTime":"2026-01-12T13:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.046485 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3accce5d840e81a67e212ff934059ad73525c6ff3c73ed6ab4c6e2289a4d7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60b7e67369583f18d56633483204d326449c0f7456afe4b4fd1e7134eff438cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hdz6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:13Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.056749 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2p6r8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2223aac-784e-4653-8939-fcbd18c70ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81fbec7b59dcc9c80a97b122e2b0e738fbbfb3eafca1bf9989fe743f28573191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1dc0fffc41810cdb9a5eeb53b19f6a23d70a8133c6e12b19df575f86a55d18\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1dc0fffc41810cdb9a5eeb53b19f6a23d70a8133c6e12b19df575f86a55d18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab60600011f08831d514dad04b97fb6b587736b18b55b1bff9a33143b9a92997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab60600011f08831d514dad04b97fb6b587736b18b55b1bff9a33143b9a92997\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2709a93c305db448fb509fbbdf606c297b26f1ae08e6b9b05933c155f59416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff2709a93c305db448fb509fbbdf606c297b26f1ae08e6b9b05933c155f59416\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88f87
08217fbcbf532b977d30ab903955722d04a00ba29ded44ce09610140e27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88f8708217fbcbf532b977d30ab903955722d04a00ba29ded44ce09610140e27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5844c48078cc7d6868f4ff81ac1a2bb878892529b11823ecabd49fad4aed60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e5844c48078cc7d6868f4ff81ac1a2bb878892529b11823ecabd49fad4aed60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:04Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2e02e66890bca8171c7112c74521a43c3458f07890228426f04c2bdfad4599\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d2e02e66890bca8171c7112c74521a43c3458f07890228426f04c2bdfad4599\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2p6r8\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:13Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.066067 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vmmdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61051313-b754-4528-ade6-ffacbebafb8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a321f1ea1e9a558494aa66641fd251a100e0bdceddf5b2034bfa067c23555138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsss4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14a151ee487ef6c2e5141ec5a25b8b7e468c224b262fd09538db0e939b8cf95a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsss4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vmmdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-12T13:07:13Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.075391 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbqm4\" (UniqueName: \"kubernetes.io/projected/5066d8fa-2cee-4764-a817-b819d3876638-kube-api-access-fbqm4\") pod \"network-metrics-daemon-jw27h\" (UID: \"5066d8fa-2cee-4764-a817-b819d3876638\") " pod="openshift-multus/network-metrics-daemon-jw27h" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.075463 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5066d8fa-2cee-4764-a817-b819d3876638-metrics-certs\") pod \"network-metrics-daemon-jw27h\" (UID: \"5066d8fa-2cee-4764-a817-b819d3876638\") " pod="openshift-multus/network-metrics-daemon-jw27h" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.080398 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35b1ac8c-9d11-4c54-98ab-fa848030f05e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1088ffa1a5bf02ca8606518a6f8c9cbeba544651dfafbb34e8860c2a12ffc1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c98177e2b081aadb6fd03620e308bb5d9ff403f1498eb875f7cf6d836dd23aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cea93cd026e7a60c22105833d2c3ada192fc16d45f46e5c9ce2652e94f92fab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c811167080fb15b5c19b8b57f76f4b8c5b2ed87d43d1b320ad024683ab58b65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14411e27d1e7de0627ca0d6f0ecbca70787ef8e9311ff3ffbb923da942e47955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://200ede5d7f69bb74d8e7d1b5081850d73057f7aef07049cab7a4dd1382de0cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://200ede5d7f69bb74d8e7d1b5081850d73057f7aef07049cab7a4dd1382de0cfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04470dc724661e24dc43e182f9c5dc106623e8dfb269280e6dc0fc0710f6a4a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04470dc724661e24dc43e182f9c5dc106623e8dfb269280e6dc0fc0710f6a4a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da31efcbced890b1046b1f058c1c00e4d2788162749c1da32d87c8b59360aa58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da31efcbced890b1046b1f058c1c00e4d2788162749c1da32d87c8b59360aa58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:13Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.089333 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fb543f1489aa79642944188788308013ed9b6bacb720a3ee689b376cbc6a33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:13Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.098293 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e120eaa6bd8e36a0bc509f7877252fbf4b0cebb89222dd193f75502e472fa7af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f05ca3c8a1887284f1162c44d1b917ad955eb8d77b816e830caddffdf0430383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:13Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.107355 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:13Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.115565 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:13Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.144944 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.144979 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.144989 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.145004 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.145015 4580 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:13Z","lastTransitionTime":"2026-01-12T13:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.176093 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5066d8fa-2cee-4764-a817-b819d3876638-metrics-certs\") pod \"network-metrics-daemon-jw27h\" (UID: \"5066d8fa-2cee-4764-a817-b819d3876638\") " pod="openshift-multus/network-metrics-daemon-jw27h" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.176157 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbqm4\" (UniqueName: \"kubernetes.io/projected/5066d8fa-2cee-4764-a817-b819d3876638-kube-api-access-fbqm4\") pod \"network-metrics-daemon-jw27h\" (UID: \"5066d8fa-2cee-4764-a817-b819d3876638\") " pod="openshift-multus/network-metrics-daemon-jw27h" Jan 12 13:07:13 crc kubenswrapper[4580]: E0112 13:07:13.176287 4580 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 12 13:07:13 crc kubenswrapper[4580]: E0112 13:07:13.176354 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5066d8fa-2cee-4764-a817-b819d3876638-metrics-certs podName:5066d8fa-2cee-4764-a817-b819d3876638 nodeName:}" failed. No retries permitted until 2026-01-12 13:07:13.676336365 +0000 UTC m=+32.720555065 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5066d8fa-2cee-4764-a817-b819d3876638-metrics-certs") pod "network-metrics-daemon-jw27h" (UID: "5066d8fa-2cee-4764-a817-b819d3876638") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.190235 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbqm4\" (UniqueName: \"kubernetes.io/projected/5066d8fa-2cee-4764-a817-b819d3876638-kube-api-access-fbqm4\") pod \"network-metrics-daemon-jw27h\" (UID: \"5066d8fa-2cee-4764-a817-b819d3876638\") " pod="openshift-multus/network-metrics-daemon-jw27h" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.247469 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.247534 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.247565 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.247577 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.247588 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:13Z","lastTransitionTime":"2026-01-12T13:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.280711 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.280791 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:07:13 crc kubenswrapper[4580]: E0112 13:07:13.280855 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 12 13:07:13 crc kubenswrapper[4580]: E0112 13:07:13.280928 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.348900 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.349261 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.349331 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.349403 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.349467 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:13Z","lastTransitionTime":"2026-01-12T13:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.451249 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.451280 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.451290 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.451304 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.451313 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:13Z","lastTransitionTime":"2026-01-12T13:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.553047 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.553077 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.553086 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.553099 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.553126 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:13Z","lastTransitionTime":"2026-01-12T13:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.656988 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.657019 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.657028 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.657042 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.657053 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:13Z","lastTransitionTime":"2026-01-12T13:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.680717 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5066d8fa-2cee-4764-a817-b819d3876638-metrics-certs\") pod \"network-metrics-daemon-jw27h\" (UID: \"5066d8fa-2cee-4764-a817-b819d3876638\") " pod="openshift-multus/network-metrics-daemon-jw27h" Jan 12 13:07:13 crc kubenswrapper[4580]: E0112 13:07:13.680864 4580 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 12 13:07:13 crc kubenswrapper[4580]: E0112 13:07:13.680915 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5066d8fa-2cee-4764-a817-b819d3876638-metrics-certs podName:5066d8fa-2cee-4764-a817-b819d3876638 nodeName:}" failed. No retries permitted until 2026-01-12 13:07:14.680900271 +0000 UTC m=+33.725118962 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5066d8fa-2cee-4764-a817-b819d3876638-metrics-certs") pod "network-metrics-daemon-jw27h" (UID: "5066d8fa-2cee-4764-a817-b819d3876638") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.755065 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.758670 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.758701 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.758711 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.758725 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.758734 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:13Z","lastTransitionTime":"2026-01-12T13:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.771544 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35b1ac8c-9d11-4c54-98ab-fa848030f05e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1088ffa1a5bf02ca8606518a6f8c9cbeba544651dfafbb34e8860c2a12ffc1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c98177e2b081aadb6fd03620e308bb5d9ff403f1498eb875f7cf6d836dd23aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cea93cd026e7a60c22105833d2c3ada192fc16d45f46e5c9ce2652e94f92fab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c811167080fb15b5c19b8b57f76f4b8c5b2ed87d43d1b320ad024683ab58b65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14411e27d1e7de0627ca0d6f0ecbca70787ef8e9311ff3ffbb923da942e47955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://200ede5d7f69bb74d8e7d1b5081850d73057f7aef07049cab7a4dd1382de0cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://200ede5d7f69bb74d8e7d1b5081850d73057f7aef07049cab7a4dd1382de0cfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04470dc724661e24dc43e182f9c5dc106623e8dfb269280e6dc0fc0710f6a4a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04470dc724661e24dc43e182f9c5dc106623e8dfb269280e6dc0fc0710f6a4a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da31efcbced890b1046b1f058c1c00e4d2788162749c1da32d87c8b59360aa58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da31efcbced890b1046b1f058c1c00e4d2788162749c1da32d87c8b59360aa58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-12T13:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:13Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.782816 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fb543f1489aa79642944188788308013ed9b6bacb720a3ee689b376cbc6a33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:13Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.791898 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e120eaa6bd8e36a0bc509f7877252fbf4b0cebb89222dd193f75502e472fa7af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://f05ca3c8a1887284f1162c44d1b917ad955eb8d77b816e830caddffdf0430383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:13Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.800590 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:13Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.816349 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:13Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.831987 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8ch98" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f20fb33-a98a-4b04-81b9-5ea16ae9f57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://643e92b14688d35a567c7351e9231a8855ec7d9704cc97466c2d901c4525108a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nbmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8ch98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:13Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.849017 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jw27h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5066d8fa-2cee-4764-a817-b819d3876638\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbqm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbqm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:07:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jw27h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:13Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:13 crc 
kubenswrapper[4580]: I0112 13:07:13.860088 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thp2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0adac83c-1303-404f-85a1-c7b477da2226\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a871f86fe29e275615cf2f7f0130151c5ed56d410a0f18f5267adf08be33f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c
fhs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:07:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thp2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:13Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.861443 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.861472 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.861481 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.861494 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.861503 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:13Z","lastTransitionTime":"2026-01-12T13:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.870779 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9730289-8e50-4a9a-b474-db6c268d5a30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2262814ad3b77a7aecef6dc39226a540c7d7839576606e11c4765c858e81834\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80ca0769a1431fd4c134322feb11db7e54dd85e8f6b18a0ea43da48fe9b05005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c620e4b41d6183e427d9b95acc0e6e20f24998d210c706d93d0e8b08def41b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05c5ad3ad752dde0d33f89e89540f22790aa2905185c704d407fe605655c8e28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0c7ac25add51f8a9be790b9d47bc39155d83c4da0f3b241897d1395686feb68\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-12T13:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0112 13:06:53.362253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0112 13:06:53.363131 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2861103618/tls.crt::/tmp/serving-cert-2861103618/tls.key\\\\\\\"\\\\nI0112 13:06:58.635258 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0112 13:06:58.636943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0112 13:06:58.636960 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0112 13:06:58.636978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0112 13:06:58.636983 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0112 13:06:58.642885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0112 13:06:58.642904 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0112 13:06:58.642919 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642925 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0112 13:06:58.642931 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0112 13:06:58.642934 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0112 13:06:58.642937 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0112 13:06:58.645379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eeac0b697ceba82e51d043f12dcf4c6f0028990416b1ee40c5181232d962192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:13Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.880249 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14cae238-29c1-4657-b3f0-6a834484f48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b1b813e14b2e613be951c247a67eb9b5b29604c639ec2c8a26c652911e0a342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc8b55ba464a72a72e6361e6847c4e8c8b27f317e8eba5d95923fbaf62589880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://259d2e205fd4a46e432a91b0e09646a58b44d6da55b06c6d4ac87010c85babc4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00bb60e0955774504f186a916e89495432d2ea6a6b01cadbbe0cc6871383a030\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:13Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.889292 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:13Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.899774 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nnz5s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f39bcc-5a25-4746-988b-2251fd1be8c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56aa8b2b49ab1c35203cc85f8e7cd333d538b5739be0e36db8a3fa8263c079ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m82m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nnz5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:13Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.913167 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd4e0810-eddb-47f5-a7dc-beed7b545112\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fac5585e690495e9f154b99e6a05f94dd617a57d0826867644b56df00697b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57fdd89443f292661ae2a8f73016f4a7f2889c08ffebd55d67ada2590b4344db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc26f2fe9c241fc3ede61426abd140792056fe45e03192531431303ac9669685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://381c313bb77deef21772fc32104aec4c0325e3493c641e2bf615bd897e58c71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ac8df759fbebae467ffd8c178ca19221cefd5f3c1aa999cd23e5d1e53a6187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b37c3b2535deee762ef305825de0a884e9088e57a34910ad2fcdaeb2d49d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68af564f5ecc4ca0c06683f7ec46ae5ffc5e2c4a9def47ed4048db3ba923f575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68af564f5ecc4ca0c06683f7ec46ae5ffc5e2c4a9def47ed4048db3ba923f575\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-12T13:07:08Z\\\",\\\"message\\\":\\\"nkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, 
failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:08Z is after 2025-08-24T17:21:41Z]\\\\nI0112 13:07:08.093732 5988 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0112 13:07:08.093731 5988 services_controller.go:443] Built service openshift-kube-apiserver/apiserver LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.93\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hn77p_openshift-ovn-kubernetes(fd4e0810-eddb-47f5-a7dc-beed7b545112)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00ff7f6b5ad3d1798e88f127c9bf71095fcbdfcf8f4338afa385717f1564ebf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ea8f8c492e0c30d17
1b9b05aa00966402c80f973de31557a1e13e16eb0c447b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hn77p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:13Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.922191 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a82c47afb3ec7afc7fa35ff0e1e85e288f9e1a908459024005a16c0c8f3b0050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-12T13:07:13Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.930044 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3accce5d840e81a67e212ff934059ad73525c6ff3c73ed6ab4c6e2289a4d7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60b7e67369583f18d56633483204d326449c0f7456afe4b4fd1e7134eff438cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hdz6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:13Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.944699 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2p6r8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2223aac-784e-4653-8939-fcbd18c70ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81fbec7b59dcc9c80a97b122e2b0e738fbbfb3eafca1bf9989fe743f28573191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1dc0fffc41810cdb9a5eeb53b19f6a23d70a8133c6e12b19df575f86a55d18\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1dc0fffc41810cdb9a5eeb53b19f6a23d70a8133c6e12b19df575f86a55d18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab60600011f08831d514dad04b97fb6b587736b18b55b1bff9a33143b9a92997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab60600011f08831d514dad04b97fb6b587736b18b55b1bff9a33143b9a92997\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2709a93c305db448fb509fbbdf606c297b26f1ae08e6b9b05933c155f59416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff2709a93c305db448fb509fbbdf606c297b26f1ae08e6b9b05933c155f59416\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88f87
08217fbcbf532b977d30ab903955722d04a00ba29ded44ce09610140e27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88f8708217fbcbf532b977d30ab903955722d04a00ba29ded44ce09610140e27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5844c48078cc7d6868f4ff81ac1a2bb878892529b11823ecabd49fad4aed60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e5844c48078cc7d6868f4ff81ac1a2bb878892529b11823ecabd49fad4aed60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:04Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2e02e66890bca8171c7112c74521a43c3458f07890228426f04c2bdfad4599\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d2e02e66890bca8171c7112c74521a43c3458f07890228426f04c2bdfad4599\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2p6r8\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:13Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.953266 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vmmdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61051313-b754-4528-ade6-ffacbebafb8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a321f1ea1e9a558494aa66641fd251a100e0bdceddf5b2034bfa067c23555138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsss4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14a151ee487ef6c2e5141ec5a25b8b7e468c224b262fd09538db0e939b8cf95a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsss4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vmmdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-12T13:07:13Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.963970 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.963998 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.964007 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.964021 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:13 crc kubenswrapper[4580]: I0112 13:07:13.964029 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:13Z","lastTransitionTime":"2026-01-12T13:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:14 crc kubenswrapper[4580]: I0112 13:07:14.066897 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:14 crc kubenswrapper[4580]: I0112 13:07:14.067040 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:14 crc kubenswrapper[4580]: I0112 13:07:14.067133 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:14 crc kubenswrapper[4580]: I0112 13:07:14.067206 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:14 crc kubenswrapper[4580]: I0112 13:07:14.067265 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:14Z","lastTransitionTime":"2026-01-12T13:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:14 crc kubenswrapper[4580]: I0112 13:07:14.169341 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:14 crc kubenswrapper[4580]: I0112 13:07:14.169380 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:14 crc kubenswrapper[4580]: I0112 13:07:14.169389 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:14 crc kubenswrapper[4580]: I0112 13:07:14.169404 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:14 crc kubenswrapper[4580]: I0112 13:07:14.169415 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:14Z","lastTransitionTime":"2026-01-12T13:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:14 crc kubenswrapper[4580]: I0112 13:07:14.271747 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:14 crc kubenswrapper[4580]: I0112 13:07:14.271801 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:14 crc kubenswrapper[4580]: I0112 13:07:14.271814 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:14 crc kubenswrapper[4580]: I0112 13:07:14.271836 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:14 crc kubenswrapper[4580]: I0112 13:07:14.271853 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:14Z","lastTransitionTime":"2026-01-12T13:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:14 crc kubenswrapper[4580]: I0112 13:07:14.281396 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 12 13:07:14 crc kubenswrapper[4580]: I0112 13:07:14.281407 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jw27h" Jan 12 13:07:14 crc kubenswrapper[4580]: E0112 13:07:14.281613 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jw27h" podUID="5066d8fa-2cee-4764-a817-b819d3876638" Jan 12 13:07:14 crc kubenswrapper[4580]: E0112 13:07:14.281504 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 12 13:07:14 crc kubenswrapper[4580]: I0112 13:07:14.374122 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:14 crc kubenswrapper[4580]: I0112 13:07:14.374154 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:14 crc kubenswrapper[4580]: I0112 13:07:14.374165 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:14 crc kubenswrapper[4580]: I0112 13:07:14.374178 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:14 crc kubenswrapper[4580]: I0112 13:07:14.374191 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:14Z","lastTransitionTime":"2026-01-12T13:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:14 crc kubenswrapper[4580]: I0112 13:07:14.476576 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:14 crc kubenswrapper[4580]: I0112 13:07:14.476648 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:14 crc kubenswrapper[4580]: I0112 13:07:14.476662 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:14 crc kubenswrapper[4580]: I0112 13:07:14.476683 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:14 crc kubenswrapper[4580]: I0112 13:07:14.476694 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:14Z","lastTransitionTime":"2026-01-12T13:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:14 crc kubenswrapper[4580]: I0112 13:07:14.578445 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:14 crc kubenswrapper[4580]: I0112 13:07:14.578545 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:14 crc kubenswrapper[4580]: I0112 13:07:14.578639 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:14 crc kubenswrapper[4580]: I0112 13:07:14.578707 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:14 crc kubenswrapper[4580]: I0112 13:07:14.578768 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:14Z","lastTransitionTime":"2026-01-12T13:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:14 crc kubenswrapper[4580]: I0112 13:07:14.681933 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:14 crc kubenswrapper[4580]: I0112 13:07:14.682401 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:14 crc kubenswrapper[4580]: I0112 13:07:14.682418 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:14 crc kubenswrapper[4580]: I0112 13:07:14.682662 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:14 crc kubenswrapper[4580]: I0112 13:07:14.682696 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:14Z","lastTransitionTime":"2026-01-12T13:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:14 crc kubenswrapper[4580]: I0112 13:07:14.687515 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5066d8fa-2cee-4764-a817-b819d3876638-metrics-certs\") pod \"network-metrics-daemon-jw27h\" (UID: \"5066d8fa-2cee-4764-a817-b819d3876638\") " pod="openshift-multus/network-metrics-daemon-jw27h" Jan 12 13:07:14 crc kubenswrapper[4580]: E0112 13:07:14.687661 4580 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 12 13:07:14 crc kubenswrapper[4580]: E0112 13:07:14.687732 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5066d8fa-2cee-4764-a817-b819d3876638-metrics-certs podName:5066d8fa-2cee-4764-a817-b819d3876638 nodeName:}" failed. No retries permitted until 2026-01-12 13:07:16.687713153 +0000 UTC m=+35.731931853 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5066d8fa-2cee-4764-a817-b819d3876638-metrics-certs") pod "network-metrics-daemon-jw27h" (UID: "5066d8fa-2cee-4764-a817-b819d3876638") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 12 13:07:14 crc kubenswrapper[4580]: I0112 13:07:14.787608 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:14 crc kubenswrapper[4580]: I0112 13:07:14.787641 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:14 crc kubenswrapper[4580]: I0112 13:07:14.787652 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:14 crc kubenswrapper[4580]: I0112 13:07:14.787664 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:14 crc kubenswrapper[4580]: I0112 13:07:14.787674 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:14Z","lastTransitionTime":"2026-01-12T13:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:14 crc kubenswrapper[4580]: I0112 13:07:14.889274 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:14 crc kubenswrapper[4580]: I0112 13:07:14.889309 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:14 crc kubenswrapper[4580]: I0112 13:07:14.889319 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:14 crc kubenswrapper[4580]: I0112 13:07:14.889334 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:14 crc kubenswrapper[4580]: I0112 13:07:14.889344 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:14Z","lastTransitionTime":"2026-01-12T13:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:14 crc kubenswrapper[4580]: I0112 13:07:14.989672 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 12 13:07:14 crc kubenswrapper[4580]: E0112 13:07:14.989887 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-12 13:07:30.989866356 +0000 UTC m=+50.034085056 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:07:14 crc kubenswrapper[4580]: I0112 13:07:14.990098 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:07:14 crc kubenswrapper[4580]: I0112 13:07:14.990170 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:07:14 crc kubenswrapper[4580]: I0112 13:07:14.990220 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 12 13:07:14 crc kubenswrapper[4580]: E0112 13:07:14.990268 4580 configmap.go:193] Couldn't get configMap 
openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 12 13:07:14 crc kubenswrapper[4580]: I0112 13:07:14.990290 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 12 13:07:14 crc kubenswrapper[4580]: E0112 13:07:14.990330 4580 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 12 13:07:14 crc kubenswrapper[4580]: E0112 13:07:14.990346 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-12 13:07:30.990327485 +0000 UTC m=+50.034546176 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 12 13:07:14 crc kubenswrapper[4580]: E0112 13:07:14.990378 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-12 13:07:30.990362522 +0000 UTC m=+50.034581222 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 12 13:07:14 crc kubenswrapper[4580]: E0112 13:07:14.990434 4580 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 12 13:07:14 crc kubenswrapper[4580]: E0112 13:07:14.990452 4580 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 12 13:07:14 crc kubenswrapper[4580]: E0112 13:07:14.990467 4580 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 12 13:07:14 crc kubenswrapper[4580]: E0112 13:07:14.990508 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-12 13:07:30.990498257 +0000 UTC m=+50.034716947 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 12 13:07:14 crc kubenswrapper[4580]: E0112 13:07:14.990527 4580 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 12 13:07:14 crc kubenswrapper[4580]: E0112 13:07:14.990554 4580 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 12 13:07:14 crc kubenswrapper[4580]: E0112 13:07:14.990584 4580 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 12 13:07:14 crc kubenswrapper[4580]: E0112 13:07:14.990664 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-12 13:07:30.990642518 +0000 UTC m=+50.034861208 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 12 13:07:14 crc kubenswrapper[4580]: I0112 13:07:14.991589 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:14 crc kubenswrapper[4580]: I0112 13:07:14.991616 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:14 crc kubenswrapper[4580]: I0112 13:07:14.991625 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:14 crc kubenswrapper[4580]: I0112 13:07:14.991636 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:14 crc kubenswrapper[4580]: I0112 13:07:14.991644 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:14Z","lastTransitionTime":"2026-01-12T13:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:15 crc kubenswrapper[4580]: I0112 13:07:15.093862 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:15 crc kubenswrapper[4580]: I0112 13:07:15.093891 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:15 crc kubenswrapper[4580]: I0112 13:07:15.093899 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:15 crc kubenswrapper[4580]: I0112 13:07:15.093912 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:15 crc kubenswrapper[4580]: I0112 13:07:15.093920 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:15Z","lastTransitionTime":"2026-01-12T13:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:15 crc kubenswrapper[4580]: I0112 13:07:15.197419 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:15 crc kubenswrapper[4580]: I0112 13:07:15.197458 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:15 crc kubenswrapper[4580]: I0112 13:07:15.197469 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:15 crc kubenswrapper[4580]: I0112 13:07:15.197487 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:15 crc kubenswrapper[4580]: I0112 13:07:15.197497 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:15Z","lastTransitionTime":"2026-01-12T13:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:15 crc kubenswrapper[4580]: I0112 13:07:15.281659 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 12 13:07:15 crc kubenswrapper[4580]: I0112 13:07:15.281677 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:07:15 crc kubenswrapper[4580]: E0112 13:07:15.281956 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 12 13:07:15 crc kubenswrapper[4580]: E0112 13:07:15.282000 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 12 13:07:15 crc kubenswrapper[4580]: I0112 13:07:15.298864 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:15 crc kubenswrapper[4580]: I0112 13:07:15.298892 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:15 crc kubenswrapper[4580]: I0112 13:07:15.298902 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:15 crc kubenswrapper[4580]: I0112 13:07:15.298914 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:15 crc kubenswrapper[4580]: I0112 13:07:15.298924 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:15Z","lastTransitionTime":"2026-01-12T13:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:15 crc kubenswrapper[4580]: I0112 13:07:15.400704 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:15 crc kubenswrapper[4580]: I0112 13:07:15.400743 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:15 crc kubenswrapper[4580]: I0112 13:07:15.400755 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:15 crc kubenswrapper[4580]: I0112 13:07:15.400768 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:15 crc kubenswrapper[4580]: I0112 13:07:15.400781 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:15Z","lastTransitionTime":"2026-01-12T13:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:15 crc kubenswrapper[4580]: I0112 13:07:15.502364 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:15 crc kubenswrapper[4580]: I0112 13:07:15.502402 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:15 crc kubenswrapper[4580]: I0112 13:07:15.502412 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:15 crc kubenswrapper[4580]: I0112 13:07:15.502426 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:15 crc kubenswrapper[4580]: I0112 13:07:15.502435 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:15Z","lastTransitionTime":"2026-01-12T13:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:15 crc kubenswrapper[4580]: I0112 13:07:15.604758 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:15 crc kubenswrapper[4580]: I0112 13:07:15.604818 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:15 crc kubenswrapper[4580]: I0112 13:07:15.604827 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:15 crc kubenswrapper[4580]: I0112 13:07:15.604839 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:15 crc kubenswrapper[4580]: I0112 13:07:15.604851 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:15Z","lastTransitionTime":"2026-01-12T13:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:15 crc kubenswrapper[4580]: I0112 13:07:15.706997 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:15 crc kubenswrapper[4580]: I0112 13:07:15.707054 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:15 crc kubenswrapper[4580]: I0112 13:07:15.707065 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:15 crc kubenswrapper[4580]: I0112 13:07:15.707092 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:15 crc kubenswrapper[4580]: I0112 13:07:15.707138 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:15Z","lastTransitionTime":"2026-01-12T13:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:15 crc kubenswrapper[4580]: I0112 13:07:15.808762 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:15 crc kubenswrapper[4580]: I0112 13:07:15.808788 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:15 crc kubenswrapper[4580]: I0112 13:07:15.808796 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:15 crc kubenswrapper[4580]: I0112 13:07:15.808808 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:15 crc kubenswrapper[4580]: I0112 13:07:15.808818 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:15Z","lastTransitionTime":"2026-01-12T13:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:15 crc kubenswrapper[4580]: I0112 13:07:15.910427 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:15 crc kubenswrapper[4580]: I0112 13:07:15.910465 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:15 crc kubenswrapper[4580]: I0112 13:07:15.910475 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:15 crc kubenswrapper[4580]: I0112 13:07:15.910488 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:15 crc kubenswrapper[4580]: I0112 13:07:15.910497 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:15Z","lastTransitionTime":"2026-01-12T13:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:16 crc kubenswrapper[4580]: I0112 13:07:16.012450 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:16 crc kubenswrapper[4580]: I0112 13:07:16.012490 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:16 crc kubenswrapper[4580]: I0112 13:07:16.012499 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:16 crc kubenswrapper[4580]: I0112 13:07:16.012512 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:16 crc kubenswrapper[4580]: I0112 13:07:16.012526 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:16Z","lastTransitionTime":"2026-01-12T13:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 12 13:07:16 crc kubenswrapper[4580]: I0112 13:07:16.114500 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 12 13:07:16 crc kubenswrapper[4580]: I0112 13:07:16.114534 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 12 13:07:16 crc kubenswrapper[4580]: I0112 13:07:16.114544 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 12 13:07:16 crc kubenswrapper[4580]: I0112 13:07:16.114556 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 12 13:07:16 crc kubenswrapper[4580]: I0112 13:07:16.114573 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:16Z","lastTransitionTime":"2026-01-12T13:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 12 13:07:16 crc kubenswrapper[4580]: I0112 13:07:16.216603 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 12 13:07:16 crc kubenswrapper[4580]: I0112 13:07:16.216638 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 12 13:07:16 crc kubenswrapper[4580]: I0112 13:07:16.216648 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 12 13:07:16 crc kubenswrapper[4580]: I0112 13:07:16.216660 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 12 13:07:16 crc kubenswrapper[4580]: I0112 13:07:16.216669 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:16Z","lastTransitionTime":"2026-01-12T13:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 12 13:07:16 crc kubenswrapper[4580]: I0112 13:07:16.281335 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jw27h"
Jan 12 13:07:16 crc kubenswrapper[4580]: I0112 13:07:16.281366 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 12 13:07:16 crc kubenswrapper[4580]: E0112 13:07:16.281474 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jw27h" podUID="5066d8fa-2cee-4764-a817-b819d3876638"
Jan 12 13:07:16 crc kubenswrapper[4580]: E0112 13:07:16.281619 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 12 13:07:16 crc kubenswrapper[4580]: I0112 13:07:16.318416 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 12 13:07:16 crc kubenswrapper[4580]: I0112 13:07:16.318540 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 12 13:07:16 crc kubenswrapper[4580]: I0112 13:07:16.318645 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 12 13:07:16 crc kubenswrapper[4580]: I0112 13:07:16.318728 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 12 13:07:16 crc kubenswrapper[4580]: I0112 13:07:16.318791 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:16Z","lastTransitionTime":"2026-01-12T13:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 12 13:07:16 crc kubenswrapper[4580]: I0112 13:07:16.420950 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 12 13:07:16 crc kubenswrapper[4580]: I0112 13:07:16.420976 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 12 13:07:16 crc kubenswrapper[4580]: I0112 13:07:16.420984 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 12 13:07:16 crc kubenswrapper[4580]: I0112 13:07:16.420998 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 12 13:07:16 crc kubenswrapper[4580]: I0112 13:07:16.421011 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:16Z","lastTransitionTime":"2026-01-12T13:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 12 13:07:16 crc kubenswrapper[4580]: I0112 13:07:16.522789 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 12 13:07:16 crc kubenswrapper[4580]: I0112 13:07:16.522817 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 12 13:07:16 crc kubenswrapper[4580]: I0112 13:07:16.522827 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 12 13:07:16 crc kubenswrapper[4580]: I0112 13:07:16.522838 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 12 13:07:16 crc kubenswrapper[4580]: I0112 13:07:16.522846 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:16Z","lastTransitionTime":"2026-01-12T13:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 12 13:07:16 crc kubenswrapper[4580]: I0112 13:07:16.624062 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 12 13:07:16 crc kubenswrapper[4580]: I0112 13:07:16.624099 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 12 13:07:16 crc kubenswrapper[4580]: I0112 13:07:16.624131 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 12 13:07:16 crc kubenswrapper[4580]: I0112 13:07:16.624146 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 12 13:07:16 crc kubenswrapper[4580]: I0112 13:07:16.624156 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:16Z","lastTransitionTime":"2026-01-12T13:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 12 13:07:16 crc kubenswrapper[4580]: I0112 13:07:16.703445 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5066d8fa-2cee-4764-a817-b819d3876638-metrics-certs\") pod \"network-metrics-daemon-jw27h\" (UID: \"5066d8fa-2cee-4764-a817-b819d3876638\") " pod="openshift-multus/network-metrics-daemon-jw27h"
Jan 12 13:07:16 crc kubenswrapper[4580]: E0112 13:07:16.703645 4580 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 12 13:07:16 crc kubenswrapper[4580]: E0112 13:07:16.703722 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5066d8fa-2cee-4764-a817-b819d3876638-metrics-certs podName:5066d8fa-2cee-4764-a817-b819d3876638 nodeName:}" failed. No retries permitted until 2026-01-12 13:07:20.703698972 +0000 UTC m=+39.747917653 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5066d8fa-2cee-4764-a817-b819d3876638-metrics-certs") pod "network-metrics-daemon-jw27h" (UID: "5066d8fa-2cee-4764-a817-b819d3876638") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 12 13:07:16 crc kubenswrapper[4580]: I0112 13:07:16.725879 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 12 13:07:16 crc kubenswrapper[4580]: I0112 13:07:16.725917 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 12 13:07:16 crc kubenswrapper[4580]: I0112 13:07:16.725931 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 12 13:07:16 crc kubenswrapper[4580]: I0112 13:07:16.725951 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 12 13:07:16 crc kubenswrapper[4580]: I0112 13:07:16.725960 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:16Z","lastTransitionTime":"2026-01-12T13:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 12 13:07:16 crc kubenswrapper[4580]: I0112 13:07:16.827133 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 12 13:07:16 crc kubenswrapper[4580]: I0112 13:07:16.827164 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 12 13:07:16 crc kubenswrapper[4580]: I0112 13:07:16.827175 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 12 13:07:16 crc kubenswrapper[4580]: I0112 13:07:16.827186 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 12 13:07:16 crc kubenswrapper[4580]: I0112 13:07:16.827200 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:16Z","lastTransitionTime":"2026-01-12T13:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 12 13:07:16 crc kubenswrapper[4580]: I0112 13:07:16.929175 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 12 13:07:16 crc kubenswrapper[4580]: I0112 13:07:16.929214 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 12 13:07:16 crc kubenswrapper[4580]: I0112 13:07:16.929225 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 12 13:07:16 crc kubenswrapper[4580]: I0112 13:07:16.929238 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 12 13:07:16 crc kubenswrapper[4580]: I0112 13:07:16.929247 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:16Z","lastTransitionTime":"2026-01-12T13:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 12 13:07:17 crc kubenswrapper[4580]: I0112 13:07:17.031690 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 12 13:07:17 crc kubenswrapper[4580]: I0112 13:07:17.031726 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 12 13:07:17 crc kubenswrapper[4580]: I0112 13:07:17.031736 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 12 13:07:17 crc kubenswrapper[4580]: I0112 13:07:17.031752 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 12 13:07:17 crc kubenswrapper[4580]: I0112 13:07:17.031764 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:17Z","lastTransitionTime":"2026-01-12T13:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 12 13:07:17 crc kubenswrapper[4580]: I0112 13:07:17.133895 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 12 13:07:17 crc kubenswrapper[4580]: I0112 13:07:17.133927 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 12 13:07:17 crc kubenswrapper[4580]: I0112 13:07:17.133936 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 12 13:07:17 crc kubenswrapper[4580]: I0112 13:07:17.133953 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 12 13:07:17 crc kubenswrapper[4580]: I0112 13:07:17.133963 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:17Z","lastTransitionTime":"2026-01-12T13:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 12 13:07:17 crc kubenswrapper[4580]: I0112 13:07:17.236056 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 12 13:07:17 crc kubenswrapper[4580]: I0112 13:07:17.236143 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 12 13:07:17 crc kubenswrapper[4580]: I0112 13:07:17.236160 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 12 13:07:17 crc kubenswrapper[4580]: I0112 13:07:17.236182 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 12 13:07:17 crc kubenswrapper[4580]: I0112 13:07:17.236197 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:17Z","lastTransitionTime":"2026-01-12T13:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 12 13:07:17 crc kubenswrapper[4580]: I0112 13:07:17.281326 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 12 13:07:17 crc kubenswrapper[4580]: I0112 13:07:17.281334 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 12 13:07:17 crc kubenswrapper[4580]: E0112 13:07:17.281430 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 12 13:07:17 crc kubenswrapper[4580]: E0112 13:07:17.281552 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 12 13:07:17 crc kubenswrapper[4580]: I0112 13:07:17.338286 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 12 13:07:17 crc kubenswrapper[4580]: I0112 13:07:17.338318 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 12 13:07:17 crc kubenswrapper[4580]: I0112 13:07:17.338330 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 12 13:07:17 crc kubenswrapper[4580]: I0112 13:07:17.338343 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 12 13:07:17 crc kubenswrapper[4580]: I0112 13:07:17.338353 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:17Z","lastTransitionTime":"2026-01-12T13:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 12 13:07:17 crc kubenswrapper[4580]: I0112 13:07:17.439804 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 12 13:07:17 crc kubenswrapper[4580]: I0112 13:07:17.439836 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 12 13:07:17 crc kubenswrapper[4580]: I0112 13:07:17.439845 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 12 13:07:17 crc kubenswrapper[4580]: I0112 13:07:17.439855 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 12 13:07:17 crc kubenswrapper[4580]: I0112 13:07:17.439866 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:17Z","lastTransitionTime":"2026-01-12T13:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 12 13:07:17 crc kubenswrapper[4580]: I0112 13:07:17.541453 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 12 13:07:17 crc kubenswrapper[4580]: I0112 13:07:17.541484 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 12 13:07:17 crc kubenswrapper[4580]: I0112 13:07:17.541493 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 12 13:07:17 crc kubenswrapper[4580]: I0112 13:07:17.541508 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 12 13:07:17 crc kubenswrapper[4580]: I0112 13:07:17.541517 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:17Z","lastTransitionTime":"2026-01-12T13:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 12 13:07:17 crc kubenswrapper[4580]: I0112 13:07:17.643189 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 12 13:07:17 crc kubenswrapper[4580]: I0112 13:07:17.643234 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 12 13:07:17 crc kubenswrapper[4580]: I0112 13:07:17.643246 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 12 13:07:17 crc kubenswrapper[4580]: I0112 13:07:17.643263 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 12 13:07:17 crc kubenswrapper[4580]: I0112 13:07:17.643276 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:17Z","lastTransitionTime":"2026-01-12T13:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 12 13:07:17 crc kubenswrapper[4580]: I0112 13:07:17.744852 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 12 13:07:17 crc kubenswrapper[4580]: I0112 13:07:17.744891 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 12 13:07:17 crc kubenswrapper[4580]: I0112 13:07:17.744900 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 12 13:07:17 crc kubenswrapper[4580]: I0112 13:07:17.744915 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 12 13:07:17 crc kubenswrapper[4580]: I0112 13:07:17.744926 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:17Z","lastTransitionTime":"2026-01-12T13:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 12 13:07:17 crc kubenswrapper[4580]: I0112 13:07:17.846599 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 12 13:07:17 crc kubenswrapper[4580]: I0112 13:07:17.846636 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 12 13:07:17 crc kubenswrapper[4580]: I0112 13:07:17.846645 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 12 13:07:17 crc kubenswrapper[4580]: I0112 13:07:17.846656 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 12 13:07:17 crc kubenswrapper[4580]: I0112 13:07:17.846666 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:17Z","lastTransitionTime":"2026-01-12T13:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 12 13:07:17 crc kubenswrapper[4580]: I0112 13:07:17.947820 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 12 13:07:17 crc kubenswrapper[4580]: I0112 13:07:17.947850 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 12 13:07:17 crc kubenswrapper[4580]: I0112 13:07:17.947859 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 12 13:07:17 crc kubenswrapper[4580]: I0112 13:07:17.947870 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 12 13:07:17 crc kubenswrapper[4580]: I0112 13:07:17.947878 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:17Z","lastTransitionTime":"2026-01-12T13:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 12 13:07:18 crc kubenswrapper[4580]: I0112 13:07:18.049794 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 12 13:07:18 crc kubenswrapper[4580]: I0112 13:07:18.049830 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 12 13:07:18 crc kubenswrapper[4580]: I0112 13:07:18.049838 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 12 13:07:18 crc kubenswrapper[4580]: I0112 13:07:18.049849 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 12 13:07:18 crc kubenswrapper[4580]: I0112 13:07:18.049857 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:18Z","lastTransitionTime":"2026-01-12T13:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 12 13:07:18 crc kubenswrapper[4580]: I0112 13:07:18.151433 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 12 13:07:18 crc kubenswrapper[4580]: I0112 13:07:18.151463 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 12 13:07:18 crc kubenswrapper[4580]: I0112 13:07:18.151472 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 12 13:07:18 crc kubenswrapper[4580]: I0112 13:07:18.151509 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 12 13:07:18 crc kubenswrapper[4580]: I0112 13:07:18.151519 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:18Z","lastTransitionTime":"2026-01-12T13:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 12 13:07:18 crc kubenswrapper[4580]: I0112 13:07:18.252583 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 12 13:07:18 crc kubenswrapper[4580]: I0112 13:07:18.252609 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 12 13:07:18 crc kubenswrapper[4580]: I0112 13:07:18.252617 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 12 13:07:18 crc kubenswrapper[4580]: I0112 13:07:18.252627 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 12 13:07:18 crc kubenswrapper[4580]: I0112 13:07:18.252636 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:18Z","lastTransitionTime":"2026-01-12T13:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 12 13:07:18 crc kubenswrapper[4580]: I0112 13:07:18.281225 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jw27h"
Jan 12 13:07:18 crc kubenswrapper[4580]: I0112 13:07:18.281234 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 12 13:07:18 crc kubenswrapper[4580]: E0112 13:07:18.281325 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jw27h" podUID="5066d8fa-2cee-4764-a817-b819d3876638"
Jan 12 13:07:18 crc kubenswrapper[4580]: E0112 13:07:18.281415 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 12 13:07:18 crc kubenswrapper[4580]: I0112 13:07:18.354019 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 12 13:07:18 crc kubenswrapper[4580]: I0112 13:07:18.354048 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 12 13:07:18 crc kubenswrapper[4580]: I0112 13:07:18.354057 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 12 13:07:18 crc kubenswrapper[4580]: I0112 13:07:18.354068 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 12 13:07:18 crc kubenswrapper[4580]: I0112 13:07:18.354076 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:18Z","lastTransitionTime":"2026-01-12T13:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 12 13:07:18 crc kubenswrapper[4580]: I0112 13:07:18.455871 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 12 13:07:18 crc kubenswrapper[4580]: I0112 13:07:18.455901 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 12 13:07:18 crc kubenswrapper[4580]: I0112 13:07:18.455911 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 12 13:07:18 crc kubenswrapper[4580]: I0112 13:07:18.455929 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 12 13:07:18 crc kubenswrapper[4580]: I0112 13:07:18.455939 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:18Z","lastTransitionTime":"2026-01-12T13:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 12 13:07:18 crc kubenswrapper[4580]: I0112 13:07:18.557340 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 12 13:07:18 crc kubenswrapper[4580]: I0112 13:07:18.557373 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 12 13:07:18 crc kubenswrapper[4580]: I0112 13:07:18.557384 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 12 13:07:18 crc kubenswrapper[4580]: I0112 13:07:18.557396 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 12 13:07:18 crc kubenswrapper[4580]: I0112 13:07:18.557405 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:18Z","lastTransitionTime":"2026-01-12T13:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 12 13:07:18 crc kubenswrapper[4580]: I0112 13:07:18.658925 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 12 13:07:18 crc kubenswrapper[4580]: I0112 13:07:18.658947 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 12 13:07:18 crc kubenswrapper[4580]: I0112 13:07:18.658956 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 12 13:07:18 crc kubenswrapper[4580]: I0112 13:07:18.658966 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 12 13:07:18 crc kubenswrapper[4580]: I0112 13:07:18.658973 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:18Z","lastTransitionTime":"2026-01-12T13:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 12 13:07:18 crc kubenswrapper[4580]: I0112 13:07:18.761093 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 12 13:07:18 crc kubenswrapper[4580]: I0112 13:07:18.761178 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 12 13:07:18 crc kubenswrapper[4580]: I0112 13:07:18.761189 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 12 13:07:18 crc kubenswrapper[4580]: I0112 13:07:18.761212 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 12 13:07:18 crc kubenswrapper[4580]: I0112 13:07:18.761227 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:18Z","lastTransitionTime":"2026-01-12T13:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 12 13:07:18 crc kubenswrapper[4580]: I0112 13:07:18.863920 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 12 13:07:18 crc kubenswrapper[4580]: I0112 13:07:18.863952 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 12 13:07:18 crc kubenswrapper[4580]: I0112 13:07:18.863963 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 12 13:07:18 crc kubenswrapper[4580]: I0112 13:07:18.863976 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 12 13:07:18 crc kubenswrapper[4580]: I0112 13:07:18.863988 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:18Z","lastTransitionTime":"2026-01-12T13:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:18 crc kubenswrapper[4580]: I0112 13:07:18.965791 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:18 crc kubenswrapper[4580]: I0112 13:07:18.965814 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:18 crc kubenswrapper[4580]: I0112 13:07:18.965824 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:18 crc kubenswrapper[4580]: I0112 13:07:18.965834 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:18 crc kubenswrapper[4580]: I0112 13:07:18.965842 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:18Z","lastTransitionTime":"2026-01-12T13:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:19 crc kubenswrapper[4580]: I0112 13:07:19.066999 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:19 crc kubenswrapper[4580]: I0112 13:07:19.067030 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:19 crc kubenswrapper[4580]: I0112 13:07:19.067040 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:19 crc kubenswrapper[4580]: I0112 13:07:19.067051 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:19 crc kubenswrapper[4580]: I0112 13:07:19.067060 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:19Z","lastTransitionTime":"2026-01-12T13:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:19 crc kubenswrapper[4580]: I0112 13:07:19.169331 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:19 crc kubenswrapper[4580]: I0112 13:07:19.169353 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:19 crc kubenswrapper[4580]: I0112 13:07:19.169362 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:19 crc kubenswrapper[4580]: I0112 13:07:19.169373 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:19 crc kubenswrapper[4580]: I0112 13:07:19.169382 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:19Z","lastTransitionTime":"2026-01-12T13:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:19 crc kubenswrapper[4580]: I0112 13:07:19.270850 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:19 crc kubenswrapper[4580]: I0112 13:07:19.270871 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:19 crc kubenswrapper[4580]: I0112 13:07:19.270882 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:19 crc kubenswrapper[4580]: I0112 13:07:19.270893 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:19 crc kubenswrapper[4580]: I0112 13:07:19.270902 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:19Z","lastTransitionTime":"2026-01-12T13:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:19 crc kubenswrapper[4580]: I0112 13:07:19.281457 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:07:19 crc kubenswrapper[4580]: I0112 13:07:19.281520 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 12 13:07:19 crc kubenswrapper[4580]: E0112 13:07:19.281622 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 12 13:07:19 crc kubenswrapper[4580]: E0112 13:07:19.281815 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 12 13:07:19 crc kubenswrapper[4580]: I0112 13:07:19.373410 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:19 crc kubenswrapper[4580]: I0112 13:07:19.373466 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:19 crc kubenswrapper[4580]: I0112 13:07:19.373480 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:19 crc kubenswrapper[4580]: I0112 13:07:19.373503 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:19 crc kubenswrapper[4580]: I0112 13:07:19.373515 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:19Z","lastTransitionTime":"2026-01-12T13:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:19 crc kubenswrapper[4580]: I0112 13:07:19.476359 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:19 crc kubenswrapper[4580]: I0112 13:07:19.476391 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:19 crc kubenswrapper[4580]: I0112 13:07:19.476401 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:19 crc kubenswrapper[4580]: I0112 13:07:19.476413 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:19 crc kubenswrapper[4580]: I0112 13:07:19.476425 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:19Z","lastTransitionTime":"2026-01-12T13:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:19 crc kubenswrapper[4580]: I0112 13:07:19.578740 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:19 crc kubenswrapper[4580]: I0112 13:07:19.578779 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:19 crc kubenswrapper[4580]: I0112 13:07:19.578788 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:19 crc kubenswrapper[4580]: I0112 13:07:19.578829 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:19 crc kubenswrapper[4580]: I0112 13:07:19.578840 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:19Z","lastTransitionTime":"2026-01-12T13:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:19 crc kubenswrapper[4580]: I0112 13:07:19.680708 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:19 crc kubenswrapper[4580]: I0112 13:07:19.680750 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:19 crc kubenswrapper[4580]: I0112 13:07:19.680763 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:19 crc kubenswrapper[4580]: I0112 13:07:19.680783 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:19 crc kubenswrapper[4580]: I0112 13:07:19.680794 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:19Z","lastTransitionTime":"2026-01-12T13:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:19 crc kubenswrapper[4580]: I0112 13:07:19.783005 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:19 crc kubenswrapper[4580]: I0112 13:07:19.783039 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:19 crc kubenswrapper[4580]: I0112 13:07:19.783051 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:19 crc kubenswrapper[4580]: I0112 13:07:19.783063 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:19 crc kubenswrapper[4580]: I0112 13:07:19.783072 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:19Z","lastTransitionTime":"2026-01-12T13:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:19 crc kubenswrapper[4580]: I0112 13:07:19.884875 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:19 crc kubenswrapper[4580]: I0112 13:07:19.884904 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:19 crc kubenswrapper[4580]: I0112 13:07:19.884912 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:19 crc kubenswrapper[4580]: I0112 13:07:19.884928 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:19 crc kubenswrapper[4580]: I0112 13:07:19.884938 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:19Z","lastTransitionTime":"2026-01-12T13:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:19 crc kubenswrapper[4580]: I0112 13:07:19.987401 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:19 crc kubenswrapper[4580]: I0112 13:07:19.987439 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:19 crc kubenswrapper[4580]: I0112 13:07:19.987451 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:19 crc kubenswrapper[4580]: I0112 13:07:19.987467 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:19 crc kubenswrapper[4580]: I0112 13:07:19.987477 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:19Z","lastTransitionTime":"2026-01-12T13:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:20 crc kubenswrapper[4580]: I0112 13:07:20.089607 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:20 crc kubenswrapper[4580]: I0112 13:07:20.089648 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:20 crc kubenswrapper[4580]: I0112 13:07:20.089657 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:20 crc kubenswrapper[4580]: I0112 13:07:20.089669 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:20 crc kubenswrapper[4580]: I0112 13:07:20.089676 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:20Z","lastTransitionTime":"2026-01-12T13:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:20 crc kubenswrapper[4580]: I0112 13:07:20.192608 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:20 crc kubenswrapper[4580]: I0112 13:07:20.192645 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:20 crc kubenswrapper[4580]: I0112 13:07:20.192656 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:20 crc kubenswrapper[4580]: I0112 13:07:20.192669 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:20 crc kubenswrapper[4580]: I0112 13:07:20.192679 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:20Z","lastTransitionTime":"2026-01-12T13:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:20 crc kubenswrapper[4580]: I0112 13:07:20.281049 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jw27h" Jan 12 13:07:20 crc kubenswrapper[4580]: I0112 13:07:20.281167 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 12 13:07:20 crc kubenswrapper[4580]: E0112 13:07:20.281212 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jw27h" podUID="5066d8fa-2cee-4764-a817-b819d3876638" Jan 12 13:07:20 crc kubenswrapper[4580]: E0112 13:07:20.281310 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 12 13:07:20 crc kubenswrapper[4580]: I0112 13:07:20.295269 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:20 crc kubenswrapper[4580]: I0112 13:07:20.295304 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:20 crc kubenswrapper[4580]: I0112 13:07:20.295314 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:20 crc kubenswrapper[4580]: I0112 13:07:20.295327 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:20 crc kubenswrapper[4580]: I0112 13:07:20.295340 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:20Z","lastTransitionTime":"2026-01-12T13:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:20 crc kubenswrapper[4580]: I0112 13:07:20.397333 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:20 crc kubenswrapper[4580]: I0112 13:07:20.397370 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:20 crc kubenswrapper[4580]: I0112 13:07:20.397382 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:20 crc kubenswrapper[4580]: I0112 13:07:20.397398 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:20 crc kubenswrapper[4580]: I0112 13:07:20.397409 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:20Z","lastTransitionTime":"2026-01-12T13:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:20 crc kubenswrapper[4580]: I0112 13:07:20.499633 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:20 crc kubenswrapper[4580]: I0112 13:07:20.499681 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:20 crc kubenswrapper[4580]: I0112 13:07:20.499693 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:20 crc kubenswrapper[4580]: I0112 13:07:20.499715 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:20 crc kubenswrapper[4580]: I0112 13:07:20.499727 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:20Z","lastTransitionTime":"2026-01-12T13:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:20 crc kubenswrapper[4580]: I0112 13:07:20.601155 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:20 crc kubenswrapper[4580]: I0112 13:07:20.601192 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:20 crc kubenswrapper[4580]: I0112 13:07:20.601202 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:20 crc kubenswrapper[4580]: I0112 13:07:20.601213 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:20 crc kubenswrapper[4580]: I0112 13:07:20.601222 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:20Z","lastTransitionTime":"2026-01-12T13:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:20 crc kubenswrapper[4580]: I0112 13:07:20.702782 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:20 crc kubenswrapper[4580]: I0112 13:07:20.702816 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:20 crc kubenswrapper[4580]: I0112 13:07:20.702833 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:20 crc kubenswrapper[4580]: I0112 13:07:20.702850 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:20 crc kubenswrapper[4580]: I0112 13:07:20.702860 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:20Z","lastTransitionTime":"2026-01-12T13:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:20 crc kubenswrapper[4580]: I0112 13:07:20.732795 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5066d8fa-2cee-4764-a817-b819d3876638-metrics-certs\") pod \"network-metrics-daemon-jw27h\" (UID: \"5066d8fa-2cee-4764-a817-b819d3876638\") " pod="openshift-multus/network-metrics-daemon-jw27h" Jan 12 13:07:20 crc kubenswrapper[4580]: E0112 13:07:20.732944 4580 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 12 13:07:20 crc kubenswrapper[4580]: E0112 13:07:20.732997 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5066d8fa-2cee-4764-a817-b819d3876638-metrics-certs podName:5066d8fa-2cee-4764-a817-b819d3876638 nodeName:}" failed. No retries permitted until 2026-01-12 13:07:28.732979355 +0000 UTC m=+47.777198035 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5066d8fa-2cee-4764-a817-b819d3876638-metrics-certs") pod "network-metrics-daemon-jw27h" (UID: "5066d8fa-2cee-4764-a817-b819d3876638") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 12 13:07:20 crc kubenswrapper[4580]: I0112 13:07:20.804558 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:20 crc kubenswrapper[4580]: I0112 13:07:20.804607 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:20 crc kubenswrapper[4580]: I0112 13:07:20.804620 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:20 crc kubenswrapper[4580]: I0112 13:07:20.804633 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:20 crc kubenswrapper[4580]: I0112 13:07:20.804643 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:20Z","lastTransitionTime":"2026-01-12T13:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:20 crc kubenswrapper[4580]: I0112 13:07:20.906566 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:20 crc kubenswrapper[4580]: I0112 13:07:20.906620 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:20 crc kubenswrapper[4580]: I0112 13:07:20.906632 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:20 crc kubenswrapper[4580]: I0112 13:07:20.906647 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:20 crc kubenswrapper[4580]: I0112 13:07:20.906658 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:20Z","lastTransitionTime":"2026-01-12T13:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.008748 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.008785 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.008798 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.008811 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.008821 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:21Z","lastTransitionTime":"2026-01-12T13:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.110889 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.110941 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.110960 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.110986 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.111004 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:21Z","lastTransitionTime":"2026-01-12T13:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.213030 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.213068 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.213078 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.213092 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.213115 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:21Z","lastTransitionTime":"2026-01-12T13:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.281331 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.281427 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:07:21 crc kubenswrapper[4580]: E0112 13:07:21.281466 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 12 13:07:21 crc kubenswrapper[4580]: E0112 13:07:21.281887 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.281963 4580 scope.go:117] "RemoveContainer" containerID="68af564f5ecc4ca0c06683f7ec46ae5ffc5e2c4a9def47ed4048db3ba923f575" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.290889 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a82c47afb3ec7afc7fa35ff0e1e85e288f9e1a908459024005a16c0c8f3b0050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7
f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:21Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.298305 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3accce5d840e81a67e212ff934059ad73525c6ff3c73ed6ab4c6e2289a4d7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60b7e67369583f18d56633483204d326449c0f74
56afe4b4fd1e7134eff438cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hdz6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:21Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.309433 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2p6r8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2223aac-784e-4653-8939-fcbd18c70ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81fbec7b59dcc9c80a97b122e2b0e738fbbfb3eafca1bf9989fe743f28573191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1dc0fffc41810cdb9a5eeb53b19f6a23d70a8133c6e12b19df575f86a55d18\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1dc0fffc41810cdb9a5eeb53b19f6a23d70a8133c6e12b19df575f86a55d18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab60600011f08831d514dad04b97fb6b587736b18b55b1bff9a33143b9a92997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab60600011f08831d514dad04b97fb6b587736b18b55b1bff9a33143b9a92997\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2709a93c305db448fb509fbbdf606c297b26f1ae08e6b9b05933c155f59416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff2709a93c305db448fb509fbbdf606c297b26f1ae08e6b9b05933c155f59416\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88f87
08217fbcbf532b977d30ab903955722d04a00ba29ded44ce09610140e27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88f8708217fbcbf532b977d30ab903955722d04a00ba29ded44ce09610140e27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5844c48078cc7d6868f4ff81ac1a2bb878892529b11823ecabd49fad4aed60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e5844c48078cc7d6868f4ff81ac1a2bb878892529b11823ecabd49fad4aed60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:04Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2e02e66890bca8171c7112c74521a43c3458f07890228426f04c2bdfad4599\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d2e02e66890bca8171c7112c74521a43c3458f07890228426f04c2bdfad4599\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2p6r8\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:21Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.315237 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.315333 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.315397 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.315457 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.315514 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:21Z","lastTransitionTime":"2026-01-12T13:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.319619 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vmmdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61051313-b754-4528-ade6-ffacbebafb8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a321f1ea1e9a558494aa66641fd251a100e0bdceddf5b2034bfa067c23555138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsss4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14a151ee487ef6c2e5141ec5a25b8b7e468c224b262fd09538db0e939b8cf95a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsss4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vmmdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:21Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.334433 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35b1ac8c-9d11-4c54-98ab-fa848030f05e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1088ffa1a5bf02ca8606518a6f8c9cbeba544651dfafbb34e8860c2a12ffc1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c98177e2b081aadb6fd03620e308bb5d9ff403f1498eb875f7cf6d836dd23aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cea93cd026e7a60c22105833d2c3ada192fc16d45f46e5c9ce2652e94f92fab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c811167080fb15b5c19b8b57f76f4b8c5b2ed87d43d1b320ad024683ab58b65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14411e27d1e7de0627ca0d6f0ecbca70787ef8e9311ff3ffbb923da942e47955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://200ede5d7f69bb74d8e7d1b5081850d73057f7aef07049cab7a4dd1382de0cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://200ede5d7f69bb74d8e7d1b5081850d73057f7aef07049cab7a4dd1382de0cfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04470dc724661e24dc43e182f9c5dc106623e8dfb269280e6dc0fc0710f6a4a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04470dc724661e24dc43e182f9c5dc106623e8dfb269280e6dc0fc0710f6a4a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da31efcbced890b1046b1f058c1c00e4d2788162749c1da32d87c8b59360aa58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da31efcbced890b1046b1f058c1c00e4d2788162749c1da32d87c8b59360aa58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:21Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.343917 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fb543f1489aa79642944188788308013ed9b6bacb720a3ee689b376cbc6a33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:21Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.354944 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e120eaa6bd8e36a0bc509f7877252fbf4b0cebb89222dd193f75502e472fa7af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f05ca3c8a1887284f1162c44d1b917ad955eb8d77b816e830caddffdf0430383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:21Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.364291 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:21Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.374332 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:21Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.381969 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8ch98" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f20fb33-a98a-4b04-81b9-5ea16ae9f57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://643e92b14688d35a567c7351e9231a8855ec7d9704cc97466c2d901c4525108a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nbmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8ch98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:21Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.389380 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jw27h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5066d8fa-2cee-4764-a817-b819d3876638\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbqm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbqm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:07:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jw27h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:21Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:21 crc 
kubenswrapper[4580]: I0112 13:07:21.399835 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9730289-8e50-4a9a-b474-db6c268d5a30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2262814ad3b77a7aecef6dc39226a540c7d7839576606e11c4765c858e81834\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80ca0769a1431f
d4c134322feb11db7e54dd85e8f6b18a0ea43da48fe9b05005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c620e4b41d6183e427d9b95acc0e6e20f24998d210c706d93d0e8b08def41b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05c5ad3ad752dde0d33f89e89540f22790aa2905185c704d407fe605655c8e28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://e0c7ac25add51f8a9be790b9d47bc39155d83c4da0f3b241897d1395686feb68\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-12T13:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0112 13:06:53.362253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0112 13:06:53.363131 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2861103618/tls.crt::/tmp/serving-cert-2861103618/tls.key\\\\\\\"\\\\nI0112 13:06:58.635258 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0112 13:06:58.636943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0112 13:06:58.636960 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0112 13:06:58.636978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0112 13:06:58.636983 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0112 13:06:58.642885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0112 13:06:58.642904 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0112 13:06:58.642919 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642925 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0112 13:06:58.642931 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0112 13:06:58.642934 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0112 13:06:58.642937 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0112 13:06:58.645379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eeac0b697ceba82e51d043f12dcf4c6f0028990416b1ee40c5181232d962192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:21Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.411238 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14cae238-29c1-4657-b3f0-6a834484f48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b1b813e14b2e613be951c247a67eb9b5b29604c639ec2c8a26c652911e0a342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc8b55ba464a72a72e6361e6847c4e8c8b27f317e8eba5d95923fbaf62589880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://259d2e205fd4a46e432a91b0e09646a58b44d6da55b06c6d4ac87010c85babc4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00bb60e0955774504f186a916e89495432d2ea6a6b01cadbbe0cc6871383a030\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:21Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.417756 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.417782 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.417794 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.417810 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.417824 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:21Z","lastTransitionTime":"2026-01-12T13:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.420793 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:21Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.430281 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nnz5s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f39bcc-5a25-4746-988b-2251fd1be8c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56aa8b2b49ab1c35203cc85f8e7cd333d538b5739be0e36db8a3fa8263c079ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m82m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nnz5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:21Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.442918 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd4e0810-eddb-47f5-a7dc-beed7b545112\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fac5585e690495e9f154b99e6a05f94dd617a57d0826867644b56df00697b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57fdd89443f292661ae2a8f73016f4a7f2889c08ffebd55d67ada2590b4344db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc26f2fe9c241fc3ede61426abd140792056fe45e03192531431303ac9669685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://381c313bb77deef21772fc32104aec4c0325e3493c641e2bf615bd897e58c71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ac8df759fbebae467ffd8c178ca19221cefd5f3c1aa999cd23e5d1e53a6187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b37c3b2535deee762ef305825de0a884e9088e57a34910ad2fcdaeb2d49d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68af564f5ecc4ca0c06683f7ec46ae5ffc5e2c4a9def47ed4048db3ba923f575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68af564f5ecc4ca0c06683f7ec46ae5ffc5e2c4a9def47ed4048db3ba923f575\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-12T13:07:08Z\\\",\\\"message\\\":\\\"nkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, 
failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:08Z is after 2025-08-24T17:21:41Z]\\\\nI0112 13:07:08.093732 5988 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0112 13:07:08.093731 5988 services_controller.go:443] Built service openshift-kube-apiserver/apiserver LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.93\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hn77p_openshift-ovn-kubernetes(fd4e0810-eddb-47f5-a7dc-beed7b545112)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00ff7f6b5ad3d1798e88f127c9bf71095fcbdfcf8f4338afa385717f1564ebf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ea8f8c492e0c30d17
1b9b05aa00966402c80f973de31557a1e13e16eb0c447b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hn77p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:21Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.449849 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thp2h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0adac83c-1303-404f-85a1-c7b477da2226\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a871f86fe29e275615cf2f7f0130151c5ed56d410a0f18f5267adf08be33f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfhs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:07:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thp2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:21Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.519916 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.520249 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.520262 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.520277 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.520289 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:21Z","lastTransitionTime":"2026-01-12T13:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.553610 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hn77p_fd4e0810-eddb-47f5-a7dc-beed7b545112/ovnkube-controller/1.log" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.556434 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" event={"ID":"fd4e0810-eddb-47f5-a7dc-beed7b545112","Type":"ContainerStarted","Data":"4221a1e3039d381cba4b4412d20dc0127ca6ec3794a5c1b61996a339e880d645"} Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.556855 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.569384 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jw27h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5066d8fa-2cee-4764-a817-b819d3876638\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbqm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbqm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:07:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jw27h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:21Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.579745 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8ch98" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f20fb33-a98a-4b04-81b9-5ea16ae9f57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://643e92b14688d35a567c7351e9231a8855ec7d9704cc97466c2d901c4525108a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T
13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nbmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8ch98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:21Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.589448 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:21Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.609314 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nnz5s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f39bcc-5a25-4746-988b-2251fd1be8c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56aa8b2b49ab1c35203cc85f8e7cd333d538b5739be0e36db8a3fa8263c079ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m82m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nnz5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:21Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.622363 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:21 crc 
kubenswrapper[4580]: I0112 13:07:21.622395 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.622411 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.622429 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.622442 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:21Z","lastTransitionTime":"2026-01-12T13:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.623671 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd4e0810-eddb-47f5-a7dc-beed7b545112\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fac5585e690495e9f154b99e6a05f94dd617a57d0826867644b56df00697b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57fdd89443f292661ae2a8f73016f4a7f2889c08ffebd55d67ada2590b4344db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc26f2fe9c241fc3ede61426abd140792056fe45e03192531431303ac9669685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://381c313bb77deef21772fc32104aec4c0325e3493c641e2bf615bd897e58c71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ac8df759fbebae467ffd8c178ca19221cefd5f3c1aa999cd23e5d1e53a6187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b37c3b2535deee762ef305825de0a884e9088e57a34910ad2fcdaeb2d49d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4221a1e3039d381cba4b4412d20dc0127ca6ec3794a5c1b61996a339e880d645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68af564f5ecc4ca0c06683f7ec46ae5ffc5e2c4a9def47ed4048db3ba923f575\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-12T13:07:08Z\\\",\\\"message\\\":\\\"nkube: [failed to start network controller: failed to start default network controller: 
unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:08Z is after 2025-08-24T17:21:41Z]\\\\nI0112 13:07:08.093732 5988 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0112 13:07:08.093731 5988 services_controller.go:443] Built service openshift-kube-apiserver/apiserver LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.93\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, 
hasNodePort:false\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\
\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00ff7f6b5ad3d1798e88f127c9bf71095fcbdfcf8f4338afa385717f1564ebf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hn77p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:21Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.634990 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thp2h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0adac83c-1303-404f-85a1-c7b477da2226\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a871f86fe29e275615cf2f7f0130151c5ed56d410a0f18f5267adf08be33f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfhs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:07:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thp2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:21Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.658033 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9730289-8e50-4a9a-b474-db6c268d5a30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2262814ad3b77a7aecef6dc39226a540c7d7839576606e11c4765c858e81834\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80ca0769a1431fd4c134322feb11db7e54dd85e8f6b18a0ea43da48fe9b05005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c620e4b41d6183e427d9b95acc0e6e20f24998d210c706d93d0e8b08def41b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05c5ad3ad752dde0d33f89e89540f22790aa2905185c704d407fe605655c8e28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0c7ac25add51f8a9be790b9d47bc39155d83c4da0f3b241897d1395686feb68\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-12T13:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0112 13:06:53.362253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0112 13:06:53.363131 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2861103618/tls.crt::/tmp/serving-cert-2861103618/tls.key\\\\\\\"\\\\nI0112 13:06:58.635258 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0112 13:06:58.636943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0112 13:06:58.636960 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0112 13:06:58.636978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0112 13:06:58.636983 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0112 13:06:58.642885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0112 13:06:58.642904 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0112 13:06:58.642919 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642925 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0112 13:06:58.642931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0112 13:06:58.642934 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0112 13:06:58.642937 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0112 13:06:58.645379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eeac0b697ceba82e51d043f12dcf4c6f0028990416b1ee40c5181232d962192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-
12T13:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:21Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.672597 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14cae238-29c1-4657-b3f0-6a834484f48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b1b813e14b2e613be951c247a67eb9b5b29604c639ec2c8a26c652911e0a342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc8b55ba464a72a72e6361e6847c4e8c8b27f317e8eba5d95923fbaf62589880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://259d2e205fd4a46e432a91b0e09646a58b44d6da55b06c6d4ac87010c85babc4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00bb60e0955774504f186a916e89495432d2ea6a6b01cadbbe0cc6871383a030\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:21Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.680140 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3accce5d840e81a67e212ff934059ad73525c6ff3c73ed6ab4c6e2289a4d7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60b7e67369583f18d56633483204d326449c0f74
56afe4b4fd1e7134eff438cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hdz6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:21Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.691056 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2p6r8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2223aac-784e-4653-8939-fcbd18c70ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81fbec7b59dcc9c80a97b122e2b0e738fbbfb3eafca1bf9989fe743f28573191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1dc0fffc41810cdb9a5eeb53b19f6a23d70a8133c6e12b19df575f86a55d18\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1dc0fffc41810cdb9a5eeb53b19f6a23d70a8133c6e12b19df575f86a55d18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab60600011f08831d514dad04b97fb6b587736b18b55b1bff9a33143b9a92997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab60600011f08831d514dad04b97fb6b587736b18b55b1bff9a33143b9a92997\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2709a93c305db448fb509fbbdf606c297b26f1ae08e6b9b05933c155f59416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff2709a93c305db448fb509fbbdf606c297b26f1ae08e6b9b05933c155f59416\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88f87
08217fbcbf532b977d30ab903955722d04a00ba29ded44ce09610140e27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88f8708217fbcbf532b977d30ab903955722d04a00ba29ded44ce09610140e27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5844c48078cc7d6868f4ff81ac1a2bb878892529b11823ecabd49fad4aed60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e5844c48078cc7d6868f4ff81ac1a2bb878892529b11823ecabd49fad4aed60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:04Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2e02e66890bca8171c7112c74521a43c3458f07890228426f04c2bdfad4599\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d2e02e66890bca8171c7112c74521a43c3458f07890228426f04c2bdfad4599\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2p6r8\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:21Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.699023 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vmmdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61051313-b754-4528-ade6-ffacbebafb8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a321f1ea1e9a558494aa66641fd251a100e0bdceddf5b2034bfa067c23555138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsss4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14a151ee487ef6c2e5141ec5a25b8b7e468c224b262fd09538db0e939b8cf95a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsss4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vmmdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-12T13:07:21Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.706873 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a82c47afb3ec7afc7fa35ff0e1e85e288f9e1a908459024005a16c0c8f3b0050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:21Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.716742 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e120eaa6bd8e36a0bc509f7877252fbf4b0cebb89222dd193f75502e472fa7af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-conf
ig\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f05ca3c8a1887284f1162c44d1b917ad955eb8d77b816e830caddffdf0430383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:21Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.724097 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.724158 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.724169 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.724188 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.724199 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:21Z","lastTransitionTime":"2026-01-12T13:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.726226 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:21Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.735469 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:21Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.748596 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35b1ac8c-9d11-4c54-98ab-fa848030f05e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1088ffa1a5bf02ca8606518a6f8c9cbeba544651dfafbb34e8860c2a12ffc1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c98177e2b081aadb6fd03620e308bb5d9ff403f1498eb875f7cf6d836dd23aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cea93cd026e7a60c22105833d2c3ada192fc16d45f46e5c9ce2652e94f92fab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c811167080fb15b5c19b8b57f76f4b8c5b2ed87d43d1b320ad024683ab58b65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14411e27d1e7de0627ca0d6f0ecbca70787ef8e9311ff3ffbb923da942e47955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://200ede5d7f69bb74d8e7d1b5081850d73057f7aef07049cab7a4dd1382de0cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://200ede5d7f69bb74d8e7d1b5081850d73057f7aef07049cab7a4dd1382de0cfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04470dc724661e24dc43e182f9c5dc106623e8dfb269280e6dc0fc0710f6a4a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04470dc724661e24dc43e182f9c5dc106623e8dfb269280e6dc0fc0710f6a4a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da31efcbced890b1046b1f058c1c00e4d2788162749c1da32d87c8b59360aa58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da31efcbced890b1046b1f058c1c00e4d2788162749c1da32d87c8b59360aa58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:21Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.757706 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fb543f1489aa79642944188788308013ed9b6bacb720a3ee689b376cbc6a33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:21Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.826670 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.826707 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.826717 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.826733 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.826745 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:21Z","lastTransitionTime":"2026-01-12T13:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.928617 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.928657 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.928668 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.928683 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.928693 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:21Z","lastTransitionTime":"2026-01-12T13:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.997999 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.998052 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.998065 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.998089 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:21 crc kubenswrapper[4580]: I0112 13:07:21.998119 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:21Z","lastTransitionTime":"2026-01-12T13:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:22 crc kubenswrapper[4580]: E0112 13:07:22.008619 4580 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b4cb507-f154-474c-bea1-057456e7be91\\\",\\\"systemUUID\\\":\\\"f50d9485-f990-498d-a5ee-4bb4dd1663df\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:22Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.011866 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.011922 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.011934 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.011959 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.011972 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:22Z","lastTransitionTime":"2026-01-12T13:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:22 crc kubenswrapper[4580]: E0112 13:07:22.021278 4580 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b4cb507-f154-474c-bea1-057456e7be91\\\",\\\"systemUUID\\\":\\\"f50d9485-f990-498d-a5ee-4bb4dd1663df\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:22Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.024635 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.024674 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.024685 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.024702 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.024715 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:22Z","lastTransitionTime":"2026-01-12T13:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:22 crc kubenswrapper[4580]: E0112 13:07:22.032988 4580 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b4cb507-f154-474c-bea1-057456e7be91\\\",\\\"systemUUID\\\":\\\"f50d9485-f990-498d-a5ee-4bb4dd1663df\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:22Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.035959 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.036006 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.036017 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.036033 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.036043 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:22Z","lastTransitionTime":"2026-01-12T13:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:22 crc kubenswrapper[4580]: E0112 13:07:22.044880 4580 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b4cb507-f154-474c-bea1-057456e7be91\\\",\\\"systemUUID\\\":\\\"f50d9485-f990-498d-a5ee-4bb4dd1663df\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:22Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.048082 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.048135 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.048147 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.048160 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.048172 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:22Z","lastTransitionTime":"2026-01-12T13:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:22 crc kubenswrapper[4580]: E0112 13:07:22.057241 4580 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b4cb507-f154-474c-bea1-057456e7be91\\\",\\\"systemUUID\\\":\\\"f50d9485-f990-498d-a5ee-4bb4dd1663df\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:22Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:22 crc kubenswrapper[4580]: E0112 13:07:22.057351 4580 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.058586 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.058625 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.058635 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.058650 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.058659 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:22Z","lastTransitionTime":"2026-01-12T13:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.160719 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.160782 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.160795 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.160806 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.160816 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:22Z","lastTransitionTime":"2026-01-12T13:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.262807 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.262832 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.262842 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.262854 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.262864 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:22Z","lastTransitionTime":"2026-01-12T13:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.281653 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jw27h" Jan 12 13:07:22 crc kubenswrapper[4580]: E0112 13:07:22.281756 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jw27h" podUID="5066d8fa-2cee-4764-a817-b819d3876638" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.281987 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 12 13:07:22 crc kubenswrapper[4580]: E0112 13:07:22.282042 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.364971 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.365015 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.365025 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.365041 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.365053 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:22Z","lastTransitionTime":"2026-01-12T13:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.467305 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.467353 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.467364 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.467381 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.467392 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:22Z","lastTransitionTime":"2026-01-12T13:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.561360 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hn77p_fd4e0810-eddb-47f5-a7dc-beed7b545112/ovnkube-controller/2.log" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.562316 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hn77p_fd4e0810-eddb-47f5-a7dc-beed7b545112/ovnkube-controller/1.log" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.564984 4580 generic.go:334] "Generic (PLEG): container finished" podID="fd4e0810-eddb-47f5-a7dc-beed7b545112" containerID="4221a1e3039d381cba4b4412d20dc0127ca6ec3794a5c1b61996a339e880d645" exitCode=1 Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.565046 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" event={"ID":"fd4e0810-eddb-47f5-a7dc-beed7b545112","Type":"ContainerDied","Data":"4221a1e3039d381cba4b4412d20dc0127ca6ec3794a5c1b61996a339e880d645"} Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.565123 4580 scope.go:117] "RemoveContainer" containerID="68af564f5ecc4ca0c06683f7ec46ae5ffc5e2c4a9def47ed4048db3ba923f575" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.565703 4580 scope.go:117] "RemoveContainer" containerID="4221a1e3039d381cba4b4412d20dc0127ca6ec3794a5c1b61996a339e880d645" Jan 12 13:07:22 crc kubenswrapper[4580]: E0112 13:07:22.565866 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-hn77p_openshift-ovn-kubernetes(fd4e0810-eddb-47f5-a7dc-beed7b545112)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" podUID="fd4e0810-eddb-47f5-a7dc-beed7b545112" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.571944 4580 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.571981 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.571991 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.572008 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.572022 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:22Z","lastTransitionTime":"2026-01-12T13:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.576772 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vmmdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61051313-b754-4528-ade6-ffacbebafb8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a321f1ea1e9a558494aa66641fd251a100e0bdceddf5b2034bfa067c23555138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsss4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14a151ee487ef6c2e5141ec5a25b8b7e468c224b262fd09538db0e939b8cf95a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsss4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vmmdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:22Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.588981 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a82c47afb3ec7afc7fa35ff0e1e85e288f9e1a908459024005a16c0c8f3b0050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-12T13:07:22Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.599002 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3accce5d840e81a67e212ff934059ad73525c6ff3c73ed6ab4c6e2289a4d7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\"
:\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60b7e67369583f18d56633483204d326449c0f7456afe4b4fd1e7134eff438cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hdz6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:22Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.611048 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2p6r8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2223aac-784e-4653-8939-fcbd18c70ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81fbec7b59dcc9c80a97b122e2b0e738fbbfb3eafca1bf9989fe743f28573191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1dc0fffc41810cdb9a5eeb53b19f6a23d70a8133c6e12b19df575f86a55d18\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1dc0fffc41810cdb9a5eeb53b19f6a23d70a8133c6e12b19df575f86a55d18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab60600011f08831d514dad04b97fb6b587736b18b55b1bff9a33143b9a92997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab60600011f08831d514dad04b97fb6b587736b18b55b1bff9a33143b9a92997\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2709a93c305db448fb509fbbdf606c297b26f1ae08e6b9b05933c155f59416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff2709a93c305db448fb509fbbdf606c297b26f1ae08e6b9b05933c155f59416\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88f87
08217fbcbf532b977d30ab903955722d04a00ba29ded44ce09610140e27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88f8708217fbcbf532b977d30ab903955722d04a00ba29ded44ce09610140e27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5844c48078cc7d6868f4ff81ac1a2bb878892529b11823ecabd49fad4aed60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e5844c48078cc7d6868f4ff81ac1a2bb878892529b11823ecabd49fad4aed60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:04Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2e02e66890bca8171c7112c74521a43c3458f07890228426f04c2bdfad4599\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d2e02e66890bca8171c7112c74521a43c3458f07890228426f04c2bdfad4599\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2p6r8\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:22Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.620081 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:22Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.636618 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35b1ac8c-9d11-4c54-98ab-fa848030f05e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1088ffa1a5bf02ca8606518a6f8c9cbeba544651dfafbb34e8860c2a12ffc1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c98177e2b081aadb6fd03620e308bb5d9ff403f1498eb875f7cf6d836dd23aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cea93cd026e7a60c22105833d2c3ada192fc16d45f46e5c9ce2652e94f92fab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c811167080fb15b5c19b8b57f76f4b8c5b2ed87d43d1b320ad024683ab58b65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14411e27d1e7de0627ca0d6f0ecbca70787ef8e9311ff3ffbb923da942e47955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://200ede5d7f69bb74d8e7d1b5081850d73057f7aef07049cab7a4dd1382de0cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://200ede5d7f69bb74d8e7d1b5081850d73057f7aef07049cab7a4dd1382de0cfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04470dc724661e24dc43e182f9c5dc106623e8dfb269280e6dc0fc0710f6a4a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04470dc724661e24dc43e182f9c5dc106623e8dfb269280e6dc0fc0710f6a4a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da31efcbced890b1046b1f058c1c00e4d2788162749c1da32d87c8b59360aa58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da31efcbced890b1046b1f058c1c00e4d2788162749c1da32d87c8b59360aa58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:22Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.646197 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fb543f1489aa79642944188788308013ed9b6bacb720a3ee689b376cbc6a33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:22Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.655890 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e120eaa6bd8e36a0bc509f7877252fbf4b0cebb89222dd193f75502e472fa7af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f05ca3c8a1887284f1162c44d1b917ad955eb8d77b816e830caddffdf0430383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:22Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.665277 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:22Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.672821 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8ch98" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f20fb33-a98a-4b04-81b9-5ea16ae9f57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://643e92b14688d35a567c7351e9231a8855ec7d9704cc97466c2d901c4525108a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nbmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8ch98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:22Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.674157 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.674213 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.674225 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.674249 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.674263 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:22Z","lastTransitionTime":"2026-01-12T13:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.680702 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jw27h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5066d8fa-2cee-4764-a817-b819d3876638\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbqm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbqm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:07:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jw27h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:22Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:22 crc 
kubenswrapper[4580]: I0112 13:07:22.693626 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd4e0810-eddb-47f5-a7dc-beed7b545112\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fac5585e690495e9f154b99e6a05f94dd617a57d0826867644b56df00697b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57fdd89443f292661ae2a8f73016f4a7f2889c08ffebd55d67ada2590b4344db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc26f2fe9c241fc3ede61426abd140792056fe45e03192531431303ac9669685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://381c313bb77deef21772fc32104aec4c0325e3493c641e2bf615bd897e58c71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ac8df759fbebae467ffd8c178ca19221cefd5f3c1aa999cd23e5d1e53a6187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b37c3b2535deee762ef305825de0a884e9088e57a34910ad2fcdaeb2d49d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4221a1e3039d381cba4b4412d20dc0127ca6ec3794a5c1b61996a339e880d645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68af564f5ecc4ca0c06683f7ec46ae5ffc5e2c4a9def47ed4048db3ba923f575\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-12T13:07:08Z\\\",\\\"message\\\":\\\"nkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, 
failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:08Z is after 2025-08-24T17:21:41Z]\\\\nI0112 13:07:08.093732 5988 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0112 13:07:08.093731 5988 services_controller.go:443] Built service openshift-kube-apiserver/apiserver LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.93\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4221a1e3039d381cba4b4412d20dc0127ca6ec3794a5c1b61996a339e880d645\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-12T13:07:21Z\\\",\\\"message\\\":\\\"Ds:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/package-server-manager-metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.110\\\\\\\", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, 
Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0112 13:07:21.988162 6218 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for namespace Informer during admin network policy controller initialization, handler {0x1fcbf20 0x1fcbc00 0x1fcbba0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:21Z i\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni
-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00ff7f6b5ad3d1798e88f127c9bf71095fcbdfcf8f4338afa385717f1564ebf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hn77p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:22Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:22 crc 
kubenswrapper[4580]: I0112 13:07:22.702300 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thp2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0adac83c-1303-404f-85a1-c7b477da2226\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a871f86fe29e275615cf2f7f0130151c5ed56d410a0f18f5267adf08be33f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c
fhs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:07:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thp2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:22Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.712159 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9730289-8e50-4a9a-b474-db6c268d5a30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2262814ad3b77a7aecef6dc39226a540c7d7839576606e11c4765c858e81834\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d3472
0243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80ca0769a1431fd4c134322feb11db7e54dd85e8f6b18a0ea43da48fe9b05005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c620e4b41d6183e427d9b95acc0e6e20f24998d210c706d93d0e8b08def41b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05c5ad3ad752dde0d33f89e89540f22790aa2905185c704d407fe605655c8e28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0c7ac25add51f8a9be790b9d47bc39155d83c4da0f3b241897d1395686feb68\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-12T13:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0112 13:06:53.362253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0112 13:06:53.363131 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2861103618/tls.crt::/tmp/serving-cert-2861103618/tls.key\\\\\\\"\\\\nI0112 13:06:58.635258 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0112 13:06:58.636943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0112 13:06:58.636960 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0112 13:06:58.636978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0112 13:06:58.636983 1 maxinflight.go:120] \\\\\\\"Set 
denominator for mutating requests\\\\\\\" limit=200\\\\nI0112 13:06:58.642885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0112 13:06:58.642904 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0112 13:06:58.642919 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642925 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0112 13:06:58.642931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0112 13:06:58.642934 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0112 13:06:58.642937 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0112 13:06:58.645379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eeac0b697ceba82e51d043f12dcf4c6f0028990416b1ee40c5181232d962192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:22Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.721183 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14cae238-29c1-4657-b3f0-6a834484f48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b1b813e14b2e613be951c247a67eb9b5b29604c639ec2c8a26c652911e0a342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad
6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc8b55ba464a72a72e6361e6847c4e8c8b27f317e8eba5d95923fbaf62589880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://259d2e205fd4a46e432a91b0e09646a58b44d6da55b06c6d4ac87010c85babc4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,
\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00bb60e0955774504f186a916e89495432d2ea6a6b01cadbbe0cc6871383a030\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:22Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.730363 4580 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:22Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.740441 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nnz5s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f39bcc-5a25-4746-988b-2251fd1be8c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56aa8b2b49ab1c35203cc85f8e7cd333d538b5739be0e36db8a3fa8263c079ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m82m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nnz5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:22Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.776177 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:22 crc 
kubenswrapper[4580]: I0112 13:07:22.776210 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.776220 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.776236 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.776248 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:22Z","lastTransitionTime":"2026-01-12T13:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.878217 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.878250 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.878264 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.878280 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.878290 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:22Z","lastTransitionTime":"2026-01-12T13:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.980520 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.980558 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.980570 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.980585 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:22 crc kubenswrapper[4580]: I0112 13:07:22.980607 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:22Z","lastTransitionTime":"2026-01-12T13:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:23 crc kubenswrapper[4580]: I0112 13:07:23.082413 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:23 crc kubenswrapper[4580]: I0112 13:07:23.082591 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:23 crc kubenswrapper[4580]: I0112 13:07:23.082675 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:23 crc kubenswrapper[4580]: I0112 13:07:23.082754 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:23 crc kubenswrapper[4580]: I0112 13:07:23.082822 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:23Z","lastTransitionTime":"2026-01-12T13:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:23 crc kubenswrapper[4580]: I0112 13:07:23.185391 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:23 crc kubenswrapper[4580]: I0112 13:07:23.185422 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:23 crc kubenswrapper[4580]: I0112 13:07:23.185432 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:23 crc kubenswrapper[4580]: I0112 13:07:23.185442 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:23 crc kubenswrapper[4580]: I0112 13:07:23.185452 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:23Z","lastTransitionTime":"2026-01-12T13:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:23 crc kubenswrapper[4580]: I0112 13:07:23.281644 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 12 13:07:23 crc kubenswrapper[4580]: I0112 13:07:23.281679 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:07:23 crc kubenswrapper[4580]: E0112 13:07:23.281773 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 12 13:07:23 crc kubenswrapper[4580]: E0112 13:07:23.281851 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 12 13:07:23 crc kubenswrapper[4580]: I0112 13:07:23.286796 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:23 crc kubenswrapper[4580]: I0112 13:07:23.286825 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:23 crc kubenswrapper[4580]: I0112 13:07:23.286834 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:23 crc kubenswrapper[4580]: I0112 13:07:23.286846 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:23 crc kubenswrapper[4580]: I0112 13:07:23.286857 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:23Z","lastTransitionTime":"2026-01-12T13:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:23 crc kubenswrapper[4580]: I0112 13:07:23.387908 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:23 crc kubenswrapper[4580]: I0112 13:07:23.387932 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:23 crc kubenswrapper[4580]: I0112 13:07:23.387942 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:23 crc kubenswrapper[4580]: I0112 13:07:23.387951 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:23 crc kubenswrapper[4580]: I0112 13:07:23.387959 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:23Z","lastTransitionTime":"2026-01-12T13:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:23 crc kubenswrapper[4580]: I0112 13:07:23.488984 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:23 crc kubenswrapper[4580]: I0112 13:07:23.489009 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:23 crc kubenswrapper[4580]: I0112 13:07:23.489017 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:23 crc kubenswrapper[4580]: I0112 13:07:23.489028 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:23 crc kubenswrapper[4580]: I0112 13:07:23.489038 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:23Z","lastTransitionTime":"2026-01-12T13:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:23 crc kubenswrapper[4580]: I0112 13:07:23.569027 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hn77p_fd4e0810-eddb-47f5-a7dc-beed7b545112/ovnkube-controller/2.log" Jan 12 13:07:23 crc kubenswrapper[4580]: I0112 13:07:23.572263 4580 scope.go:117] "RemoveContainer" containerID="4221a1e3039d381cba4b4412d20dc0127ca6ec3794a5c1b61996a339e880d645" Jan 12 13:07:23 crc kubenswrapper[4580]: E0112 13:07:23.572389 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-hn77p_openshift-ovn-kubernetes(fd4e0810-eddb-47f5-a7dc-beed7b545112)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" podUID="fd4e0810-eddb-47f5-a7dc-beed7b545112" Jan 12 13:07:23 crc kubenswrapper[4580]: I0112 13:07:23.584395 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:23Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:23 crc kubenswrapper[4580]: I0112 13:07:23.590683 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:23 crc kubenswrapper[4580]: I0112 13:07:23.590711 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:23 crc kubenswrapper[4580]: I0112 13:07:23.590721 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:23 crc 
kubenswrapper[4580]: I0112 13:07:23.590732 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:23 crc kubenswrapper[4580]: I0112 13:07:23.590741 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:23Z","lastTransitionTime":"2026-01-12T13:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:23 crc kubenswrapper[4580]: I0112 13:07:23.595584 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nnz5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f39bcc-5a25-4746-988b-2251fd1be8c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56aa8b2b49ab1c35203cc85f8e7cd333d538b5739be0e36db8a3fa8263c079ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m82m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nnz5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:23Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:23 crc kubenswrapper[4580]: I0112 13:07:23.610964 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd4e0810-eddb-47f5-a7dc-beed7b545112\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fac5585e690495e9f154b99e6a05f94dd617a57d0826867644b56df00697b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57fdd89443f292661ae2a8f73016f4a7f2889c08ffebd55d67ada2590b4344db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc26f2fe9c241fc3ede61426abd140792056fe45e03192531431303ac9669685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://381c313bb77deef21772fc32104aec4c0325e3493c641e2bf615bd897e58c71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ac8df759fbebae467ffd8c178ca19221cefd5f3c1aa999cd23e5d1e53a6187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b37c3b2535deee762ef305825de0a884e9088e57a34910ad2fcdaeb2d49d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4221a1e3039d381cba4b4412d20dc0127ca6ec3794a5c1b61996a339e880d645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4221a1e3039d381cba4b4412d20dc0127ca6ec3794a5c1b61996a339e880d645\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-12T13:07:21Z\\\",\\\"message\\\":\\\"Ds:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/package-server-manager-metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, 
Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.110\\\\\\\", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0112 13:07:21.988162 6218 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for namespace Informer during admin network policy controller initialization, handler {0x1fcbf20 0x1fcbc00 0x1fcbba0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:21Z i\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hn77p_openshift-ovn-kubernetes(fd4e0810-eddb-47f5-a7dc-beed7b545112)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00ff7f6b5ad3d1798e88f127c9bf71095fcbdfcf8f4338afa385717f1564ebf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ea8f8c492e0c30d17
1b9b05aa00966402c80f973de31557a1e13e16eb0c447b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hn77p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:23Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:23 crc kubenswrapper[4580]: I0112 13:07:23.618975 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thp2h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0adac83c-1303-404f-85a1-c7b477da2226\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a871f86fe29e275615cf2f7f0130151c5ed56d410a0f18f5267adf08be33f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfhs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:07:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thp2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:23Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:23 crc kubenswrapper[4580]: I0112 13:07:23.633033 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9730289-8e50-4a9a-b474-db6c268d5a30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2262814ad3b77a7aecef6dc39226a540c7d7839576606e11c4765c858e81834\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80ca0769a1431fd4c134322feb11db7e54dd85e8f6b18a0ea43da48fe9b05005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c620e4b41d6183e427d9b95acc0e6e20f24998d210c706d93d0e8b08def41b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05c5ad3ad752dde0d33f89e89540f22790aa2905185c704d407fe605655c8e28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0c7ac25add51f8a9be790b9d47bc39155d83c4da0f3b241897d1395686feb68\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-12T13:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0112 13:06:53.362253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0112 13:06:53.363131 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2861103618/tls.crt::/tmp/serving-cert-2861103618/tls.key\\\\\\\"\\\\nI0112 13:06:58.635258 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0112 13:06:58.636943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0112 13:06:58.636960 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0112 13:06:58.636978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0112 13:06:58.636983 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0112 13:06:58.642885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0112 13:06:58.642904 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0112 13:06:58.642919 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642925 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0112 13:06:58.642931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0112 13:06:58.642934 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0112 13:06:58.642937 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0112 13:06:58.645379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eeac0b697ceba82e51d043f12dcf4c6f0028990416b1ee40c5181232d962192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-
12T13:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:23Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:23 crc kubenswrapper[4580]: I0112 13:07:23.643970 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14cae238-29c1-4657-b3f0-6a834484f48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b1b813e14b2e613be951c247a67eb9b5b29604c639ec2c8a26c652911e0a342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc8b55ba464a72a72e6361e6847c4e8c8b27f317e8eba5d95923fbaf62589880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://259d2e205fd4a46e432a91b0e09646a58b44d6da55b06c6d4ac87010c85babc4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00bb60e0955774504f186a916e89495432d2ea6a6b01cadbbe0cc6871383a030\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:23Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:23 crc kubenswrapper[4580]: I0112 13:07:23.652417 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3accce5d840e81a67e212ff934059ad73525c6ff3c73ed6ab4c6e2289a4d7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60b7e67369583f18d56633483204d326449c0f74
56afe4b4fd1e7134eff438cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hdz6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:23Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:23 crc kubenswrapper[4580]: I0112 13:07:23.663464 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2p6r8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2223aac-784e-4653-8939-fcbd18c70ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81fbec7b59dcc9c80a97b122e2b0e738fbbfb3eafca1bf9989fe743f28573191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1dc0fffc41810cdb9a5eeb53b19f6a23d70a8133c6e12b19df575f86a55d18\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1dc0fffc41810cdb9a5eeb53b19f6a23d70a8133c6e12b19df575f86a55d18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab60600011f08831d514dad04b97fb6b587736b18b55b1bff9a33143b9a92997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab60600011f08831d514dad04b97fb6b587736b18b55b1bff9a33143b9a92997\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2709a93c305db448fb509fbbdf606c297b26f1ae08e6b9b05933c155f59416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff2709a93c305db448fb509fbbdf606c297b26f1ae08e6b9b05933c155f59416\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88f87
08217fbcbf532b977d30ab903955722d04a00ba29ded44ce09610140e27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88f8708217fbcbf532b977d30ab903955722d04a00ba29ded44ce09610140e27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5844c48078cc7d6868f4ff81ac1a2bb878892529b11823ecabd49fad4aed60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e5844c48078cc7d6868f4ff81ac1a2bb878892529b11823ecabd49fad4aed60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:04Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2e02e66890bca8171c7112c74521a43c3458f07890228426f04c2bdfad4599\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d2e02e66890bca8171c7112c74521a43c3458f07890228426f04c2bdfad4599\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2p6r8\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:23Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:23 crc kubenswrapper[4580]: I0112 13:07:23.671928 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vmmdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61051313-b754-4528-ade6-ffacbebafb8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a321f1ea1e9a558494aa66641fd251a100e0bdceddf5b2034bfa067c23555138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsss4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14a151ee487ef6c2e5141ec5a25b8b7e468c224b262fd09538db0e939b8cf95a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsss4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vmmdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-12T13:07:23Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:23 crc kubenswrapper[4580]: I0112 13:07:23.680352 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a82c47afb3ec7afc7fa35ff0e1e85e288f9e1a908459024005a16c0c8f3b0050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:23Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:23 crc kubenswrapper[4580]: I0112 13:07:23.688729 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e120eaa6bd8e36a0bc509f7877252fbf4b0cebb89222dd193f75502e472fa7af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-conf
ig\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f05ca3c8a1887284f1162c44d1b917ad955eb8d77b816e830caddffdf0430383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:23Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:23 crc kubenswrapper[4580]: I0112 13:07:23.692514 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:23 crc kubenswrapper[4580]: I0112 13:07:23.692555 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 12 13:07:23 crc kubenswrapper[4580]: I0112 13:07:23.692567 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:23 crc kubenswrapper[4580]: I0112 13:07:23.692587 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:23 crc kubenswrapper[4580]: I0112 13:07:23.692609 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:23Z","lastTransitionTime":"2026-01-12T13:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:23 crc kubenswrapper[4580]: I0112 13:07:23.697206 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:23Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:23 crc kubenswrapper[4580]: I0112 13:07:23.705782 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:23Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:23 crc kubenswrapper[4580]: I0112 13:07:23.720620 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35b1ac8c-9d11-4c54-98ab-fa848030f05e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1088ffa1a5bf02ca8606518a6f8c9cbeba544651dfafbb34e8860c2a12ffc1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c98177e2b081aadb6fd03620e308bb5d9ff403f1498eb875f7cf6d836dd23aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cea93cd026e7a60c22105833d2c3ada192fc16d45f46e5c9ce2652e94f92fab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c811167080fb15b5c19b8b57f76f4b8c5b2ed87d43d1b320ad024683ab58b65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14411e27d1e7de0627ca0d6f0ecbca70787ef8e9311ff3ffbb923da942e47955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://200ede5d7f69bb74d8e7d1b5081850d73057f7aef07049cab7a4dd1382de0cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://200ede5d7f69bb74d8e7d1b5081850d73057f7aef07049cab7a4dd1382de0cfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04470dc724661e24dc43e182f9c5dc106623e8dfb269280e6dc0fc0710f6a4a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04470dc724661e24dc43e182f9c5dc106623e8dfb269280e6dc0fc0710f6a4a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da31efcbced890b1046b1f058c1c00e4d2788162749c1da32d87c8b59360aa58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da31efcbced890b1046b1f058c1c00e4d2788162749c1da32d87c8b59360aa58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:23Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:23 crc kubenswrapper[4580]: I0112 13:07:23.731797 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fb543f1489aa79642944188788308013ed9b6bacb720a3ee689b376cbc6a33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:23Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:23 crc kubenswrapper[4580]: I0112 13:07:23.740240 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jw27h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5066d8fa-2cee-4764-a817-b819d3876638\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbqm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbqm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:07:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jw27h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:23Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:23 crc kubenswrapper[4580]: I0112 13:07:23.747828 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8ch98" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f20fb33-a98a-4b04-81b9-5ea16ae9f57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://643e92b14688d35a567c7351e9231a8855ec7d9704cc97466c2d901c4525108a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T
13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nbmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8ch98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:23Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:23 crc kubenswrapper[4580]: I0112 13:07:23.794624 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:23 crc kubenswrapper[4580]: I0112 13:07:23.794654 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:23 crc kubenswrapper[4580]: I0112 13:07:23.794664 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:23 crc kubenswrapper[4580]: I0112 13:07:23.794679 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:23 crc kubenswrapper[4580]: I0112 13:07:23.794691 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:23Z","lastTransitionTime":"2026-01-12T13:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:23 crc kubenswrapper[4580]: I0112 13:07:23.896747 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:23 crc kubenswrapper[4580]: I0112 13:07:23.896776 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:23 crc kubenswrapper[4580]: I0112 13:07:23.896787 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:23 crc kubenswrapper[4580]: I0112 13:07:23.896821 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:23 crc kubenswrapper[4580]: I0112 13:07:23.896853 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:23Z","lastTransitionTime":"2026-01-12T13:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:23 crc kubenswrapper[4580]: I0112 13:07:23.999099 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:23 crc kubenswrapper[4580]: I0112 13:07:23.999174 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:23 crc kubenswrapper[4580]: I0112 13:07:23.999186 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:23 crc kubenswrapper[4580]: I0112 13:07:23.999208 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:23 crc kubenswrapper[4580]: I0112 13:07:23.999221 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:23Z","lastTransitionTime":"2026-01-12T13:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:24 crc kubenswrapper[4580]: I0112 13:07:24.101062 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:24 crc kubenswrapper[4580]: I0112 13:07:24.101091 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:24 crc kubenswrapper[4580]: I0112 13:07:24.101122 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:24 crc kubenswrapper[4580]: I0112 13:07:24.101137 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:24 crc kubenswrapper[4580]: I0112 13:07:24.101146 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:24Z","lastTransitionTime":"2026-01-12T13:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:24 crc kubenswrapper[4580]: I0112 13:07:24.203003 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:24 crc kubenswrapper[4580]: I0112 13:07:24.203159 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:24 crc kubenswrapper[4580]: I0112 13:07:24.203218 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:24 crc kubenswrapper[4580]: I0112 13:07:24.203279 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:24 crc kubenswrapper[4580]: I0112 13:07:24.203344 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:24Z","lastTransitionTime":"2026-01-12T13:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:24 crc kubenswrapper[4580]: I0112 13:07:24.281368 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jw27h" Jan 12 13:07:24 crc kubenswrapper[4580]: E0112 13:07:24.281481 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jw27h" podUID="5066d8fa-2cee-4764-a817-b819d3876638" Jan 12 13:07:24 crc kubenswrapper[4580]: I0112 13:07:24.281365 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 12 13:07:24 crc kubenswrapper[4580]: E0112 13:07:24.281572 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 12 13:07:24 crc kubenswrapper[4580]: I0112 13:07:24.305684 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:24 crc kubenswrapper[4580]: I0112 13:07:24.305729 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:24 crc kubenswrapper[4580]: I0112 13:07:24.305743 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:24 crc kubenswrapper[4580]: I0112 13:07:24.305762 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:24 crc kubenswrapper[4580]: I0112 13:07:24.305775 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:24Z","lastTransitionTime":"2026-01-12T13:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:24 crc kubenswrapper[4580]: I0112 13:07:24.407350 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:24 crc kubenswrapper[4580]: I0112 13:07:24.407378 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:24 crc kubenswrapper[4580]: I0112 13:07:24.407387 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:24 crc kubenswrapper[4580]: I0112 13:07:24.407399 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:24 crc kubenswrapper[4580]: I0112 13:07:24.407408 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:24Z","lastTransitionTime":"2026-01-12T13:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:24 crc kubenswrapper[4580]: I0112 13:07:24.509374 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:24 crc kubenswrapper[4580]: I0112 13:07:24.509398 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:24 crc kubenswrapper[4580]: I0112 13:07:24.509407 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:24 crc kubenswrapper[4580]: I0112 13:07:24.509420 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:24 crc kubenswrapper[4580]: I0112 13:07:24.509440 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:24Z","lastTransitionTime":"2026-01-12T13:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:24 crc kubenswrapper[4580]: I0112 13:07:24.611664 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:24 crc kubenswrapper[4580]: I0112 13:07:24.612008 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:24 crc kubenswrapper[4580]: I0112 13:07:24.612072 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:24 crc kubenswrapper[4580]: I0112 13:07:24.612151 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:24 crc kubenswrapper[4580]: I0112 13:07:24.612209 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:24Z","lastTransitionTime":"2026-01-12T13:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:24 crc kubenswrapper[4580]: I0112 13:07:24.715623 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:24 crc kubenswrapper[4580]: I0112 13:07:24.715888 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:24 crc kubenswrapper[4580]: I0112 13:07:24.715965 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:24 crc kubenswrapper[4580]: I0112 13:07:24.716029 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:24 crc kubenswrapper[4580]: I0112 13:07:24.716092 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:24Z","lastTransitionTime":"2026-01-12T13:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:24 crc kubenswrapper[4580]: I0112 13:07:24.817631 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:24 crc kubenswrapper[4580]: I0112 13:07:24.817798 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:24 crc kubenswrapper[4580]: I0112 13:07:24.817860 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:24 crc kubenswrapper[4580]: I0112 13:07:24.817942 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:24 crc kubenswrapper[4580]: I0112 13:07:24.818002 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:24Z","lastTransitionTime":"2026-01-12T13:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:24 crc kubenswrapper[4580]: I0112 13:07:24.920755 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:24 crc kubenswrapper[4580]: I0112 13:07:24.920872 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:24 crc kubenswrapper[4580]: I0112 13:07:24.920963 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:24 crc kubenswrapper[4580]: I0112 13:07:24.921038 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:24 crc kubenswrapper[4580]: I0112 13:07:24.921092 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:24Z","lastTransitionTime":"2026-01-12T13:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:25 crc kubenswrapper[4580]: I0112 13:07:25.022558 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:25 crc kubenswrapper[4580]: I0112 13:07:25.022659 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:25 crc kubenswrapper[4580]: I0112 13:07:25.022716 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:25 crc kubenswrapper[4580]: I0112 13:07:25.022794 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:25 crc kubenswrapper[4580]: I0112 13:07:25.022849 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:25Z","lastTransitionTime":"2026-01-12T13:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:25 crc kubenswrapper[4580]: I0112 13:07:25.124888 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:25 crc kubenswrapper[4580]: I0112 13:07:25.124941 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:25 crc kubenswrapper[4580]: I0112 13:07:25.124954 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:25 crc kubenswrapper[4580]: I0112 13:07:25.124976 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:25 crc kubenswrapper[4580]: I0112 13:07:25.124991 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:25Z","lastTransitionTime":"2026-01-12T13:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:25 crc kubenswrapper[4580]: I0112 13:07:25.227794 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:25 crc kubenswrapper[4580]: I0112 13:07:25.227903 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:25 crc kubenswrapper[4580]: I0112 13:07:25.227961 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:25 crc kubenswrapper[4580]: I0112 13:07:25.228031 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:25 crc kubenswrapper[4580]: I0112 13:07:25.228095 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:25Z","lastTransitionTime":"2026-01-12T13:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:25 crc kubenswrapper[4580]: I0112 13:07:25.280740 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 12 13:07:25 crc kubenswrapper[4580]: I0112 13:07:25.280805 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:07:25 crc kubenswrapper[4580]: E0112 13:07:25.280944 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 12 13:07:25 crc kubenswrapper[4580]: E0112 13:07:25.281144 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 12 13:07:25 crc kubenswrapper[4580]: I0112 13:07:25.330595 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:25 crc kubenswrapper[4580]: I0112 13:07:25.330641 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:25 crc kubenswrapper[4580]: I0112 13:07:25.330653 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:25 crc kubenswrapper[4580]: I0112 13:07:25.330672 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:25 crc kubenswrapper[4580]: I0112 13:07:25.330684 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:25Z","lastTransitionTime":"2026-01-12T13:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:25 crc kubenswrapper[4580]: I0112 13:07:25.432647 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:25 crc kubenswrapper[4580]: I0112 13:07:25.432700 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:25 crc kubenswrapper[4580]: I0112 13:07:25.432711 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:25 crc kubenswrapper[4580]: I0112 13:07:25.432736 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:25 crc kubenswrapper[4580]: I0112 13:07:25.432753 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:25Z","lastTransitionTime":"2026-01-12T13:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:25 crc kubenswrapper[4580]: I0112 13:07:25.534660 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:25 crc kubenswrapper[4580]: I0112 13:07:25.534701 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:25 crc kubenswrapper[4580]: I0112 13:07:25.534712 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:25 crc kubenswrapper[4580]: I0112 13:07:25.534730 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:25 crc kubenswrapper[4580]: I0112 13:07:25.534740 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:25Z","lastTransitionTime":"2026-01-12T13:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:25 crc kubenswrapper[4580]: I0112 13:07:25.636726 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:25 crc kubenswrapper[4580]: I0112 13:07:25.636767 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:25 crc kubenswrapper[4580]: I0112 13:07:25.636791 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:25 crc kubenswrapper[4580]: I0112 13:07:25.636812 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:25 crc kubenswrapper[4580]: I0112 13:07:25.637342 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:25Z","lastTransitionTime":"2026-01-12T13:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:25 crc kubenswrapper[4580]: I0112 13:07:25.739390 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:25 crc kubenswrapper[4580]: I0112 13:07:25.739418 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:25 crc kubenswrapper[4580]: I0112 13:07:25.739426 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:25 crc kubenswrapper[4580]: I0112 13:07:25.739440 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:25 crc kubenswrapper[4580]: I0112 13:07:25.739448 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:25Z","lastTransitionTime":"2026-01-12T13:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:25 crc kubenswrapper[4580]: I0112 13:07:25.841894 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:25 crc kubenswrapper[4580]: I0112 13:07:25.841938 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:25 crc kubenswrapper[4580]: I0112 13:07:25.841949 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:25 crc kubenswrapper[4580]: I0112 13:07:25.841966 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:25 crc kubenswrapper[4580]: I0112 13:07:25.841977 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:25Z","lastTransitionTime":"2026-01-12T13:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:25 crc kubenswrapper[4580]: I0112 13:07:25.944406 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:25 crc kubenswrapper[4580]: I0112 13:07:25.944457 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:25 crc kubenswrapper[4580]: I0112 13:07:25.944466 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:25 crc kubenswrapper[4580]: I0112 13:07:25.944488 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:25 crc kubenswrapper[4580]: I0112 13:07:25.944500 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:25Z","lastTransitionTime":"2026-01-12T13:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:26 crc kubenswrapper[4580]: I0112 13:07:26.046288 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:26 crc kubenswrapper[4580]: I0112 13:07:26.046342 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:26 crc kubenswrapper[4580]: I0112 13:07:26.046352 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:26 crc kubenswrapper[4580]: I0112 13:07:26.046371 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:26 crc kubenswrapper[4580]: I0112 13:07:26.046383 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:26Z","lastTransitionTime":"2026-01-12T13:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:26 crc kubenswrapper[4580]: I0112 13:07:26.148765 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:26 crc kubenswrapper[4580]: I0112 13:07:26.148815 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:26 crc kubenswrapper[4580]: I0112 13:07:26.148825 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:26 crc kubenswrapper[4580]: I0112 13:07:26.148846 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:26 crc kubenswrapper[4580]: I0112 13:07:26.148861 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:26Z","lastTransitionTime":"2026-01-12T13:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:26 crc kubenswrapper[4580]: I0112 13:07:26.250437 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:26 crc kubenswrapper[4580]: I0112 13:07:26.250477 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:26 crc kubenswrapper[4580]: I0112 13:07:26.250487 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:26 crc kubenswrapper[4580]: I0112 13:07:26.250503 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:26 crc kubenswrapper[4580]: I0112 13:07:26.250513 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:26Z","lastTransitionTime":"2026-01-12T13:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:26 crc kubenswrapper[4580]: I0112 13:07:26.281229 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jw27h" Jan 12 13:07:26 crc kubenswrapper[4580]: I0112 13:07:26.281277 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 12 13:07:26 crc kubenswrapper[4580]: E0112 13:07:26.281384 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jw27h" podUID="5066d8fa-2cee-4764-a817-b819d3876638" Jan 12 13:07:26 crc kubenswrapper[4580]: E0112 13:07:26.281475 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 12 13:07:26 crc kubenswrapper[4580]: I0112 13:07:26.352263 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:26 crc kubenswrapper[4580]: I0112 13:07:26.352293 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:26 crc kubenswrapper[4580]: I0112 13:07:26.352304 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:26 crc kubenswrapper[4580]: I0112 13:07:26.352318 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:26 crc kubenswrapper[4580]: I0112 13:07:26.352329 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:26Z","lastTransitionTime":"2026-01-12T13:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:26 crc kubenswrapper[4580]: I0112 13:07:26.453980 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:26 crc kubenswrapper[4580]: I0112 13:07:26.454013 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:26 crc kubenswrapper[4580]: I0112 13:07:26.454024 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:26 crc kubenswrapper[4580]: I0112 13:07:26.454037 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:26 crc kubenswrapper[4580]: I0112 13:07:26.454047 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:26Z","lastTransitionTime":"2026-01-12T13:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:26 crc kubenswrapper[4580]: I0112 13:07:26.556568 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:26 crc kubenswrapper[4580]: I0112 13:07:26.556624 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:26 crc kubenswrapper[4580]: I0112 13:07:26.556635 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:26 crc kubenswrapper[4580]: I0112 13:07:26.556655 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:26 crc kubenswrapper[4580]: I0112 13:07:26.556670 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:26Z","lastTransitionTime":"2026-01-12T13:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:26 crc kubenswrapper[4580]: I0112 13:07:26.658492 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:26 crc kubenswrapper[4580]: I0112 13:07:26.658552 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:26 crc kubenswrapper[4580]: I0112 13:07:26.658564 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:26 crc kubenswrapper[4580]: I0112 13:07:26.658586 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:26 crc kubenswrapper[4580]: I0112 13:07:26.658601 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:26Z","lastTransitionTime":"2026-01-12T13:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:26 crc kubenswrapper[4580]: I0112 13:07:26.760231 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:26 crc kubenswrapper[4580]: I0112 13:07:26.760264 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:26 crc kubenswrapper[4580]: I0112 13:07:26.760291 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:26 crc kubenswrapper[4580]: I0112 13:07:26.760304 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:26 crc kubenswrapper[4580]: I0112 13:07:26.760314 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:26Z","lastTransitionTime":"2026-01-12T13:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:26 crc kubenswrapper[4580]: I0112 13:07:26.862062 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:26 crc kubenswrapper[4580]: I0112 13:07:26.862092 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:26 crc kubenswrapper[4580]: I0112 13:07:26.862116 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:26 crc kubenswrapper[4580]: I0112 13:07:26.862128 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:26 crc kubenswrapper[4580]: I0112 13:07:26.862135 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:26Z","lastTransitionTime":"2026-01-12T13:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:26 crc kubenswrapper[4580]: I0112 13:07:26.964255 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:26 crc kubenswrapper[4580]: I0112 13:07:26.964290 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:26 crc kubenswrapper[4580]: I0112 13:07:26.964300 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:26 crc kubenswrapper[4580]: I0112 13:07:26.964314 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:26 crc kubenswrapper[4580]: I0112 13:07:26.964326 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:26Z","lastTransitionTime":"2026-01-12T13:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:27 crc kubenswrapper[4580]: I0112 13:07:27.067346 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:27 crc kubenswrapper[4580]: I0112 13:07:27.067403 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:27 crc kubenswrapper[4580]: I0112 13:07:27.067414 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:27 crc kubenswrapper[4580]: I0112 13:07:27.067432 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:27 crc kubenswrapper[4580]: I0112 13:07:27.067445 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:27Z","lastTransitionTime":"2026-01-12T13:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:27 crc kubenswrapper[4580]: I0112 13:07:27.170213 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:27 crc kubenswrapper[4580]: I0112 13:07:27.170259 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:27 crc kubenswrapper[4580]: I0112 13:07:27.170268 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:27 crc kubenswrapper[4580]: I0112 13:07:27.170285 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:27 crc kubenswrapper[4580]: I0112 13:07:27.170296 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:27Z","lastTransitionTime":"2026-01-12T13:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:27 crc kubenswrapper[4580]: I0112 13:07:27.272183 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:27 crc kubenswrapper[4580]: I0112 13:07:27.272219 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:27 crc kubenswrapper[4580]: I0112 13:07:27.272232 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:27 crc kubenswrapper[4580]: I0112 13:07:27.272246 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:27 crc kubenswrapper[4580]: I0112 13:07:27.272257 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:27Z","lastTransitionTime":"2026-01-12T13:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:27 crc kubenswrapper[4580]: I0112 13:07:27.281491 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:07:27 crc kubenswrapper[4580]: I0112 13:07:27.281548 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 12 13:07:27 crc kubenswrapper[4580]: E0112 13:07:27.281663 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 12 13:07:27 crc kubenswrapper[4580]: E0112 13:07:27.281747 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 12 13:07:27 crc kubenswrapper[4580]: I0112 13:07:27.374764 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:27 crc kubenswrapper[4580]: I0112 13:07:27.374812 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:27 crc kubenswrapper[4580]: I0112 13:07:27.374823 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:27 crc kubenswrapper[4580]: I0112 13:07:27.374838 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:27 crc kubenswrapper[4580]: I0112 13:07:27.374849 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:27Z","lastTransitionTime":"2026-01-12T13:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:27 crc kubenswrapper[4580]: I0112 13:07:27.477243 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:27 crc kubenswrapper[4580]: I0112 13:07:27.477294 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:27 crc kubenswrapper[4580]: I0112 13:07:27.477308 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:27 crc kubenswrapper[4580]: I0112 13:07:27.477326 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:27 crc kubenswrapper[4580]: I0112 13:07:27.477339 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:27Z","lastTransitionTime":"2026-01-12T13:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:27 crc kubenswrapper[4580]: I0112 13:07:27.579358 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:27 crc kubenswrapper[4580]: I0112 13:07:27.579397 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:27 crc kubenswrapper[4580]: I0112 13:07:27.579408 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:27 crc kubenswrapper[4580]: I0112 13:07:27.579421 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:27 crc kubenswrapper[4580]: I0112 13:07:27.579436 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:27Z","lastTransitionTime":"2026-01-12T13:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:27 crc kubenswrapper[4580]: I0112 13:07:27.681438 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:27 crc kubenswrapper[4580]: I0112 13:07:27.681475 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:27 crc kubenswrapper[4580]: I0112 13:07:27.681484 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:27 crc kubenswrapper[4580]: I0112 13:07:27.681498 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:27 crc kubenswrapper[4580]: I0112 13:07:27.681509 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:27Z","lastTransitionTime":"2026-01-12T13:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:27 crc kubenswrapper[4580]: I0112 13:07:27.783579 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:27 crc kubenswrapper[4580]: I0112 13:07:27.783627 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:27 crc kubenswrapper[4580]: I0112 13:07:27.783637 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:27 crc kubenswrapper[4580]: I0112 13:07:27.783651 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:27 crc kubenswrapper[4580]: I0112 13:07:27.783663 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:27Z","lastTransitionTime":"2026-01-12T13:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:27 crc kubenswrapper[4580]: I0112 13:07:27.886245 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:27 crc kubenswrapper[4580]: I0112 13:07:27.886288 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:27 crc kubenswrapper[4580]: I0112 13:07:27.886296 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:27 crc kubenswrapper[4580]: I0112 13:07:27.886311 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:27 crc kubenswrapper[4580]: I0112 13:07:27.886322 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:27Z","lastTransitionTime":"2026-01-12T13:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:27 crc kubenswrapper[4580]: I0112 13:07:27.987979 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:27 crc kubenswrapper[4580]: I0112 13:07:27.988015 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:27 crc kubenswrapper[4580]: I0112 13:07:27.988035 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:27 crc kubenswrapper[4580]: I0112 13:07:27.988065 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:27 crc kubenswrapper[4580]: I0112 13:07:27.988074 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:27Z","lastTransitionTime":"2026-01-12T13:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:28 crc kubenswrapper[4580]: I0112 13:07:28.089727 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:28 crc kubenswrapper[4580]: I0112 13:07:28.089763 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:28 crc kubenswrapper[4580]: I0112 13:07:28.089771 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:28 crc kubenswrapper[4580]: I0112 13:07:28.089783 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:28 crc kubenswrapper[4580]: I0112 13:07:28.089794 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:28Z","lastTransitionTime":"2026-01-12T13:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:28 crc kubenswrapper[4580]: I0112 13:07:28.192404 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:28 crc kubenswrapper[4580]: I0112 13:07:28.192445 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:28 crc kubenswrapper[4580]: I0112 13:07:28.192453 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:28 crc kubenswrapper[4580]: I0112 13:07:28.192466 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:28 crc kubenswrapper[4580]: I0112 13:07:28.192474 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:28Z","lastTransitionTime":"2026-01-12T13:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:28 crc kubenswrapper[4580]: I0112 13:07:28.281452 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 12 13:07:28 crc kubenswrapper[4580]: I0112 13:07:28.281458 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jw27h" Jan 12 13:07:28 crc kubenswrapper[4580]: E0112 13:07:28.281568 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 12 13:07:28 crc kubenswrapper[4580]: E0112 13:07:28.281675 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jw27h" podUID="5066d8fa-2cee-4764-a817-b819d3876638" Jan 12 13:07:28 crc kubenswrapper[4580]: I0112 13:07:28.294664 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:28 crc kubenswrapper[4580]: I0112 13:07:28.294692 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:28 crc kubenswrapper[4580]: I0112 13:07:28.294700 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:28 crc kubenswrapper[4580]: I0112 13:07:28.294713 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:28 crc kubenswrapper[4580]: I0112 13:07:28.294721 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:28Z","lastTransitionTime":"2026-01-12T13:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:28 crc kubenswrapper[4580]: I0112 13:07:28.399064 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:28 crc kubenswrapper[4580]: I0112 13:07:28.399120 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:28 crc kubenswrapper[4580]: I0112 13:07:28.399130 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:28 crc kubenswrapper[4580]: I0112 13:07:28.399143 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:28 crc kubenswrapper[4580]: I0112 13:07:28.399151 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:28Z","lastTransitionTime":"2026-01-12T13:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:28 crc kubenswrapper[4580]: I0112 13:07:28.500959 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:28 crc kubenswrapper[4580]: I0112 13:07:28.501011 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:28 crc kubenswrapper[4580]: I0112 13:07:28.501020 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:28 crc kubenswrapper[4580]: I0112 13:07:28.501032 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:28 crc kubenswrapper[4580]: I0112 13:07:28.501042 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:28Z","lastTransitionTime":"2026-01-12T13:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:28 crc kubenswrapper[4580]: I0112 13:07:28.603921 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:28 crc kubenswrapper[4580]: I0112 13:07:28.604008 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:28 crc kubenswrapper[4580]: I0112 13:07:28.604020 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:28 crc kubenswrapper[4580]: I0112 13:07:28.604048 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:28 crc kubenswrapper[4580]: I0112 13:07:28.604063 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:28Z","lastTransitionTime":"2026-01-12T13:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:28 crc kubenswrapper[4580]: I0112 13:07:28.706751 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:28 crc kubenswrapper[4580]: I0112 13:07:28.706783 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:28 crc kubenswrapper[4580]: I0112 13:07:28.706792 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:28 crc kubenswrapper[4580]: I0112 13:07:28.706804 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:28 crc kubenswrapper[4580]: I0112 13:07:28.706813 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:28Z","lastTransitionTime":"2026-01-12T13:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:28 crc kubenswrapper[4580]: I0112 13:07:28.809466 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:28 crc kubenswrapper[4580]: I0112 13:07:28.809511 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:28 crc kubenswrapper[4580]: I0112 13:07:28.809522 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:28 crc kubenswrapper[4580]: I0112 13:07:28.809540 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:28 crc kubenswrapper[4580]: I0112 13:07:28.809553 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:28Z","lastTransitionTime":"2026-01-12T13:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:28 crc kubenswrapper[4580]: I0112 13:07:28.813404 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5066d8fa-2cee-4764-a817-b819d3876638-metrics-certs\") pod \"network-metrics-daemon-jw27h\" (UID: \"5066d8fa-2cee-4764-a817-b819d3876638\") " pod="openshift-multus/network-metrics-daemon-jw27h" Jan 12 13:07:28 crc kubenswrapper[4580]: E0112 13:07:28.813614 4580 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 12 13:07:28 crc kubenswrapper[4580]: E0112 13:07:28.813713 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5066d8fa-2cee-4764-a817-b819d3876638-metrics-certs podName:5066d8fa-2cee-4764-a817-b819d3876638 nodeName:}" failed. No retries permitted until 2026-01-12 13:07:44.813689034 +0000 UTC m=+63.857907724 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5066d8fa-2cee-4764-a817-b819d3876638-metrics-certs") pod "network-metrics-daemon-jw27h" (UID: "5066d8fa-2cee-4764-a817-b819d3876638") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 12 13:07:28 crc kubenswrapper[4580]: I0112 13:07:28.912259 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:28 crc kubenswrapper[4580]: I0112 13:07:28.912296 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:28 crc kubenswrapper[4580]: I0112 13:07:28.912307 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:28 crc kubenswrapper[4580]: I0112 13:07:28.912323 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:28 crc kubenswrapper[4580]: I0112 13:07:28.912333 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:28Z","lastTransitionTime":"2026-01-12T13:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:29 crc kubenswrapper[4580]: I0112 13:07:29.014746 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:29 crc kubenswrapper[4580]: I0112 13:07:29.014801 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:29 crc kubenswrapper[4580]: I0112 13:07:29.014810 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:29 crc kubenswrapper[4580]: I0112 13:07:29.014828 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:29 crc kubenswrapper[4580]: I0112 13:07:29.014840 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:29Z","lastTransitionTime":"2026-01-12T13:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:29 crc kubenswrapper[4580]: I0112 13:07:29.116668 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:29 crc kubenswrapper[4580]: I0112 13:07:29.116703 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:29 crc kubenswrapper[4580]: I0112 13:07:29.116714 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:29 crc kubenswrapper[4580]: I0112 13:07:29.116726 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:29 crc kubenswrapper[4580]: I0112 13:07:29.116735 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:29Z","lastTransitionTime":"2026-01-12T13:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:29 crc kubenswrapper[4580]: I0112 13:07:29.218632 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:29 crc kubenswrapper[4580]: I0112 13:07:29.218682 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:29 crc kubenswrapper[4580]: I0112 13:07:29.218693 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:29 crc kubenswrapper[4580]: I0112 13:07:29.218708 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:29 crc kubenswrapper[4580]: I0112 13:07:29.218719 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:29Z","lastTransitionTime":"2026-01-12T13:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:29 crc kubenswrapper[4580]: I0112 13:07:29.281331 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:07:29 crc kubenswrapper[4580]: I0112 13:07:29.281331 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 12 13:07:29 crc kubenswrapper[4580]: E0112 13:07:29.281512 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 12 13:07:29 crc kubenswrapper[4580]: E0112 13:07:29.281591 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 12 13:07:29 crc kubenswrapper[4580]: I0112 13:07:29.320704 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:29 crc kubenswrapper[4580]: I0112 13:07:29.320732 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:29 crc kubenswrapper[4580]: I0112 13:07:29.320740 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:29 crc kubenswrapper[4580]: I0112 13:07:29.320752 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:29 crc kubenswrapper[4580]: I0112 13:07:29.320764 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:29Z","lastTransitionTime":"2026-01-12T13:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:29 crc kubenswrapper[4580]: I0112 13:07:29.421961 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:29 crc kubenswrapper[4580]: I0112 13:07:29.421988 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:29 crc kubenswrapper[4580]: I0112 13:07:29.421997 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:29 crc kubenswrapper[4580]: I0112 13:07:29.422007 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:29 crc kubenswrapper[4580]: I0112 13:07:29.422032 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:29Z","lastTransitionTime":"2026-01-12T13:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:29 crc kubenswrapper[4580]: I0112 13:07:29.524599 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:29 crc kubenswrapper[4580]: I0112 13:07:29.524656 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:29 crc kubenswrapper[4580]: I0112 13:07:29.524670 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:29 crc kubenswrapper[4580]: I0112 13:07:29.524684 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:29 crc kubenswrapper[4580]: I0112 13:07:29.524694 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:29Z","lastTransitionTime":"2026-01-12T13:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:29 crc kubenswrapper[4580]: I0112 13:07:29.626261 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:29 crc kubenswrapper[4580]: I0112 13:07:29.626301 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:29 crc kubenswrapper[4580]: I0112 13:07:29.626311 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:29 crc kubenswrapper[4580]: I0112 13:07:29.626333 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:29 crc kubenswrapper[4580]: I0112 13:07:29.626344 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:29Z","lastTransitionTime":"2026-01-12T13:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 12 13:07:29 crc kubenswrapper[4580]: I0112 13:07:29.727725 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 12 13:07:29 crc kubenswrapper[4580]: I0112 13:07:29.727761 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 12 13:07:29 crc kubenswrapper[4580]: I0112 13:07:29.727771 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 12 13:07:29 crc kubenswrapper[4580]: I0112 13:07:29.727783 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 12 13:07:29 crc kubenswrapper[4580]: I0112 13:07:29.727793 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:29Z","lastTransitionTime":"2026-01-12T13:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 12 13:07:29 crc kubenswrapper[4580]: I0112 13:07:29.829570 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 12 13:07:29 crc kubenswrapper[4580]: I0112 13:07:29.829621 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 12 13:07:29 crc kubenswrapper[4580]: I0112 13:07:29.829640 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 12 13:07:29 crc kubenswrapper[4580]: I0112 13:07:29.829653 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 12 13:07:29 crc kubenswrapper[4580]: I0112 13:07:29.829690 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:29Z","lastTransitionTime":"2026-01-12T13:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 12 13:07:29 crc kubenswrapper[4580]: I0112 13:07:29.930873 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 12 13:07:29 crc kubenswrapper[4580]: I0112 13:07:29.930908 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 12 13:07:29 crc kubenswrapper[4580]: I0112 13:07:29.930920 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 12 13:07:29 crc kubenswrapper[4580]: I0112 13:07:29.930933 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 12 13:07:29 crc kubenswrapper[4580]: I0112 13:07:29.930944 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:29Z","lastTransitionTime":"2026-01-12T13:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 12 13:07:30 crc kubenswrapper[4580]: I0112 13:07:30.032820 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 12 13:07:30 crc kubenswrapper[4580]: I0112 13:07:30.032852 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 12 13:07:30 crc kubenswrapper[4580]: I0112 13:07:30.032861 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 12 13:07:30 crc kubenswrapper[4580]: I0112 13:07:30.032871 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 12 13:07:30 crc kubenswrapper[4580]: I0112 13:07:30.032878 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:30Z","lastTransitionTime":"2026-01-12T13:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 12 13:07:30 crc kubenswrapper[4580]: I0112 13:07:30.134215 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 12 13:07:30 crc kubenswrapper[4580]: I0112 13:07:30.134246 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 12 13:07:30 crc kubenswrapper[4580]: I0112 13:07:30.134254 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 12 13:07:30 crc kubenswrapper[4580]: I0112 13:07:30.134266 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 12 13:07:30 crc kubenswrapper[4580]: I0112 13:07:30.134278 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:30Z","lastTransitionTime":"2026-01-12T13:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 12 13:07:30 crc kubenswrapper[4580]: I0112 13:07:30.235853 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 12 13:07:30 crc kubenswrapper[4580]: I0112 13:07:30.235883 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 12 13:07:30 crc kubenswrapper[4580]: I0112 13:07:30.235895 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 12 13:07:30 crc kubenswrapper[4580]: I0112 13:07:30.235907 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 12 13:07:30 crc kubenswrapper[4580]: I0112 13:07:30.235914 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:30Z","lastTransitionTime":"2026-01-12T13:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 12 13:07:30 crc kubenswrapper[4580]: I0112 13:07:30.281571 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jw27h"
Jan 12 13:07:30 crc kubenswrapper[4580]: I0112 13:07:30.281742 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 12 13:07:30 crc kubenswrapper[4580]: E0112 13:07:30.281856 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jw27h" podUID="5066d8fa-2cee-4764-a817-b819d3876638"
Jan 12 13:07:30 crc kubenswrapper[4580]: E0112 13:07:30.281994 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 12 13:07:30 crc kubenswrapper[4580]: I0112 13:07:30.337264 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 12 13:07:30 crc kubenswrapper[4580]: I0112 13:07:30.337300 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 12 13:07:30 crc kubenswrapper[4580]: I0112 13:07:30.337309 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 12 13:07:30 crc kubenswrapper[4580]: I0112 13:07:30.337321 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 12 13:07:30 crc kubenswrapper[4580]: I0112 13:07:30.337336 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:30Z","lastTransitionTime":"2026-01-12T13:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 12 13:07:30 crc kubenswrapper[4580]: I0112 13:07:30.439410 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 12 13:07:30 crc kubenswrapper[4580]: I0112 13:07:30.439453 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 12 13:07:30 crc kubenswrapper[4580]: I0112 13:07:30.439464 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 12 13:07:30 crc kubenswrapper[4580]: I0112 13:07:30.439486 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 12 13:07:30 crc kubenswrapper[4580]: I0112 13:07:30.439499 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:30Z","lastTransitionTime":"2026-01-12T13:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 12 13:07:30 crc kubenswrapper[4580]: I0112 13:07:30.541434 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 12 13:07:30 crc kubenswrapper[4580]: I0112 13:07:30.541487 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 12 13:07:30 crc kubenswrapper[4580]: I0112 13:07:30.541497 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 12 13:07:30 crc kubenswrapper[4580]: I0112 13:07:30.541512 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 12 13:07:30 crc kubenswrapper[4580]: I0112 13:07:30.541540 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:30Z","lastTransitionTime":"2026-01-12T13:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:30 crc kubenswrapper[4580]: I0112 13:07:30.568567 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 12 13:07:30 crc kubenswrapper[4580]: I0112 13:07:30.577968 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 12 13:07:30 crc kubenswrapper[4580]: I0112 13:07:30.581244 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a82c47afb3ec7afc7fa35ff0e1e85e288f9e1a908459024005a16c0c8f3b0050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\
"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:30Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:30 crc kubenswrapper[4580]: I0112 13:07:30.594422 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3accce5d840e81a67e212ff934059ad73525c6ff3c73ed6ab4c6e2289a4d7bf\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60b7e67369583f18d56633483204d326449c0f7456afe4b4fd1e7134eff438cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-hdz6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:30Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:30 crc kubenswrapper[4580]: I0112 13:07:30.605056 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2p6r8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2223aac-784e-4653-8939-fcbd18c70ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81fbec7b59dcc9c80a97b122e2b0e738fbbfb3eafca1bf9989fe743f28573191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-
additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1dc0fffc41810cdb9a5eeb53b19f6a23d70a8133c6e12b19df575f86a55d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1dc0fffc41810cdb9a5eeb53b19f6a23d70a8133c6e12b19df575f86a55d18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab60600011f08831d514dad04b97fb6b587736b18b55b1bff9a33143b9a92997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df
312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab60600011f08831d514dad04b97fb6b587736b18b55b1bff9a33143b9a92997\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2709a93c305db448fb509fbbdf606c297b26f1ae08e6b9b05933c155f59416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff2709a93c305db448fb509fbbdf606c297b26f1ae08e6b9b05933c155f59416\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"reason\\\":\\\"Compl
eted\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88f8708217fbcbf532b977d30ab903955722d04a00ba29ded44ce09610140e27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88f8708217fbcbf532b977d30ab903955722d04a00ba29ded44ce09610140e27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5844c48078cc7d6868f4ff81ac1a2bb878892529b11823ecabd49fad4aed60\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e5844c48078cc7d6868f4ff81ac1a2bb878892529b11823ecabd49fad4aed60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2e02e66890bca8171c7112c74521a43c3458f07890228426f04c2bdfad4599\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d2e02e66890bca8171c7112c74521a43c3458f07890228426f04c2bdfad4599\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\
"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2p6r8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:30Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:30 crc kubenswrapper[4580]: I0112 13:07:30.613189 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vmmdr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61051313-b754-4528-ade6-ffacbebafb8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a321f1ea1e9a558494aa66641fd251a100e0bdceddf5b2034bfa067c23555138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsss4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14a151ee487ef6c2e5141ec5a25b8b7e468c2
24b262fd09538db0e939b8cf95a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsss4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vmmdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:30Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:30 crc kubenswrapper[4580]: I0112 13:07:30.625765 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35b1ac8c-9d11-4c54-98ab-fa848030f05e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1088ffa1a5bf02ca8606518a6f8c9cbeba544651dfafbb34e8860c2a12ffc1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c98177e2b081aadb6fd03620e308bb5d9ff403f1498eb875f7cf6d836dd23aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cea93cd026e7a60c22105833d2c3ada192fc16d45f46e5c9ce2652e94f92fab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c811167080fb15b5c19b8b57f76f4b8c5b2ed87d43d1b320ad024683ab58b65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14411e27d1e7de0627ca0d6f0ecbca70787ef8e9311ff3ffbb923da942e47955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://200ede5d7f69bb74d8e7d1b5081850d73057f7aef07049cab7a4dd1382de0cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://200ede5d7f69bb74d8e7d1b5081850d73057f7aef07049cab7a4dd1382de0cfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04470dc724661e24dc43e182f9c5dc106623e8dfb269280e6dc0fc0710f6a4a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04470dc724661e24dc43e182f9c5dc106623e8dfb269280e6dc0fc0710f6a4a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da31efcbced890b1046b1f058c1c00e4d2788162749c1da32d87c8b59360aa58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da31efcbced890b1046b1f058c1c00e4d2788162749c1da32d87c8b59360aa58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:30Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:30 crc kubenswrapper[4580]: I0112 13:07:30.634546 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fb543f1489aa79642944188788308013ed9b6bacb720a3ee689b376cbc6a33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:30Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:30 crc kubenswrapper[4580]: I0112 13:07:30.642769 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e120eaa6bd8e36a0bc509f7877252fbf4b0cebb89222dd193f75502e472fa7af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f05ca3c8a1887284f1162c44d1b917ad955eb8d77b816e830caddffdf0430383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:30Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:30 crc kubenswrapper[4580]: I0112 13:07:30.643740 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:30 crc kubenswrapper[4580]: I0112 13:07:30.643789 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:30 crc kubenswrapper[4580]: I0112 13:07:30.643801 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:30 crc kubenswrapper[4580]: I0112 13:07:30.643820 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:30 crc kubenswrapper[4580]: I0112 13:07:30.643832 4580 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:30Z","lastTransitionTime":"2026-01-12T13:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:30 crc kubenswrapper[4580]: I0112 13:07:30.652141 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:30Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:30 crc kubenswrapper[4580]: I0112 13:07:30.659981 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:30Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:30 crc kubenswrapper[4580]: I0112 13:07:30.666682 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8ch98" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f20fb33-a98a-4b04-81b9-5ea16ae9f57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://643e92b14688d35a567c7351e9231a8855ec7d9704cc97466c2d901c4525108a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nbmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8ch98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:30Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:30 crc kubenswrapper[4580]: I0112 13:07:30.673885 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jw27h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5066d8fa-2cee-4764-a817-b819d3876638\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbqm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbqm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:07:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jw27h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:30Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:30 crc 
kubenswrapper[4580]: I0112 13:07:30.680445 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thp2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0adac83c-1303-404f-85a1-c7b477da2226\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a871f86fe29e275615cf2f7f0130151c5ed56d410a0f18f5267adf08be33f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c
fhs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:07:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thp2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:30Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:30 crc kubenswrapper[4580]: I0112 13:07:30.689149 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9730289-8e50-4a9a-b474-db6c268d5a30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2262814ad3b77a7aecef6dc39226a540c7d7839576606e11c4765c858e81834\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d3472
0243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80ca0769a1431fd4c134322feb11db7e54dd85e8f6b18a0ea43da48fe9b05005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c620e4b41d6183e427d9b95acc0e6e20f24998d210c706d93d0e8b08def41b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05c5ad3ad752dde0d33f89e89540f22790aa2905185c704d407fe605655c8e28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0c7ac25add51f8a9be790b9d47bc39155d83c4da0f3b241897d1395686feb68\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-12T13:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0112 13:06:53.362253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0112 13:06:53.363131 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2861103618/tls.crt::/tmp/serving-cert-2861103618/tls.key\\\\\\\"\\\\nI0112 13:06:58.635258 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0112 13:06:58.636943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0112 13:06:58.636960 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0112 13:06:58.636978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0112 13:06:58.636983 1 maxinflight.go:120] \\\\\\\"Set 
denominator for mutating requests\\\\\\\" limit=200\\\\nI0112 13:06:58.642885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0112 13:06:58.642904 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0112 13:06:58.642919 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642925 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0112 13:06:58.642931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0112 13:06:58.642934 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0112 13:06:58.642937 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0112 13:06:58.645379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eeac0b697ceba82e51d043f12dcf4c6f0028990416b1ee40c5181232d962192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:30Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:30 crc kubenswrapper[4580]: I0112 13:07:30.696943 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14cae238-29c1-4657-b3f0-6a834484f48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b1b813e14b2e613be951c247a67eb9b5b29604c639ec2c8a26c652911e0a342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad
6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc8b55ba464a72a72e6361e6847c4e8c8b27f317e8eba5d95923fbaf62589880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://259d2e205fd4a46e432a91b0e09646a58b44d6da55b06c6d4ac87010c85babc4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,
\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00bb60e0955774504f186a916e89495432d2ea6a6b01cadbbe0cc6871383a030\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:30Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:30 crc kubenswrapper[4580]: I0112 13:07:30.704540 4580 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:30Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:30 crc kubenswrapper[4580]: I0112 13:07:30.712840 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nnz5s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f39bcc-5a25-4746-988b-2251fd1be8c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56aa8b2b49ab1c35203cc85f8e7cd333d538b5739be0e36db8a3fa8263c079ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m82m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nnz5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:30Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:30 crc kubenswrapper[4580]: I0112 13:07:30.725256 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd4e0810-eddb-47f5-a7dc-beed7b545112\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fac5585e690495e9f154b99e6a05f94dd617a57d0826867644b56df00697b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57fdd89443f292661ae2a8f73016f4a7f2889c08ffebd55d67ada2590b4344db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc26f2fe9c241fc3ede61426abd140792056fe45e03192531431303ac9669685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://381c313bb77deef21772fc32104aec4c0325e3493c641e2bf615bd897e58c71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ac8df759fbebae467ffd8c178ca19221cefd5f3c1aa999cd23e5d1e53a6187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b37c3b2535deee762ef305825de0a884e9088e57a34910ad2fcdaeb2d49d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4221a1e3039d381cba4b4412d20dc0127ca6ec3794a5c1b61996a339e880d645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4221a1e3039d381cba4b4412d20dc0127ca6ec3794a5c1b61996a339e880d645\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-12T13:07:21Z\\\",\\\"message\\\":\\\"Ds:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/package-server-manager-metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, 
Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.110\\\\\\\", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0112 13:07:21.988162 6218 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for namespace Informer during admin network policy controller initialization, handler {0x1fcbf20 0x1fcbc00 0x1fcbba0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:21Z i\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hn77p_openshift-ovn-kubernetes(fd4e0810-eddb-47f5-a7dc-beed7b545112)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00ff7f6b5ad3d1798e88f127c9bf71095fcbdfcf8f4338afa385717f1564ebf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ea8f8c492e0c30d17
1b9b05aa00966402c80f973de31557a1e13e16eb0c447b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hn77p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:30Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:30 crc kubenswrapper[4580]: I0112 13:07:30.745959 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:30 crc kubenswrapper[4580]: I0112 13:07:30.746001 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:30 crc kubenswrapper[4580]: I0112 13:07:30.746014 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:30 crc kubenswrapper[4580]: I0112 13:07:30.746031 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:30 crc kubenswrapper[4580]: I0112 13:07:30.746042 4580 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:30Z","lastTransitionTime":"2026-01-12T13:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:30 crc kubenswrapper[4580]: I0112 13:07:30.848090 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:30 crc kubenswrapper[4580]: I0112 13:07:30.848143 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:30 crc kubenswrapper[4580]: I0112 13:07:30.848155 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:30 crc kubenswrapper[4580]: I0112 13:07:30.848171 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:30 crc kubenswrapper[4580]: I0112 13:07:30.848183 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:30Z","lastTransitionTime":"2026-01-12T13:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:30 crc kubenswrapper[4580]: I0112 13:07:30.950034 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:30 crc kubenswrapper[4580]: I0112 13:07:30.950087 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:30 crc kubenswrapper[4580]: I0112 13:07:30.950112 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:30 crc kubenswrapper[4580]: I0112 13:07:30.950126 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:30 crc kubenswrapper[4580]: I0112 13:07:30.950136 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:30Z","lastTransitionTime":"2026-01-12T13:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:31 crc kubenswrapper[4580]: I0112 13:07:31.030021 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 12 13:07:31 crc kubenswrapper[4580]: E0112 13:07:31.030095 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-12 13:08:03.030072254 +0000 UTC m=+82.074290964 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:07:31 crc kubenswrapper[4580]: I0112 13:07:31.030315 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 12 13:07:31 crc kubenswrapper[4580]: I0112 13:07:31.030366 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:07:31 crc kubenswrapper[4580]: I0112 13:07:31.030400 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:07:31 crc kubenswrapper[4580]: I0112 13:07:31.030432 4580 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 12 13:07:31 crc kubenswrapper[4580]: E0112 13:07:31.030501 4580 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 12 13:07:31 crc kubenswrapper[4580]: E0112 13:07:31.030525 4580 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 12 13:07:31 crc kubenswrapper[4580]: E0112 13:07:31.030539 4580 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 12 13:07:31 crc kubenswrapper[4580]: E0112 13:07:31.030576 4580 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 12 13:07:31 crc kubenswrapper[4580]: E0112 13:07:31.030587 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-12 13:08:03.030571054 +0000 UTC m=+82.074789744 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 12 13:07:31 crc kubenswrapper[4580]: E0112 13:07:31.030592 4580 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 12 13:07:31 crc kubenswrapper[4580]: E0112 13:07:31.030605 4580 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 12 13:07:31 crc kubenswrapper[4580]: E0112 13:07:31.030617 4580 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 12 13:07:31 crc kubenswrapper[4580]: E0112 13:07:31.030627 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-12 13:08:03.03062186 +0000 UTC m=+82.074840550 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 12 13:07:31 crc kubenswrapper[4580]: E0112 13:07:31.030659 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-12 13:08:03.030651756 +0000 UTC m=+82.074870446 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 12 13:07:31 crc kubenswrapper[4580]: E0112 13:07:31.030701 4580 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 12 13:07:31 crc kubenswrapper[4580]: E0112 13:07:31.030723 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-12 13:08:03.030717179 +0000 UTC m=+82.074935869 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 12 13:07:31 crc kubenswrapper[4580]: I0112 13:07:31.051857 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:31 crc kubenswrapper[4580]: I0112 13:07:31.051906 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:31 crc kubenswrapper[4580]: I0112 13:07:31.051918 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:31 crc kubenswrapper[4580]: I0112 13:07:31.051931 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:31 crc kubenswrapper[4580]: I0112 13:07:31.051944 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:31Z","lastTransitionTime":"2026-01-12T13:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:31 crc kubenswrapper[4580]: I0112 13:07:31.154153 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:31 crc kubenswrapper[4580]: I0112 13:07:31.154181 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:31 crc kubenswrapper[4580]: I0112 13:07:31.154189 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:31 crc kubenswrapper[4580]: I0112 13:07:31.154203 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:31 crc kubenswrapper[4580]: I0112 13:07:31.154213 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:31Z","lastTransitionTime":"2026-01-12T13:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:31 crc kubenswrapper[4580]: I0112 13:07:31.255937 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:31 crc kubenswrapper[4580]: I0112 13:07:31.255981 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:31 crc kubenswrapper[4580]: I0112 13:07:31.255991 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:31 crc kubenswrapper[4580]: I0112 13:07:31.256003 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:31 crc kubenswrapper[4580]: I0112 13:07:31.256011 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:31Z","lastTransitionTime":"2026-01-12T13:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:31 crc kubenswrapper[4580]: I0112 13:07:31.281611 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:07:31 crc kubenswrapper[4580]: I0112 13:07:31.281688 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 12 13:07:31 crc kubenswrapper[4580]: E0112 13:07:31.281721 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 12 13:07:31 crc kubenswrapper[4580]: E0112 13:07:31.281778 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 12 13:07:31 crc kubenswrapper[4580]: I0112 13:07:31.291592 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9730289-8e50-4a9a-b474-db6c268d5a30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2262814ad3b77a7aecef6dc39226a540c7d7839576606e11c4765c858e81834\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06
bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80ca0769a1431fd4c134322feb11db7e54dd85e8f6b18a0ea43da48fe9b05005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c620e4b41d6183e427d9b95acc0e6e20f24998d210c706d93d0e8b08def41b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":
\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05c5ad3ad752dde0d33f89e89540f22790aa2905185c704d407fe605655c8e28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0c7ac25add51f8a9be790b9d47bc39155d83c4da0f3b241897d1395686feb68\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-12T13:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0112 13:06:53.362253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0112 13:06:53.363131 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2861103618/tls.crt::/tmp/serving-cert-2861103618/tls.key\\\\\\\"\\\\nI0112 13:06:58.635258 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0112 13:06:58.636943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0112 13:06:58.636960 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0112 13:06:58.636978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0112 13:06:58.636983 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0112 13:06:58.642885 1 secure_serving.go:57] Forcing use of http/1.1 
only\\\\nI0112 13:06:58.642904 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0112 13:06:58.642919 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642925 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0112 13:06:58.642931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0112 13:06:58.642934 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0112 13:06:58.642937 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0112 13:06:58.645379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eeac0b697ceba82e51d043f12dcf4c6f0028990416b1ee40c5181232d962192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"
state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:31Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:31 crc kubenswrapper[4580]: I0112 13:07:31.300562 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14cae238-29c1-4657-b3f0-6a834484f48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b1b813e14b2e613be951c247a67eb9b5b29604c639ec2c8a26c652911e0a342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc8b55ba464a72a72e6361e6847c4e8c8b27f317e8eba5d95923fbaf62589880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://259d2e205fd4a46e432a91b0e09646a58b44d6da55b06c6d4ac87010c85babc4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00bb60e0955774504f186a916e89495432d2ea6a6b01cadbbe0cc6871383a030\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:31Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:31 crc kubenswrapper[4580]: I0112 13:07:31.309185 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:31Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:31 crc kubenswrapper[4580]: I0112 13:07:31.320366 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nnz5s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f39bcc-5a25-4746-988b-2251fd1be8c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56aa8b2b49ab1c35203cc85f8e7cd333d538b5739be0e36db8a3fa8263c079ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m82m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nnz5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:31Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:31 crc kubenswrapper[4580]: I0112 13:07:31.333131 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd4e0810-eddb-47f5-a7dc-beed7b545112\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fac5585e690495e9f154b99e6a05f94dd617a57d0826867644b56df00697b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57fdd89443f292661ae2a8f73016f4a7f2889c08ffebd55d67ada2590b4344db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc26f2fe9c241fc3ede61426abd140792056fe45e03192531431303ac9669685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://381c313bb77deef21772fc32104aec4c0325e3493c641e2bf615bd897e58c71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ac8df759fbebae467ffd8c178ca19221cefd5f3c1aa999cd23e5d1e53a6187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b37c3b2535deee762ef305825de0a884e9088e57a34910ad2fcdaeb2d49d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4221a1e3039d381cba4b4412d20dc0127ca6ec3794a5c1b61996a339e880d645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4221a1e3039d381cba4b4412d20dc0127ca6ec3794a5c1b61996a339e880d645\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-12T13:07:21Z\\\",\\\"message\\\":\\\"Ds:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/package-server-manager-metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, 
Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.110\\\\\\\", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0112 13:07:21.988162 6218 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for namespace Informer during admin network policy controller initialization, handler {0x1fcbf20 0x1fcbc00 0x1fcbba0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:21Z i\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hn77p_openshift-ovn-kubernetes(fd4e0810-eddb-47f5-a7dc-beed7b545112)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00ff7f6b5ad3d1798e88f127c9bf71095fcbdfcf8f4338afa385717f1564ebf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ea8f8c492e0c30d17
1b9b05aa00966402c80f973de31557a1e13e16eb0c447b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hn77p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:31Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:31 crc kubenswrapper[4580]: I0112 13:07:31.339661 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thp2h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0adac83c-1303-404f-85a1-c7b477da2226\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a871f86fe29e275615cf2f7f0130151c5ed56d410a0f18f5267adf08be33f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfhs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:07:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thp2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:31Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:31 crc kubenswrapper[4580]: I0112 13:07:31.349979 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a82c47afb3ec7afc7fa35ff0e1e85e288f9e1a908459024005a16c0c8f3b0050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T1
3:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:31Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:31 crc kubenswrapper[4580]: I0112 13:07:31.357241 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:31 crc kubenswrapper[4580]: I0112 13:07:31.357263 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:31 crc kubenswrapper[4580]: I0112 13:07:31.357272 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:31 crc kubenswrapper[4580]: I0112 13:07:31.357285 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:31 crc kubenswrapper[4580]: I0112 13:07:31.357295 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:31Z","lastTransitionTime":"2026-01-12T13:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:31 crc kubenswrapper[4580]: I0112 13:07:31.361118 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3accce5d840e81a67e212ff934059ad73525c6ff3c73ed6ab4c6e2289a4d7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60b7e67369583f18d56633483204d326449c0f7456afe4b4fd1e7134eff438cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hdz6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:31Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:31 crc kubenswrapper[4580]: I0112 13:07:31.371180 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2p6r8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2223aac-784e-4653-8939-fcbd18c70ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81fbec7b59dcc9c80a97b122e2b0e738fbbfb3eafca1bf9989fe743f28573191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1dc0fffc41810cdb9a5eeb53b19f6a23d70a8133c6e12b19df575f86a55d18\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1dc0fffc41810cdb9a5eeb53b19f6a23d70a8133c6e12b19df575f86a55d18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab60600011f08831d514dad04b97fb6b587736b18b55b1bff9a33143b9a92997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab60600011f08831d514dad04b97fb6b587736b18b55b1bff9a33143b9a92997\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2709a93c305db448fb509fbbdf606c297b26f1ae08e6b9b05933c155f59416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff2709a93c305db448fb509fbbdf606c297b26f1ae08e6b9b05933c155f59416\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88f87
08217fbcbf532b977d30ab903955722d04a00ba29ded44ce09610140e27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88f8708217fbcbf532b977d30ab903955722d04a00ba29ded44ce09610140e27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5844c48078cc7d6868f4ff81ac1a2bb878892529b11823ecabd49fad4aed60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e5844c48078cc7d6868f4ff81ac1a2bb878892529b11823ecabd49fad4aed60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:04Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2e02e66890bca8171c7112c74521a43c3458f07890228426f04c2bdfad4599\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d2e02e66890bca8171c7112c74521a43c3458f07890228426f04c2bdfad4599\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2p6r8\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:31Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:31 crc kubenswrapper[4580]: I0112 13:07:31.382405 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vmmdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61051313-b754-4528-ade6-ffacbebafb8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a321f1ea1e9a558494aa66641fd251a100e0bdceddf5b2034bfa067c23555138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsss4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14a151ee487ef6c2e5141ec5a25b8b7e468c224b262fd09538db0e939b8cf95a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsss4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vmmdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-12T13:07:31Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:31 crc kubenswrapper[4580]: I0112 13:07:31.398937 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35b1ac8c-9d11-4c54-98ab-fa848030f05e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1088ffa1a5bf02ca8606518a6f8c9cbeba544651dfafbb34e8860c2a12ffc1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c98177e2b081aadb6fd03620e308bb5d9ff403f1498eb875f7cf6d836dd23aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cea93cd026e7a60c22105833d2c3ada192fc16d45f46e5c9ce2652e94f92fab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c811167080fb15b5c19b8b57f76f4b8c5b2ed87d43d1b320ad024683ab58b65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14411e27d1e7de0627ca0d6f0ecbca70787ef8e9311ff3ffbb923da942e47955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://200ede5d7f69bb74d8e7d1b5081850d73057f7aef07049cab7a4dd1382de0cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setu
p\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://200ede5d7f69bb74d8e7d1b5081850d73057f7aef07049cab7a4dd1382de0cfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04470dc724661e24dc43e182f9c5dc106623e8dfb269280e6dc0fc0710f6a4a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04470dc724661e24dc43e182f9c5dc106623e8dfb269280e6dc0fc0710f6a4a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da31efcbced890b1046b1f058c1c00e4d2788162749c1da32d87c8b59360aa58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da31efcbced890b1046b1f058c1c00e4d2788162749c1da32d87c8b59360aa58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\
"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:31Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:31 crc kubenswrapper[4580]: I0112 13:07:31.407876 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fb543f1489aa79642944188788308013ed9b6bacb720a3ee689b376cbc6a33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:31Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:31 crc kubenswrapper[4580]: I0112 13:07:31.417798 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e120eaa6bd8e36a0bc509f7877252fbf4b0cebb89222dd193f75502e472fa7af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://f05ca3c8a1887284f1162c44d1b917ad955eb8d77b816e830caddffdf0430383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:31Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:31 crc kubenswrapper[4580]: I0112 13:07:31.427257 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:31Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:31 crc kubenswrapper[4580]: I0112 13:07:31.436407 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:31Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:31 crc kubenswrapper[4580]: I0112 13:07:31.446748 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5b34839-7efb-4fe1-ab7f-7d5b1edbf09a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5f5c5f418e2ffb24aff3f3056f26725003da15b14ea3f503039403320803a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afcaaf941d0811f34d5bb6d98ebedbeca17d15c8ce48a5604758570aa393d700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c1f9fb31f42b2e87cf98227241e7c66b834d473dc625999d5cf28df80b5076b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://485ad5c9f5a1a0f3219b48e7c2b703985f426f1e068b12812f208e5843a98224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://485ad5c9f5a1a0f3219b48e7c2b703985f426f1e068b12812f208e5843a98224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:31Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:31 crc kubenswrapper[4580]: I0112 13:07:31.454230 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8ch98" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f20fb33-a98a-4b04-81b9-5ea16ae9f57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://643e92b1
4688d35a567c7351e9231a8855ec7d9704cc97466c2d901c4525108a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nbmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8ch98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:31Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:31 crc kubenswrapper[4580]: I0112 13:07:31.459642 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:31 crc kubenswrapper[4580]: I0112 13:07:31.459686 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:31 crc kubenswrapper[4580]: I0112 13:07:31.459699 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:31 crc 
kubenswrapper[4580]: I0112 13:07:31.459718 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:31 crc kubenswrapper[4580]: I0112 13:07:31.459731 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:31Z","lastTransitionTime":"2026-01-12T13:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:31 crc kubenswrapper[4580]: I0112 13:07:31.461651 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jw27h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5066d8fa-2cee-4764-a817-b819d3876638\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbqm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbqm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:07:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jw27h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:31Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:31 crc 
kubenswrapper[4580]: I0112 13:07:31.562828 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:31 crc kubenswrapper[4580]: I0112 13:07:31.562871 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:31 crc kubenswrapper[4580]: I0112 13:07:31.562882 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:31 crc kubenswrapper[4580]: I0112 13:07:31.562901 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:31 crc kubenswrapper[4580]: I0112 13:07:31.562913 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:31Z","lastTransitionTime":"2026-01-12T13:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:31 crc kubenswrapper[4580]: I0112 13:07:31.665017 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:31 crc kubenswrapper[4580]: I0112 13:07:31.665056 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:31 crc kubenswrapper[4580]: I0112 13:07:31.665066 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:31 crc kubenswrapper[4580]: I0112 13:07:31.665097 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:31 crc kubenswrapper[4580]: I0112 13:07:31.665122 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:31Z","lastTransitionTime":"2026-01-12T13:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:31 crc kubenswrapper[4580]: I0112 13:07:31.767019 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:31 crc kubenswrapper[4580]: I0112 13:07:31.767368 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:31 crc kubenswrapper[4580]: I0112 13:07:31.767447 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:31 crc kubenswrapper[4580]: I0112 13:07:31.767567 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:31 crc kubenswrapper[4580]: I0112 13:07:31.767645 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:31Z","lastTransitionTime":"2026-01-12T13:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:31 crc kubenswrapper[4580]: I0112 13:07:31.869885 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:31 crc kubenswrapper[4580]: I0112 13:07:31.870029 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:31 crc kubenswrapper[4580]: I0112 13:07:31.870124 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:31 crc kubenswrapper[4580]: I0112 13:07:31.870190 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:31 crc kubenswrapper[4580]: I0112 13:07:31.870242 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:31Z","lastTransitionTime":"2026-01-12T13:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:31 crc kubenswrapper[4580]: I0112 13:07:31.972577 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:31 crc kubenswrapper[4580]: I0112 13:07:31.972627 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:31 crc kubenswrapper[4580]: I0112 13:07:31.972656 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:31 crc kubenswrapper[4580]: I0112 13:07:31.972674 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:31 crc kubenswrapper[4580]: I0112 13:07:31.972688 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:31Z","lastTransitionTime":"2026-01-12T13:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:32 crc kubenswrapper[4580]: I0112 13:07:32.074127 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:32 crc kubenswrapper[4580]: I0112 13:07:32.074176 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:32 crc kubenswrapper[4580]: I0112 13:07:32.074186 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:32 crc kubenswrapper[4580]: I0112 13:07:32.074211 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:32 crc kubenswrapper[4580]: I0112 13:07:32.074218 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:32Z","lastTransitionTime":"2026-01-12T13:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:32 crc kubenswrapper[4580]: I0112 13:07:32.176493 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:32 crc kubenswrapper[4580]: I0112 13:07:32.176525 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:32 crc kubenswrapper[4580]: I0112 13:07:32.176535 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:32 crc kubenswrapper[4580]: I0112 13:07:32.176547 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:32 crc kubenswrapper[4580]: I0112 13:07:32.176559 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:32Z","lastTransitionTime":"2026-01-12T13:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:32 crc kubenswrapper[4580]: I0112 13:07:32.278218 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:32 crc kubenswrapper[4580]: I0112 13:07:32.278247 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:32 crc kubenswrapper[4580]: I0112 13:07:32.278257 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:32 crc kubenswrapper[4580]: I0112 13:07:32.278268 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:32 crc kubenswrapper[4580]: I0112 13:07:32.278276 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:32Z","lastTransitionTime":"2026-01-12T13:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:32 crc kubenswrapper[4580]: I0112 13:07:32.281408 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jw27h" Jan 12 13:07:32 crc kubenswrapper[4580]: E0112 13:07:32.281509 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jw27h" podUID="5066d8fa-2cee-4764-a817-b819d3876638" Jan 12 13:07:32 crc kubenswrapper[4580]: I0112 13:07:32.281570 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 12 13:07:32 crc kubenswrapper[4580]: E0112 13:07:32.281715 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 12 13:07:32 crc kubenswrapper[4580]: I0112 13:07:32.356711 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:32 crc kubenswrapper[4580]: I0112 13:07:32.356746 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:32 crc kubenswrapper[4580]: I0112 13:07:32.356754 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:32 crc kubenswrapper[4580]: I0112 13:07:32.356765 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:32 crc kubenswrapper[4580]: I0112 13:07:32.356774 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:32Z","lastTransitionTime":"2026-01-12T13:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:32 crc kubenswrapper[4580]: E0112 13:07:32.366681 4580 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b4cb507-f154-474c-bea1-057456e7be91\\\",\\\"systemUUID\\\":\\\"f50d9485-f990-498d-a5ee-4bb4dd1663df\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:32Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:32 crc kubenswrapper[4580]: I0112 13:07:32.369617 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:32 crc kubenswrapper[4580]: I0112 13:07:32.369648 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:32 crc kubenswrapper[4580]: I0112 13:07:32.369657 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:32 crc kubenswrapper[4580]: I0112 13:07:32.369666 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:32 crc kubenswrapper[4580]: I0112 13:07:32.369674 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:32Z","lastTransitionTime":"2026-01-12T13:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:32 crc kubenswrapper[4580]: E0112 13:07:32.379304 4580 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b4cb507-f154-474c-bea1-057456e7be91\\\",\\\"systemUUID\\\":\\\"f50d9485-f990-498d-a5ee-4bb4dd1663df\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:32Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:32 crc kubenswrapper[4580]: I0112 13:07:32.381699 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:32 crc kubenswrapper[4580]: I0112 13:07:32.381723 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:32 crc kubenswrapper[4580]: I0112 13:07:32.381732 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:32 crc kubenswrapper[4580]: I0112 13:07:32.381746 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:32 crc kubenswrapper[4580]: I0112 13:07:32.381755 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:32Z","lastTransitionTime":"2026-01-12T13:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:32 crc kubenswrapper[4580]: E0112 13:07:32.390441 4580 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b4cb507-f154-474c-bea1-057456e7be91\\\",\\\"systemUUID\\\":\\\"f50d9485-f990-498d-a5ee-4bb4dd1663df\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:32Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:32 crc kubenswrapper[4580]: I0112 13:07:32.393047 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:32 crc kubenswrapper[4580]: I0112 13:07:32.393170 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:32 crc kubenswrapper[4580]: I0112 13:07:32.393185 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:32 crc kubenswrapper[4580]: I0112 13:07:32.393212 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:32 crc kubenswrapper[4580]: I0112 13:07:32.393228 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:32Z","lastTransitionTime":"2026-01-12T13:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:32 crc kubenswrapper[4580]: E0112 13:07:32.405357 4580 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b4cb507-f154-474c-bea1-057456e7be91\\\",\\\"systemUUID\\\":\\\"f50d9485-f990-498d-a5ee-4bb4dd1663df\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:32Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:32 crc kubenswrapper[4580]: I0112 13:07:32.408018 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:32 crc kubenswrapper[4580]: I0112 13:07:32.408047 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:32 crc kubenswrapper[4580]: I0112 13:07:32.408055 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:32 crc kubenswrapper[4580]: I0112 13:07:32.408068 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:32 crc kubenswrapper[4580]: I0112 13:07:32.408077 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:32Z","lastTransitionTime":"2026-01-12T13:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:32 crc kubenswrapper[4580]: E0112 13:07:32.416384 4580 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b4cb507-f154-474c-bea1-057456e7be91\\\",\\\"systemUUID\\\":\\\"f50d9485-f990-498d-a5ee-4bb4dd1663df\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:32Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:32 crc kubenswrapper[4580]: E0112 13:07:32.416489 4580 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 12 13:07:32 crc kubenswrapper[4580]: I0112 13:07:32.417673 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:32 crc kubenswrapper[4580]: I0112 13:07:32.417703 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:32 crc kubenswrapper[4580]: I0112 13:07:32.417712 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:32 crc kubenswrapper[4580]: I0112 13:07:32.417723 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:32 crc kubenswrapper[4580]: I0112 13:07:32.417732 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:32Z","lastTransitionTime":"2026-01-12T13:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:32 crc kubenswrapper[4580]: I0112 13:07:32.519528 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:32 crc kubenswrapper[4580]: I0112 13:07:32.519583 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:32 crc kubenswrapper[4580]: I0112 13:07:32.519594 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:32 crc kubenswrapper[4580]: I0112 13:07:32.519616 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:32 crc kubenswrapper[4580]: I0112 13:07:32.519629 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:32Z","lastTransitionTime":"2026-01-12T13:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:32 crc kubenswrapper[4580]: I0112 13:07:32.621877 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:32 crc kubenswrapper[4580]: I0112 13:07:32.621915 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:32 crc kubenswrapper[4580]: I0112 13:07:32.621925 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:32 crc kubenswrapper[4580]: I0112 13:07:32.621941 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:32 crc kubenswrapper[4580]: I0112 13:07:32.621953 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:32Z","lastTransitionTime":"2026-01-12T13:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:32 crc kubenswrapper[4580]: I0112 13:07:32.723674 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:32 crc kubenswrapper[4580]: I0112 13:07:32.723714 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:32 crc kubenswrapper[4580]: I0112 13:07:32.723724 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:32 crc kubenswrapper[4580]: I0112 13:07:32.723736 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:32 crc kubenswrapper[4580]: I0112 13:07:32.723746 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:32Z","lastTransitionTime":"2026-01-12T13:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:32 crc kubenswrapper[4580]: I0112 13:07:32.825376 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:32 crc kubenswrapper[4580]: I0112 13:07:32.825415 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:32 crc kubenswrapper[4580]: I0112 13:07:32.825428 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:32 crc kubenswrapper[4580]: I0112 13:07:32.825445 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:32 crc kubenswrapper[4580]: I0112 13:07:32.825458 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:32Z","lastTransitionTime":"2026-01-12T13:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:32 crc kubenswrapper[4580]: I0112 13:07:32.927265 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:32 crc kubenswrapper[4580]: I0112 13:07:32.927303 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:32 crc kubenswrapper[4580]: I0112 13:07:32.927314 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:32 crc kubenswrapper[4580]: I0112 13:07:32.927330 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:32 crc kubenswrapper[4580]: I0112 13:07:32.927340 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:32Z","lastTransitionTime":"2026-01-12T13:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:33 crc kubenswrapper[4580]: I0112 13:07:33.029357 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:33 crc kubenswrapper[4580]: I0112 13:07:33.029393 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:33 crc kubenswrapper[4580]: I0112 13:07:33.029407 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:33 crc kubenswrapper[4580]: I0112 13:07:33.029420 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:33 crc kubenswrapper[4580]: I0112 13:07:33.029431 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:33Z","lastTransitionTime":"2026-01-12T13:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:33 crc kubenswrapper[4580]: I0112 13:07:33.131181 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:33 crc kubenswrapper[4580]: I0112 13:07:33.131353 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:33 crc kubenswrapper[4580]: I0112 13:07:33.131426 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:33 crc kubenswrapper[4580]: I0112 13:07:33.131504 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:33 crc kubenswrapper[4580]: I0112 13:07:33.131563 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:33Z","lastTransitionTime":"2026-01-12T13:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:33 crc kubenswrapper[4580]: I0112 13:07:33.233815 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:33 crc kubenswrapper[4580]: I0112 13:07:33.233854 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:33 crc kubenswrapper[4580]: I0112 13:07:33.233864 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:33 crc kubenswrapper[4580]: I0112 13:07:33.233879 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:33 crc kubenswrapper[4580]: I0112 13:07:33.233888 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:33Z","lastTransitionTime":"2026-01-12T13:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:33 crc kubenswrapper[4580]: I0112 13:07:33.281513 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:07:33 crc kubenswrapper[4580]: E0112 13:07:33.281661 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 12 13:07:33 crc kubenswrapper[4580]: I0112 13:07:33.281789 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 12 13:07:33 crc kubenswrapper[4580]: E0112 13:07:33.281990 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 12 13:07:33 crc kubenswrapper[4580]: I0112 13:07:33.335665 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:33 crc kubenswrapper[4580]: I0112 13:07:33.335698 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:33 crc kubenswrapper[4580]: I0112 13:07:33.335708 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:33 crc kubenswrapper[4580]: I0112 13:07:33.335722 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:33 crc kubenswrapper[4580]: I0112 13:07:33.335734 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:33Z","lastTransitionTime":"2026-01-12T13:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:33 crc kubenswrapper[4580]: I0112 13:07:33.437323 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:33 crc kubenswrapper[4580]: I0112 13:07:33.437420 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:33 crc kubenswrapper[4580]: I0112 13:07:33.437475 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:33 crc kubenswrapper[4580]: I0112 13:07:33.437553 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:33 crc kubenswrapper[4580]: I0112 13:07:33.437617 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:33Z","lastTransitionTime":"2026-01-12T13:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:33 crc kubenswrapper[4580]: I0112 13:07:33.540302 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:33 crc kubenswrapper[4580]: I0112 13:07:33.540366 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:33 crc kubenswrapper[4580]: I0112 13:07:33.540377 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:33 crc kubenswrapper[4580]: I0112 13:07:33.540395 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:33 crc kubenswrapper[4580]: I0112 13:07:33.540407 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:33Z","lastTransitionTime":"2026-01-12T13:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:33 crc kubenswrapper[4580]: I0112 13:07:33.641886 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:33 crc kubenswrapper[4580]: I0112 13:07:33.641926 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:33 crc kubenswrapper[4580]: I0112 13:07:33.641939 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:33 crc kubenswrapper[4580]: I0112 13:07:33.641962 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:33 crc kubenswrapper[4580]: I0112 13:07:33.641971 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:33Z","lastTransitionTime":"2026-01-12T13:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:33 crc kubenswrapper[4580]: I0112 13:07:33.743846 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:33 crc kubenswrapper[4580]: I0112 13:07:33.743883 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:33 crc kubenswrapper[4580]: I0112 13:07:33.743893 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:33 crc kubenswrapper[4580]: I0112 13:07:33.743908 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:33 crc kubenswrapper[4580]: I0112 13:07:33.743920 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:33Z","lastTransitionTime":"2026-01-12T13:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:33 crc kubenswrapper[4580]: I0112 13:07:33.846316 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:33 crc kubenswrapper[4580]: I0112 13:07:33.846639 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:33 crc kubenswrapper[4580]: I0112 13:07:33.846715 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:33 crc kubenswrapper[4580]: I0112 13:07:33.846790 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:33 crc kubenswrapper[4580]: I0112 13:07:33.846856 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:33Z","lastTransitionTime":"2026-01-12T13:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:33 crc kubenswrapper[4580]: I0112 13:07:33.948581 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:33 crc kubenswrapper[4580]: I0112 13:07:33.948630 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:33 crc kubenswrapper[4580]: I0112 13:07:33.948651 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:33 crc kubenswrapper[4580]: I0112 13:07:33.948671 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:33 crc kubenswrapper[4580]: I0112 13:07:33.948686 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:33Z","lastTransitionTime":"2026-01-12T13:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:34 crc kubenswrapper[4580]: I0112 13:07:34.050047 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:34 crc kubenswrapper[4580]: I0112 13:07:34.050089 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:34 crc kubenswrapper[4580]: I0112 13:07:34.050152 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:34 crc kubenswrapper[4580]: I0112 13:07:34.050169 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:34 crc kubenswrapper[4580]: I0112 13:07:34.050181 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:34Z","lastTransitionTime":"2026-01-12T13:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:34 crc kubenswrapper[4580]: I0112 13:07:34.152155 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:34 crc kubenswrapper[4580]: I0112 13:07:34.152197 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:34 crc kubenswrapper[4580]: I0112 13:07:34.152234 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:34 crc kubenswrapper[4580]: I0112 13:07:34.152247 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:34 crc kubenswrapper[4580]: I0112 13:07:34.152257 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:34Z","lastTransitionTime":"2026-01-12T13:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:34 crc kubenswrapper[4580]: I0112 13:07:34.253787 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:34 crc kubenswrapper[4580]: I0112 13:07:34.253830 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:34 crc kubenswrapper[4580]: I0112 13:07:34.253840 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:34 crc kubenswrapper[4580]: I0112 13:07:34.253860 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:34 crc kubenswrapper[4580]: I0112 13:07:34.253895 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:34Z","lastTransitionTime":"2026-01-12T13:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:34 crc kubenswrapper[4580]: I0112 13:07:34.281441 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 12 13:07:34 crc kubenswrapper[4580]: I0112 13:07:34.281441 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jw27h" Jan 12 13:07:34 crc kubenswrapper[4580]: E0112 13:07:34.281587 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 12 13:07:34 crc kubenswrapper[4580]: E0112 13:07:34.281636 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jw27h" podUID="5066d8fa-2cee-4764-a817-b819d3876638" Jan 12 13:07:34 crc kubenswrapper[4580]: I0112 13:07:34.355975 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:34 crc kubenswrapper[4580]: I0112 13:07:34.356031 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:34 crc kubenswrapper[4580]: I0112 13:07:34.356043 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:34 crc kubenswrapper[4580]: I0112 13:07:34.356056 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:34 crc kubenswrapper[4580]: I0112 13:07:34.356065 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:34Z","lastTransitionTime":"2026-01-12T13:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:34 crc kubenswrapper[4580]: I0112 13:07:34.457911 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:34 crc kubenswrapper[4580]: I0112 13:07:34.457957 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:34 crc kubenswrapper[4580]: I0112 13:07:34.457967 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:34 crc kubenswrapper[4580]: I0112 13:07:34.457979 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:34 crc kubenswrapper[4580]: I0112 13:07:34.457991 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:34Z","lastTransitionTime":"2026-01-12T13:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:34 crc kubenswrapper[4580]: I0112 13:07:34.560230 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:34 crc kubenswrapper[4580]: I0112 13:07:34.560282 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:34 crc kubenswrapper[4580]: I0112 13:07:34.560293 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:34 crc kubenswrapper[4580]: I0112 13:07:34.560306 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:34 crc kubenswrapper[4580]: I0112 13:07:34.560317 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:34Z","lastTransitionTime":"2026-01-12T13:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:34 crc kubenswrapper[4580]: I0112 13:07:34.661550 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:34 crc kubenswrapper[4580]: I0112 13:07:34.661587 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:34 crc kubenswrapper[4580]: I0112 13:07:34.661596 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:34 crc kubenswrapper[4580]: I0112 13:07:34.661611 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:34 crc kubenswrapper[4580]: I0112 13:07:34.661621 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:34Z","lastTransitionTime":"2026-01-12T13:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:34 crc kubenswrapper[4580]: I0112 13:07:34.764149 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:34 crc kubenswrapper[4580]: I0112 13:07:34.764200 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:34 crc kubenswrapper[4580]: I0112 13:07:34.764211 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:34 crc kubenswrapper[4580]: I0112 13:07:34.764234 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:34 crc kubenswrapper[4580]: I0112 13:07:34.764246 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:34Z","lastTransitionTime":"2026-01-12T13:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:34 crc kubenswrapper[4580]: I0112 13:07:34.866434 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:34 crc kubenswrapper[4580]: I0112 13:07:34.866463 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:34 crc kubenswrapper[4580]: I0112 13:07:34.866474 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:34 crc kubenswrapper[4580]: I0112 13:07:34.866487 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:34 crc kubenswrapper[4580]: I0112 13:07:34.866497 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:34Z","lastTransitionTime":"2026-01-12T13:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:34 crc kubenswrapper[4580]: I0112 13:07:34.968292 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:34 crc kubenswrapper[4580]: I0112 13:07:34.968325 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:34 crc kubenswrapper[4580]: I0112 13:07:34.968334 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:34 crc kubenswrapper[4580]: I0112 13:07:34.968346 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:34 crc kubenswrapper[4580]: I0112 13:07:34.968355 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:34Z","lastTransitionTime":"2026-01-12T13:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:35 crc kubenswrapper[4580]: I0112 13:07:35.070203 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:35 crc kubenswrapper[4580]: I0112 13:07:35.070310 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:35 crc kubenswrapper[4580]: I0112 13:07:35.070369 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:35 crc kubenswrapper[4580]: I0112 13:07:35.070450 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:35 crc kubenswrapper[4580]: I0112 13:07:35.070507 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:35Z","lastTransitionTime":"2026-01-12T13:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:35 crc kubenswrapper[4580]: I0112 13:07:35.172427 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:35 crc kubenswrapper[4580]: I0112 13:07:35.172523 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:35 crc kubenswrapper[4580]: I0112 13:07:35.172585 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:35 crc kubenswrapper[4580]: I0112 13:07:35.172634 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:35 crc kubenswrapper[4580]: I0112 13:07:35.172697 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:35Z","lastTransitionTime":"2026-01-12T13:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:35 crc kubenswrapper[4580]: I0112 13:07:35.274404 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:35 crc kubenswrapper[4580]: I0112 13:07:35.274429 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:35 crc kubenswrapper[4580]: I0112 13:07:35.274439 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:35 crc kubenswrapper[4580]: I0112 13:07:35.274449 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:35 crc kubenswrapper[4580]: I0112 13:07:35.274459 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:35Z","lastTransitionTime":"2026-01-12T13:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:35 crc kubenswrapper[4580]: I0112 13:07:35.281247 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:07:35 crc kubenswrapper[4580]: I0112 13:07:35.281244 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 12 13:07:35 crc kubenswrapper[4580]: E0112 13:07:35.281374 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 12 13:07:35 crc kubenswrapper[4580]: E0112 13:07:35.281535 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 12 13:07:35 crc kubenswrapper[4580]: I0112 13:07:35.281723 4580 scope.go:117] "RemoveContainer" containerID="4221a1e3039d381cba4b4412d20dc0127ca6ec3794a5c1b61996a339e880d645" Jan 12 13:07:35 crc kubenswrapper[4580]: E0112 13:07:35.282007 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-hn77p_openshift-ovn-kubernetes(fd4e0810-eddb-47f5-a7dc-beed7b545112)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" podUID="fd4e0810-eddb-47f5-a7dc-beed7b545112" Jan 12 13:07:35 crc kubenswrapper[4580]: I0112 13:07:35.376691 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:35 crc kubenswrapper[4580]: I0112 13:07:35.376728 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:35 crc kubenswrapper[4580]: I0112 13:07:35.376738 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:35 crc kubenswrapper[4580]: I0112 13:07:35.376758 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:35 crc kubenswrapper[4580]: I0112 13:07:35.376774 4580 setters.go:603] "Node became 
not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:35Z","lastTransitionTime":"2026-01-12T13:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:35 crc kubenswrapper[4580]: I0112 13:07:35.478856 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:35 crc kubenswrapper[4580]: I0112 13:07:35.478892 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:35 crc kubenswrapper[4580]: I0112 13:07:35.478902 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:35 crc kubenswrapper[4580]: I0112 13:07:35.478912 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:35 crc kubenswrapper[4580]: I0112 13:07:35.478922 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:35Z","lastTransitionTime":"2026-01-12T13:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:35 crc kubenswrapper[4580]: I0112 13:07:35.581331 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:35 crc kubenswrapper[4580]: I0112 13:07:35.581368 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:35 crc kubenswrapper[4580]: I0112 13:07:35.581377 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:35 crc kubenswrapper[4580]: I0112 13:07:35.581394 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:35 crc kubenswrapper[4580]: I0112 13:07:35.581407 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:35Z","lastTransitionTime":"2026-01-12T13:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:35 crc kubenswrapper[4580]: I0112 13:07:35.683368 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:35 crc kubenswrapper[4580]: I0112 13:07:35.683402 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:35 crc kubenswrapper[4580]: I0112 13:07:35.683413 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:35 crc kubenswrapper[4580]: I0112 13:07:35.683426 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:35 crc kubenswrapper[4580]: I0112 13:07:35.683437 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:35Z","lastTransitionTime":"2026-01-12T13:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:35 crc kubenswrapper[4580]: I0112 13:07:35.785438 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:35 crc kubenswrapper[4580]: I0112 13:07:35.785465 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:35 crc kubenswrapper[4580]: I0112 13:07:35.785476 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:35 crc kubenswrapper[4580]: I0112 13:07:35.785485 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:35 crc kubenswrapper[4580]: I0112 13:07:35.785494 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:35Z","lastTransitionTime":"2026-01-12T13:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:35 crc kubenswrapper[4580]: I0112 13:07:35.886975 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:35 crc kubenswrapper[4580]: I0112 13:07:35.887006 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:35 crc kubenswrapper[4580]: I0112 13:07:35.887014 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:35 crc kubenswrapper[4580]: I0112 13:07:35.887026 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:35 crc kubenswrapper[4580]: I0112 13:07:35.887037 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:35Z","lastTransitionTime":"2026-01-12T13:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:35 crc kubenswrapper[4580]: I0112 13:07:35.988896 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:35 crc kubenswrapper[4580]: I0112 13:07:35.988929 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:35 crc kubenswrapper[4580]: I0112 13:07:35.988941 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:35 crc kubenswrapper[4580]: I0112 13:07:35.988956 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:35 crc kubenswrapper[4580]: I0112 13:07:35.988968 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:35Z","lastTransitionTime":"2026-01-12T13:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:36 crc kubenswrapper[4580]: I0112 13:07:36.090894 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:36 crc kubenswrapper[4580]: I0112 13:07:36.090929 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:36 crc kubenswrapper[4580]: I0112 13:07:36.090943 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:36 crc kubenswrapper[4580]: I0112 13:07:36.090959 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:36 crc kubenswrapper[4580]: I0112 13:07:36.090968 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:36Z","lastTransitionTime":"2026-01-12T13:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:36 crc kubenswrapper[4580]: I0112 13:07:36.193019 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:36 crc kubenswrapper[4580]: I0112 13:07:36.193045 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:36 crc kubenswrapper[4580]: I0112 13:07:36.193053 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:36 crc kubenswrapper[4580]: I0112 13:07:36.193065 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:36 crc kubenswrapper[4580]: I0112 13:07:36.193093 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:36Z","lastTransitionTime":"2026-01-12T13:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:36 crc kubenswrapper[4580]: I0112 13:07:36.281342 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 12 13:07:36 crc kubenswrapper[4580]: I0112 13:07:36.281407 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jw27h" Jan 12 13:07:36 crc kubenswrapper[4580]: E0112 13:07:36.281471 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 12 13:07:36 crc kubenswrapper[4580]: E0112 13:07:36.281560 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jw27h" podUID="5066d8fa-2cee-4764-a817-b819d3876638" Jan 12 13:07:36 crc kubenswrapper[4580]: I0112 13:07:36.295031 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:36 crc kubenswrapper[4580]: I0112 13:07:36.295056 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:36 crc kubenswrapper[4580]: I0112 13:07:36.295066 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:36 crc kubenswrapper[4580]: I0112 13:07:36.295078 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:36 crc kubenswrapper[4580]: I0112 13:07:36.295086 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:36Z","lastTransitionTime":"2026-01-12T13:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:36 crc kubenswrapper[4580]: I0112 13:07:36.397132 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:36 crc kubenswrapper[4580]: I0112 13:07:36.397177 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:36 crc kubenswrapper[4580]: I0112 13:07:36.397186 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:36 crc kubenswrapper[4580]: I0112 13:07:36.397206 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:36 crc kubenswrapper[4580]: I0112 13:07:36.397217 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:36Z","lastTransitionTime":"2026-01-12T13:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:36 crc kubenswrapper[4580]: I0112 13:07:36.498942 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:36 crc kubenswrapper[4580]: I0112 13:07:36.498999 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:36 crc kubenswrapper[4580]: I0112 13:07:36.499009 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:36 crc kubenswrapper[4580]: I0112 13:07:36.499032 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:36 crc kubenswrapper[4580]: I0112 13:07:36.499046 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:36Z","lastTransitionTime":"2026-01-12T13:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:36 crc kubenswrapper[4580]: I0112 13:07:36.602178 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:36 crc kubenswrapper[4580]: I0112 13:07:36.602559 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:36 crc kubenswrapper[4580]: I0112 13:07:36.602572 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:36 crc kubenswrapper[4580]: I0112 13:07:36.602590 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:36 crc kubenswrapper[4580]: I0112 13:07:36.602602 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:36Z","lastTransitionTime":"2026-01-12T13:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:36 crc kubenswrapper[4580]: I0112 13:07:36.704352 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:36 crc kubenswrapper[4580]: I0112 13:07:36.704410 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:36 crc kubenswrapper[4580]: I0112 13:07:36.704421 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:36 crc kubenswrapper[4580]: I0112 13:07:36.704437 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:36 crc kubenswrapper[4580]: I0112 13:07:36.704445 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:36Z","lastTransitionTime":"2026-01-12T13:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:36 crc kubenswrapper[4580]: I0112 13:07:36.805852 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:36 crc kubenswrapper[4580]: I0112 13:07:36.805880 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:36 crc kubenswrapper[4580]: I0112 13:07:36.805941 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:36 crc kubenswrapper[4580]: I0112 13:07:36.805954 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:36 crc kubenswrapper[4580]: I0112 13:07:36.805962 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:36Z","lastTransitionTime":"2026-01-12T13:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:36 crc kubenswrapper[4580]: I0112 13:07:36.908150 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:36 crc kubenswrapper[4580]: I0112 13:07:36.908189 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:36 crc kubenswrapper[4580]: I0112 13:07:36.908231 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:36 crc kubenswrapper[4580]: I0112 13:07:36.908243 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:36 crc kubenswrapper[4580]: I0112 13:07:36.908251 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:36Z","lastTransitionTime":"2026-01-12T13:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:37 crc kubenswrapper[4580]: I0112 13:07:37.010283 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:37 crc kubenswrapper[4580]: I0112 13:07:37.010321 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:37 crc kubenswrapper[4580]: I0112 13:07:37.010334 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:37 crc kubenswrapper[4580]: I0112 13:07:37.010349 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:37 crc kubenswrapper[4580]: I0112 13:07:37.010359 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:37Z","lastTransitionTime":"2026-01-12T13:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:37 crc kubenswrapper[4580]: I0112 13:07:37.112923 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:37 crc kubenswrapper[4580]: I0112 13:07:37.112962 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:37 crc kubenswrapper[4580]: I0112 13:07:37.112996 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:37 crc kubenswrapper[4580]: I0112 13:07:37.113011 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:37 crc kubenswrapper[4580]: I0112 13:07:37.113023 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:37Z","lastTransitionTime":"2026-01-12T13:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:37 crc kubenswrapper[4580]: I0112 13:07:37.215005 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:37 crc kubenswrapper[4580]: I0112 13:07:37.215049 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:37 crc kubenswrapper[4580]: I0112 13:07:37.215058 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:37 crc kubenswrapper[4580]: I0112 13:07:37.215073 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:37 crc kubenswrapper[4580]: I0112 13:07:37.215088 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:37Z","lastTransitionTime":"2026-01-12T13:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:37 crc kubenswrapper[4580]: I0112 13:07:37.281159 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 12 13:07:37 crc kubenswrapper[4580]: E0112 13:07:37.281327 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 12 13:07:37 crc kubenswrapper[4580]: I0112 13:07:37.281362 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:07:37 crc kubenswrapper[4580]: E0112 13:07:37.281528 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 12 13:07:37 crc kubenswrapper[4580]: I0112 13:07:37.317237 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:37 crc kubenswrapper[4580]: I0112 13:07:37.317273 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:37 crc kubenswrapper[4580]: I0112 13:07:37.317283 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:37 crc kubenswrapper[4580]: I0112 13:07:37.317297 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:37 crc kubenswrapper[4580]: I0112 13:07:37.317308 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:37Z","lastTransitionTime":"2026-01-12T13:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:37 crc kubenswrapper[4580]: I0112 13:07:37.419451 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:37 crc kubenswrapper[4580]: I0112 13:07:37.419483 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:37 crc kubenswrapper[4580]: I0112 13:07:37.419492 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:37 crc kubenswrapper[4580]: I0112 13:07:37.419503 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:37 crc kubenswrapper[4580]: I0112 13:07:37.419514 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:37Z","lastTransitionTime":"2026-01-12T13:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:37 crc kubenswrapper[4580]: I0112 13:07:37.521462 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:37 crc kubenswrapper[4580]: I0112 13:07:37.521497 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:37 crc kubenswrapper[4580]: I0112 13:07:37.521507 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:37 crc kubenswrapper[4580]: I0112 13:07:37.521521 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:37 crc kubenswrapper[4580]: I0112 13:07:37.521531 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:37Z","lastTransitionTime":"2026-01-12T13:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:37 crc kubenswrapper[4580]: I0112 13:07:37.623973 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:37 crc kubenswrapper[4580]: I0112 13:07:37.624005 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:37 crc kubenswrapper[4580]: I0112 13:07:37.624014 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:37 crc kubenswrapper[4580]: I0112 13:07:37.624026 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:37 crc kubenswrapper[4580]: I0112 13:07:37.624034 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:37Z","lastTransitionTime":"2026-01-12T13:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:37 crc kubenswrapper[4580]: I0112 13:07:37.725891 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:37 crc kubenswrapper[4580]: I0112 13:07:37.725958 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:37 crc kubenswrapper[4580]: I0112 13:07:37.725969 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:37 crc kubenswrapper[4580]: I0112 13:07:37.725997 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:37 crc kubenswrapper[4580]: I0112 13:07:37.726011 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:37Z","lastTransitionTime":"2026-01-12T13:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:37 crc kubenswrapper[4580]: I0112 13:07:37.827728 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:37 crc kubenswrapper[4580]: I0112 13:07:37.827771 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:37 crc kubenswrapper[4580]: I0112 13:07:37.827783 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:37 crc kubenswrapper[4580]: I0112 13:07:37.827797 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:37 crc kubenswrapper[4580]: I0112 13:07:37.827807 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:37Z","lastTransitionTime":"2026-01-12T13:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:37 crc kubenswrapper[4580]: I0112 13:07:37.929200 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:37 crc kubenswrapper[4580]: I0112 13:07:37.929240 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:37 crc kubenswrapper[4580]: I0112 13:07:37.929249 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:37 crc kubenswrapper[4580]: I0112 13:07:37.929279 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:37 crc kubenswrapper[4580]: I0112 13:07:37.929290 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:37Z","lastTransitionTime":"2026-01-12T13:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:38 crc kubenswrapper[4580]: I0112 13:07:38.031444 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:38 crc kubenswrapper[4580]: I0112 13:07:38.031499 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:38 crc kubenswrapper[4580]: I0112 13:07:38.031509 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:38 crc kubenswrapper[4580]: I0112 13:07:38.031530 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:38 crc kubenswrapper[4580]: I0112 13:07:38.031540 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:38Z","lastTransitionTime":"2026-01-12T13:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:38 crc kubenswrapper[4580]: I0112 13:07:38.133312 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:38 crc kubenswrapper[4580]: I0112 13:07:38.133356 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:38 crc kubenswrapper[4580]: I0112 13:07:38.133366 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:38 crc kubenswrapper[4580]: I0112 13:07:38.133378 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:38 crc kubenswrapper[4580]: I0112 13:07:38.133389 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:38Z","lastTransitionTime":"2026-01-12T13:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:38 crc kubenswrapper[4580]: I0112 13:07:38.236191 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:38 crc kubenswrapper[4580]: I0112 13:07:38.236216 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:38 crc kubenswrapper[4580]: I0112 13:07:38.236226 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:38 crc kubenswrapper[4580]: I0112 13:07:38.236237 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:38 crc kubenswrapper[4580]: I0112 13:07:38.236244 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:38Z","lastTransitionTime":"2026-01-12T13:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:38 crc kubenswrapper[4580]: I0112 13:07:38.281016 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 12 13:07:38 crc kubenswrapper[4580]: E0112 13:07:38.281118 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 12 13:07:38 crc kubenswrapper[4580]: I0112 13:07:38.281260 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jw27h" Jan 12 13:07:38 crc kubenswrapper[4580]: E0112 13:07:38.281383 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jw27h" podUID="5066d8fa-2cee-4764-a817-b819d3876638" Jan 12 13:07:38 crc kubenswrapper[4580]: I0112 13:07:38.338336 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:38 crc kubenswrapper[4580]: I0112 13:07:38.338385 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:38 crc kubenswrapper[4580]: I0112 13:07:38.338400 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:38 crc kubenswrapper[4580]: I0112 13:07:38.338418 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:38 crc kubenswrapper[4580]: I0112 13:07:38.338431 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:38Z","lastTransitionTime":"2026-01-12T13:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:38 crc kubenswrapper[4580]: I0112 13:07:38.440016 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:38 crc kubenswrapper[4580]: I0112 13:07:38.440050 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:38 crc kubenswrapper[4580]: I0112 13:07:38.440060 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:38 crc kubenswrapper[4580]: I0112 13:07:38.440094 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:38 crc kubenswrapper[4580]: I0112 13:07:38.440119 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:38Z","lastTransitionTime":"2026-01-12T13:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:38 crc kubenswrapper[4580]: I0112 13:07:38.541327 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:38 crc kubenswrapper[4580]: I0112 13:07:38.541355 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:38 crc kubenswrapper[4580]: I0112 13:07:38.541364 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:38 crc kubenswrapper[4580]: I0112 13:07:38.541381 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:38 crc kubenswrapper[4580]: I0112 13:07:38.541389 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:38Z","lastTransitionTime":"2026-01-12T13:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:38 crc kubenswrapper[4580]: I0112 13:07:38.643531 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:38 crc kubenswrapper[4580]: I0112 13:07:38.643570 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:38 crc kubenswrapper[4580]: I0112 13:07:38.643579 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:38 crc kubenswrapper[4580]: I0112 13:07:38.643592 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:38 crc kubenswrapper[4580]: I0112 13:07:38.643603 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:38Z","lastTransitionTime":"2026-01-12T13:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:38 crc kubenswrapper[4580]: I0112 13:07:38.745173 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:38 crc kubenswrapper[4580]: I0112 13:07:38.745206 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:38 crc kubenswrapper[4580]: I0112 13:07:38.745214 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:38 crc kubenswrapper[4580]: I0112 13:07:38.745228 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:38 crc kubenswrapper[4580]: I0112 13:07:38.745237 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:38Z","lastTransitionTime":"2026-01-12T13:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:38 crc kubenswrapper[4580]: I0112 13:07:38.846702 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:38 crc kubenswrapper[4580]: I0112 13:07:38.846737 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:38 crc kubenswrapper[4580]: I0112 13:07:38.846748 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:38 crc kubenswrapper[4580]: I0112 13:07:38.846759 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:38 crc kubenswrapper[4580]: I0112 13:07:38.846770 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:38Z","lastTransitionTime":"2026-01-12T13:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:38 crc kubenswrapper[4580]: I0112 13:07:38.948701 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:38 crc kubenswrapper[4580]: I0112 13:07:38.948730 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:38 crc kubenswrapper[4580]: I0112 13:07:38.948739 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:38 crc kubenswrapper[4580]: I0112 13:07:38.948751 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:38 crc kubenswrapper[4580]: I0112 13:07:38.948759 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:38Z","lastTransitionTime":"2026-01-12T13:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:39 crc kubenswrapper[4580]: I0112 13:07:39.050354 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:39 crc kubenswrapper[4580]: I0112 13:07:39.050432 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:39 crc kubenswrapper[4580]: I0112 13:07:39.050442 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:39 crc kubenswrapper[4580]: I0112 13:07:39.050463 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:39 crc kubenswrapper[4580]: I0112 13:07:39.050475 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:39Z","lastTransitionTime":"2026-01-12T13:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:39 crc kubenswrapper[4580]: I0112 13:07:39.152298 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:39 crc kubenswrapper[4580]: I0112 13:07:39.152336 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:39 crc kubenswrapper[4580]: I0112 13:07:39.152347 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:39 crc kubenswrapper[4580]: I0112 13:07:39.152361 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:39 crc kubenswrapper[4580]: I0112 13:07:39.152369 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:39Z","lastTransitionTime":"2026-01-12T13:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:39 crc kubenswrapper[4580]: I0112 13:07:39.254384 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:39 crc kubenswrapper[4580]: I0112 13:07:39.254417 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:39 crc kubenswrapper[4580]: I0112 13:07:39.254427 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:39 crc kubenswrapper[4580]: I0112 13:07:39.254439 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:39 crc kubenswrapper[4580]: I0112 13:07:39.254448 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:39Z","lastTransitionTime":"2026-01-12T13:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:39 crc kubenswrapper[4580]: I0112 13:07:39.281051 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 12 13:07:39 crc kubenswrapper[4580]: I0112 13:07:39.281070 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:07:39 crc kubenswrapper[4580]: E0112 13:07:39.281177 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 12 13:07:39 crc kubenswrapper[4580]: E0112 13:07:39.281228 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 12 13:07:39 crc kubenswrapper[4580]: I0112 13:07:39.356756 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:39 crc kubenswrapper[4580]: I0112 13:07:39.356786 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:39 crc kubenswrapper[4580]: I0112 13:07:39.356798 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:39 crc kubenswrapper[4580]: I0112 13:07:39.356812 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:39 crc kubenswrapper[4580]: I0112 13:07:39.356819 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:39Z","lastTransitionTime":"2026-01-12T13:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:39 crc kubenswrapper[4580]: I0112 13:07:39.458441 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:39 crc kubenswrapper[4580]: I0112 13:07:39.458470 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:39 crc kubenswrapper[4580]: I0112 13:07:39.458481 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:39 crc kubenswrapper[4580]: I0112 13:07:39.458490 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:39 crc kubenswrapper[4580]: I0112 13:07:39.458500 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:39Z","lastTransitionTime":"2026-01-12T13:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:39 crc kubenswrapper[4580]: I0112 13:07:39.560683 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:39 crc kubenswrapper[4580]: I0112 13:07:39.560795 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:39 crc kubenswrapper[4580]: I0112 13:07:39.560804 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:39 crc kubenswrapper[4580]: I0112 13:07:39.560813 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:39 crc kubenswrapper[4580]: I0112 13:07:39.560821 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:39Z","lastTransitionTime":"2026-01-12T13:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:39 crc kubenswrapper[4580]: I0112 13:07:39.662211 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:39 crc kubenswrapper[4580]: I0112 13:07:39.662235 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:39 crc kubenswrapper[4580]: I0112 13:07:39.662277 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:39 crc kubenswrapper[4580]: I0112 13:07:39.662289 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:39 crc kubenswrapper[4580]: I0112 13:07:39.662297 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:39Z","lastTransitionTime":"2026-01-12T13:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:39 crc kubenswrapper[4580]: I0112 13:07:39.764281 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:39 crc kubenswrapper[4580]: I0112 13:07:39.764309 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:39 crc kubenswrapper[4580]: I0112 13:07:39.764318 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:39 crc kubenswrapper[4580]: I0112 13:07:39.764328 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:39 crc kubenswrapper[4580]: I0112 13:07:39.764335 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:39Z","lastTransitionTime":"2026-01-12T13:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:39 crc kubenswrapper[4580]: I0112 13:07:39.865907 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:39 crc kubenswrapper[4580]: I0112 13:07:39.865932 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:39 crc kubenswrapper[4580]: I0112 13:07:39.865942 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:39 crc kubenswrapper[4580]: I0112 13:07:39.865954 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:39 crc kubenswrapper[4580]: I0112 13:07:39.865962 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:39Z","lastTransitionTime":"2026-01-12T13:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:39 crc kubenswrapper[4580]: I0112 13:07:39.967675 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:39 crc kubenswrapper[4580]: I0112 13:07:39.967716 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:39 crc kubenswrapper[4580]: I0112 13:07:39.967727 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:39 crc kubenswrapper[4580]: I0112 13:07:39.967743 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:39 crc kubenswrapper[4580]: I0112 13:07:39.967756 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:39Z","lastTransitionTime":"2026-01-12T13:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:40 crc kubenswrapper[4580]: I0112 13:07:40.069308 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:40 crc kubenswrapper[4580]: I0112 13:07:40.069341 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:40 crc kubenswrapper[4580]: I0112 13:07:40.069351 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:40 crc kubenswrapper[4580]: I0112 13:07:40.069365 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:40 crc kubenswrapper[4580]: I0112 13:07:40.069375 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:40Z","lastTransitionTime":"2026-01-12T13:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:40 crc kubenswrapper[4580]: I0112 13:07:40.170865 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:40 crc kubenswrapper[4580]: I0112 13:07:40.170904 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:40 crc kubenswrapper[4580]: I0112 13:07:40.170915 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:40 crc kubenswrapper[4580]: I0112 13:07:40.170931 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:40 crc kubenswrapper[4580]: I0112 13:07:40.170943 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:40Z","lastTransitionTime":"2026-01-12T13:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:40 crc kubenswrapper[4580]: I0112 13:07:40.272765 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:40 crc kubenswrapper[4580]: I0112 13:07:40.272795 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:40 crc kubenswrapper[4580]: I0112 13:07:40.272807 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:40 crc kubenswrapper[4580]: I0112 13:07:40.272818 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:40 crc kubenswrapper[4580]: I0112 13:07:40.272832 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:40Z","lastTransitionTime":"2026-01-12T13:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:40 crc kubenswrapper[4580]: I0112 13:07:40.281376 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 12 13:07:40 crc kubenswrapper[4580]: I0112 13:07:40.281396 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jw27h" Jan 12 13:07:40 crc kubenswrapper[4580]: E0112 13:07:40.281475 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 12 13:07:40 crc kubenswrapper[4580]: E0112 13:07:40.281564 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jw27h" podUID="5066d8fa-2cee-4764-a817-b819d3876638" Jan 12 13:07:40 crc kubenswrapper[4580]: I0112 13:07:40.374878 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:40 crc kubenswrapper[4580]: I0112 13:07:40.374906 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:40 crc kubenswrapper[4580]: I0112 13:07:40.374917 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:40 crc kubenswrapper[4580]: I0112 13:07:40.374930 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:40 crc kubenswrapper[4580]: I0112 13:07:40.374940 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:40Z","lastTransitionTime":"2026-01-12T13:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:40 crc kubenswrapper[4580]: I0112 13:07:40.477192 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:40 crc kubenswrapper[4580]: I0112 13:07:40.477225 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:40 crc kubenswrapper[4580]: I0112 13:07:40.477236 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:40 crc kubenswrapper[4580]: I0112 13:07:40.477252 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:40 crc kubenswrapper[4580]: I0112 13:07:40.477262 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:40Z","lastTransitionTime":"2026-01-12T13:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:40 crc kubenswrapper[4580]: I0112 13:07:40.579066 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:40 crc kubenswrapper[4580]: I0112 13:07:40.579092 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:40 crc kubenswrapper[4580]: I0112 13:07:40.579115 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:40 crc kubenswrapper[4580]: I0112 13:07:40.579128 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:40 crc kubenswrapper[4580]: I0112 13:07:40.579136 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:40Z","lastTransitionTime":"2026-01-12T13:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:40 crc kubenswrapper[4580]: I0112 13:07:40.681502 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:40 crc kubenswrapper[4580]: I0112 13:07:40.681541 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:40 crc kubenswrapper[4580]: I0112 13:07:40.681553 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:40 crc kubenswrapper[4580]: I0112 13:07:40.681566 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:40 crc kubenswrapper[4580]: I0112 13:07:40.681575 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:40Z","lastTransitionTime":"2026-01-12T13:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:40 crc kubenswrapper[4580]: I0112 13:07:40.783695 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:40 crc kubenswrapper[4580]: I0112 13:07:40.783761 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:40 crc kubenswrapper[4580]: I0112 13:07:40.783772 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:40 crc kubenswrapper[4580]: I0112 13:07:40.783785 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:40 crc kubenswrapper[4580]: I0112 13:07:40.783795 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:40Z","lastTransitionTime":"2026-01-12T13:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:40 crc kubenswrapper[4580]: I0112 13:07:40.884932 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:40 crc kubenswrapper[4580]: I0112 13:07:40.884958 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:40 crc kubenswrapper[4580]: I0112 13:07:40.884966 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:40 crc kubenswrapper[4580]: I0112 13:07:40.884976 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:40 crc kubenswrapper[4580]: I0112 13:07:40.884984 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:40Z","lastTransitionTime":"2026-01-12T13:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:40 crc kubenswrapper[4580]: I0112 13:07:40.986589 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:40 crc kubenswrapper[4580]: I0112 13:07:40.986611 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:40 crc kubenswrapper[4580]: I0112 13:07:40.986621 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:40 crc kubenswrapper[4580]: I0112 13:07:40.986632 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:40 crc kubenswrapper[4580]: I0112 13:07:40.986639 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:40Z","lastTransitionTime":"2026-01-12T13:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:41 crc kubenswrapper[4580]: I0112 13:07:41.088214 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:41 crc kubenswrapper[4580]: I0112 13:07:41.088247 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:41 crc kubenswrapper[4580]: I0112 13:07:41.088257 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:41 crc kubenswrapper[4580]: I0112 13:07:41.088269 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:41 crc kubenswrapper[4580]: I0112 13:07:41.088277 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:41Z","lastTransitionTime":"2026-01-12T13:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:41 crc kubenswrapper[4580]: I0112 13:07:41.190408 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:41 crc kubenswrapper[4580]: I0112 13:07:41.190437 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:41 crc kubenswrapper[4580]: I0112 13:07:41.190445 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:41 crc kubenswrapper[4580]: I0112 13:07:41.190456 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:41 crc kubenswrapper[4580]: I0112 13:07:41.190464 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:41Z","lastTransitionTime":"2026-01-12T13:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:41 crc kubenswrapper[4580]: I0112 13:07:41.280769 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 12 13:07:41 crc kubenswrapper[4580]: E0112 13:07:41.280933 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 12 13:07:41 crc kubenswrapper[4580]: I0112 13:07:41.281211 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:07:41 crc kubenswrapper[4580]: E0112 13:07:41.281295 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 12 13:07:41 crc kubenswrapper[4580]: I0112 13:07:41.291818 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:41 crc kubenswrapper[4580]: I0112 13:07:41.291841 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:41 crc kubenswrapper[4580]: I0112 13:07:41.291849 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:41 crc kubenswrapper[4580]: I0112 13:07:41.291860 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:41 crc kubenswrapper[4580]: I0112 13:07:41.291871 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:41Z","lastTransitionTime":"2026-01-12T13:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:41 crc kubenswrapper[4580]: I0112 13:07:41.293005 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e120eaa6bd8e36a0bc509f7877252fbf4b0cebb89222dd193f75502e472fa7af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f05ca3c8a1887284f1162c44d1b917ad955eb8d77b816e830caddffdf0430383\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:41Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:41 crc kubenswrapper[4580]: I0112 13:07:41.301617 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:41Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:41 crc kubenswrapper[4580]: I0112 13:07:41.308787 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:41Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:41 crc kubenswrapper[4580]: I0112 13:07:41.321737 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35b1ac8c-9d11-4c54-98ab-fa848030f05e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1088ffa1a5bf02ca8606518a6f8c9cbeba544651dfafbb34e8860c2a12ffc1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c98177e2b081aadb6fd03620e308bb5d9ff403f1498eb875f7cf6d836dd23aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cea93cd026e7a60c22105833d2c3ada192fc16d45f46e5c9ce2652e94f92fab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c811167080fb15b5c19b8b57f76f4b8c5b2ed87d43d1b320ad024683ab58b65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14411e27d1e7de0627ca0d6f0ecbca70787ef8e9311ff3ffbb923da942e47955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://200ede5d7f69bb74d8e7d1b5081850d73057f7aef07049cab7a4dd1382de0cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://200ede5d7f69bb74d8e7d1b5081850d73057f7aef07049cab7a4dd1382de0cfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04470dc724661e24dc43e182f9c5dc106623e8dfb269280e6dc0fc0710f6a4a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04470dc724661e24dc43e182f9c5dc106623e8dfb269280e6dc0fc0710f6a4a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da31efcbced890b1046b1f058c1c00e4d2788162749c1da32d87c8b59360aa58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da31efcbced890b1046b1f058c1c00e4d2788162749c1da32d87c8b59360aa58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:41Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:41 crc kubenswrapper[4580]: I0112 13:07:41.329898 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fb543f1489aa79642944188788308013ed9b6bacb720a3ee689b376cbc6a33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:41Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:41 crc kubenswrapper[4580]: I0112 13:07:41.335954 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jw27h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5066d8fa-2cee-4764-a817-b819d3876638\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbqm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbqm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:07:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jw27h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:41Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:41 crc kubenswrapper[4580]: I0112 13:07:41.343877 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5b34839-7efb-4fe1-ab7f-7d5b1edbf09a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5f5c5f418e2ffb24aff3f3056f26725003da15b14ea3f503039403320803a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afcaaf941d0811f34d5bb6d98ebedbeca17d15c8ce48a5604758570aa393d700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c1f9fb31f42b2e87cf98227241e7c66b834d473dc625999d5cf28df80b5076b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://485ad5c9f5a1a0f3219b48e7c2b703985f426f1e0
68b12812f208e5843a98224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://485ad5c9f5a1a0f3219b48e7c2b703985f426f1e068b12812f208e5843a98224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:41Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:41 crc kubenswrapper[4580]: I0112 13:07:41.351992 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8ch98" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f20fb33-a98a-4b04-81b9-5ea16ae9f57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://643e92b14688d35a567c7351e9231a8855ec7d9704cc97466c2d901c4525108a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nbmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8ch98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:41Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:41 crc kubenswrapper[4580]: I0112 13:07:41.359348 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:41Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:41 crc kubenswrapper[4580]: I0112 13:07:41.367415 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nnz5s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f39bcc-5a25-4746-988b-2251fd1be8c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56aa8b2b49ab1c35203cc85f8e7cd333d538b5739be0e36db8a3fa8263c079ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m82m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nnz5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:41Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:41 crc kubenswrapper[4580]: I0112 13:07:41.379152 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd4e0810-eddb-47f5-a7dc-beed7b545112\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fac5585e690495e9f154b99e6a05f94dd617a57d0826867644b56df00697b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57fdd89443f292661ae2a8f73016f4a7f2889c08ffebd55d67ada2590b4344db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc26f2fe9c241fc3ede61426abd140792056fe45e03192531431303ac9669685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://381c313bb77deef21772fc32104aec4c0325e3493c641e2bf615bd897e58c71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ac8df759fbebae467ffd8c178ca19221cefd5f3c1aa999cd23e5d1e53a6187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b37c3b2535deee762ef305825de0a884e9088e57a34910ad2fcdaeb2d49d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4221a1e3039d381cba4b4412d20dc0127ca6ec3794a5c1b61996a339e880d645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4221a1e3039d381cba4b4412d20dc0127ca6ec3794a5c1b61996a339e880d645\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-12T13:07:21Z\\\",\\\"message\\\":\\\"Ds:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/package-server-manager-metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, 
Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.110\\\\\\\", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0112 13:07:21.988162 6218 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for namespace Informer during admin network policy controller initialization, handler {0x1fcbf20 0x1fcbc00 0x1fcbba0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:21Z i\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hn77p_openshift-ovn-kubernetes(fd4e0810-eddb-47f5-a7dc-beed7b545112)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00ff7f6b5ad3d1798e88f127c9bf71095fcbdfcf8f4338afa385717f1564ebf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ea8f8c492e0c30d17
1b9b05aa00966402c80f973de31557a1e13e16eb0c447b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hn77p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:41Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:41 crc kubenswrapper[4580]: I0112 13:07:41.386058 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thp2h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0adac83c-1303-404f-85a1-c7b477da2226\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a871f86fe29e275615cf2f7f0130151c5ed56d410a0f18f5267adf08be33f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfhs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:07:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thp2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:41Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:41 crc kubenswrapper[4580]: I0112 13:07:41.394203 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:41 crc kubenswrapper[4580]: I0112 13:07:41.394326 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:41 crc kubenswrapper[4580]: I0112 13:07:41.394392 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:41 crc kubenswrapper[4580]: I0112 13:07:41.394469 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:41 crc kubenswrapper[4580]: I0112 13:07:41.394535 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:41Z","lastTransitionTime":"2026-01-12T13:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:41 crc kubenswrapper[4580]: I0112 13:07:41.395501 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9730289-8e50-4a9a-b474-db6c268d5a30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2262814ad3b77a7aecef6dc39226a540c7d7839576606e11c4765c858e81834\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80ca0769a1431fd4c134322feb11db7e54dd85e8f6b18a0ea43da48fe9b05005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c620e4b41d6183e427d9b95acc0e6e20f24998d210c706d93d0e8b08def41b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05c5ad3ad752dde0d33f89e89540f22790aa2905185c704d407fe605655c8e28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0c7ac25add51f8a9be790b9d47bc39155d83c4da0f3b241897d1395686feb68\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-12T13:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0112 13:06:53.362253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0112 13:06:53.363131 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2861103618/tls.crt::/tmp/serving-cert-2861103618/tls.key\\\\\\\"\\\\nI0112 13:06:58.635258 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0112 13:06:58.636943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0112 13:06:58.636960 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0112 13:06:58.636978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0112 13:06:58.636983 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0112 13:06:58.642885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0112 13:06:58.642904 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0112 13:06:58.642919 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642925 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0112 13:06:58.642931 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0112 13:06:58.642934 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0112 13:06:58.642937 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0112 13:06:58.645379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eeac0b697ceba82e51d043f12dcf4c6f0028990416b1ee40c5181232d962192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:41Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:41 crc kubenswrapper[4580]: I0112 13:07:41.404770 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14cae238-29c1-4657-b3f0-6a834484f48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b1b813e14b2e613be951c247a67eb9b5b29604c639ec2c8a26c652911e0a342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc8b55ba464a72a72e6361e6847c4e8c8b27f317e8eba5d95923fbaf62589880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://259d2e205fd4a46e432a91b0e09646a58b44d6da55b06c6d4ac87010c85babc4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00bb60e0955774504f186a916e89495432d2ea6a6b01cadbbe0cc6871383a030\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:41Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:41 crc kubenswrapper[4580]: I0112 13:07:41.413591 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3accce5d840e81a67e212ff934059ad73525c6ff3c73ed6ab4c6e2289a4d7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60b7e67369583f18d56633483204d326449c0f74
56afe4b4fd1e7134eff438cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hdz6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:41Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:41 crc kubenswrapper[4580]: I0112 13:07:41.425126 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2p6r8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2223aac-784e-4653-8939-fcbd18c70ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81fbec7b59dcc9c80a97b122e2b0e738fbbfb3eafca1bf9989fe743f28573191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1dc0fffc41810cdb9a5eeb53b19f6a23d70a8133c6e12b19df575f86a55d18\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1dc0fffc41810cdb9a5eeb53b19f6a23d70a8133c6e12b19df575f86a55d18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab60600011f08831d514dad04b97fb6b587736b18b55b1bff9a33143b9a92997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab60600011f08831d514dad04b97fb6b587736b18b55b1bff9a33143b9a92997\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2709a93c305db448fb509fbbdf606c297b26f1ae08e6b9b05933c155f59416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff2709a93c305db448fb509fbbdf606c297b26f1ae08e6b9b05933c155f59416\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88f87
08217fbcbf532b977d30ab903955722d04a00ba29ded44ce09610140e27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88f8708217fbcbf532b977d30ab903955722d04a00ba29ded44ce09610140e27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5844c48078cc7d6868f4ff81ac1a2bb878892529b11823ecabd49fad4aed60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e5844c48078cc7d6868f4ff81ac1a2bb878892529b11823ecabd49fad4aed60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:04Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2e02e66890bca8171c7112c74521a43c3458f07890228426f04c2bdfad4599\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d2e02e66890bca8171c7112c74521a43c3458f07890228426f04c2bdfad4599\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2p6r8\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:41Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:41 crc kubenswrapper[4580]: I0112 13:07:41.433936 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vmmdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61051313-b754-4528-ade6-ffacbebafb8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a321f1ea1e9a558494aa66641fd251a100e0bdceddf5b2034bfa067c23555138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsss4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14a151ee487ef6c2e5141ec5a25b8b7e468c224b262fd09538db0e939b8cf95a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsss4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vmmdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-12T13:07:41Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:41 crc kubenswrapper[4580]: I0112 13:07:41.442399 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a82c47afb3ec7afc7fa35ff0e1e85e288f9e1a908459024005a16c0c8f3b0050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:41Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:41 crc kubenswrapper[4580]: I0112 13:07:41.497512 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:41 crc kubenswrapper[4580]: I0112 13:07:41.497555 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:41 crc kubenswrapper[4580]: I0112 13:07:41.497567 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:41 crc kubenswrapper[4580]: I0112 13:07:41.497585 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:41 crc kubenswrapper[4580]: I0112 13:07:41.497598 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:41Z","lastTransitionTime":"2026-01-12T13:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:41 crc kubenswrapper[4580]: I0112 13:07:41.600234 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:41 crc kubenswrapper[4580]: I0112 13:07:41.600277 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:41 crc kubenswrapper[4580]: I0112 13:07:41.600290 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:41 crc kubenswrapper[4580]: I0112 13:07:41.600324 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:41 crc kubenswrapper[4580]: I0112 13:07:41.600336 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:41Z","lastTransitionTime":"2026-01-12T13:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:41 crc kubenswrapper[4580]: I0112 13:07:41.702397 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:41 crc kubenswrapper[4580]: I0112 13:07:41.702534 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:41 crc kubenswrapper[4580]: I0112 13:07:41.702596 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:41 crc kubenswrapper[4580]: I0112 13:07:41.702680 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:41 crc kubenswrapper[4580]: I0112 13:07:41.703120 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:41Z","lastTransitionTime":"2026-01-12T13:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:41 crc kubenswrapper[4580]: I0112 13:07:41.805562 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:41 crc kubenswrapper[4580]: I0112 13:07:41.805611 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:41 crc kubenswrapper[4580]: I0112 13:07:41.805625 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:41 crc kubenswrapper[4580]: I0112 13:07:41.805639 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:41 crc kubenswrapper[4580]: I0112 13:07:41.805651 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:41Z","lastTransitionTime":"2026-01-12T13:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:41 crc kubenswrapper[4580]: I0112 13:07:41.907760 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:41 crc kubenswrapper[4580]: I0112 13:07:41.907813 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:41 crc kubenswrapper[4580]: I0112 13:07:41.907825 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:41 crc kubenswrapper[4580]: I0112 13:07:41.907838 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:41 crc kubenswrapper[4580]: I0112 13:07:41.907851 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:41Z","lastTransitionTime":"2026-01-12T13:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.010583 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.010620 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.010629 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.010646 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.010656 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:42Z","lastTransitionTime":"2026-01-12T13:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.113467 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.113526 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.113535 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.113550 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.113560 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:42Z","lastTransitionTime":"2026-01-12T13:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.215335 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.215368 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.215378 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.215393 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.215402 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:42Z","lastTransitionTime":"2026-01-12T13:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.282022 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jw27h" Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.282094 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 12 13:07:42 crc kubenswrapper[4580]: E0112 13:07:42.282709 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 12 13:07:42 crc kubenswrapper[4580]: E0112 13:07:42.282223 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jw27h" podUID="5066d8fa-2cee-4764-a817-b819d3876638" Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.319576 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.319607 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.319617 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.319630 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.319641 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:42Z","lastTransitionTime":"2026-01-12T13:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.421707 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.421764 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.421774 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.421791 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.421803 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:42Z","lastTransitionTime":"2026-01-12T13:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.523935 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.523990 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.524000 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.524013 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.524023 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:42Z","lastTransitionTime":"2026-01-12T13:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.559363 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.559396 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.559406 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.559436 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.559446 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:42Z","lastTransitionTime":"2026-01-12T13:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:42 crc kubenswrapper[4580]: E0112 13:07:42.570312 4580 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b4cb507-f154-474c-bea1-057456e7be91\\\",\\\"systemUUID\\\":\\\"f50d9485-f990-498d-a5ee-4bb4dd1663df\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:42Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.573250 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.573299 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.573310 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.573320 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.573327 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:42Z","lastTransitionTime":"2026-01-12T13:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:42 crc kubenswrapper[4580]: E0112 13:07:42.582015 4580 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b4cb507-f154-474c-bea1-057456e7be91\\\",\\\"systemUUID\\\":\\\"f50d9485-f990-498d-a5ee-4bb4dd1663df\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:42Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.584307 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.584333 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.584343 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.584353 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.584379 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:42Z","lastTransitionTime":"2026-01-12T13:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:42 crc kubenswrapper[4580]: E0112 13:07:42.594171 4580 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b4cb507-f154-474c-bea1-057456e7be91\\\",\\\"systemUUID\\\":\\\"f50d9485-f990-498d-a5ee-4bb4dd1663df\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:42Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.596335 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.596365 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.596373 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.596385 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.596394 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:42Z","lastTransitionTime":"2026-01-12T13:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:42 crc kubenswrapper[4580]: E0112 13:07:42.604743 4580 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b4cb507-f154-474c-bea1-057456e7be91\\\",\\\"systemUUID\\\":\\\"f50d9485-f990-498d-a5ee-4bb4dd1663df\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:42Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.606957 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.606994 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.607005 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.607022 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.607031 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:42Z","lastTransitionTime":"2026-01-12T13:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:42 crc kubenswrapper[4580]: E0112 13:07:42.616223 4580 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b4cb507-f154-474c-bea1-057456e7be91\\\",\\\"systemUUID\\\":\\\"f50d9485-f990-498d-a5ee-4bb4dd1663df\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:42Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:42 crc kubenswrapper[4580]: E0112 13:07:42.616352 4580 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.625466 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.625496 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.625505 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.625517 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.625525 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:42Z","lastTransitionTime":"2026-01-12T13:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.727240 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.727272 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.727281 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.727292 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.727300 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:42Z","lastTransitionTime":"2026-01-12T13:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.828923 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.828955 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.828964 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.828977 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.828987 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:42Z","lastTransitionTime":"2026-01-12T13:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.932055 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.932166 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.932176 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.932185 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:42 crc kubenswrapper[4580]: I0112 13:07:42.932196 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:42Z","lastTransitionTime":"2026-01-12T13:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:43 crc kubenswrapper[4580]: I0112 13:07:43.033644 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:43 crc kubenswrapper[4580]: I0112 13:07:43.033685 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:43 crc kubenswrapper[4580]: I0112 13:07:43.033695 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:43 crc kubenswrapper[4580]: I0112 13:07:43.033708 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:43 crc kubenswrapper[4580]: I0112 13:07:43.033719 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:43Z","lastTransitionTime":"2026-01-12T13:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:43 crc kubenswrapper[4580]: I0112 13:07:43.135947 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:43 crc kubenswrapper[4580]: I0112 13:07:43.135979 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:43 crc kubenswrapper[4580]: I0112 13:07:43.135988 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:43 crc kubenswrapper[4580]: I0112 13:07:43.136018 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:43 crc kubenswrapper[4580]: I0112 13:07:43.136027 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:43Z","lastTransitionTime":"2026-01-12T13:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:43 crc kubenswrapper[4580]: I0112 13:07:43.238007 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:43 crc kubenswrapper[4580]: I0112 13:07:43.238029 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:43 crc kubenswrapper[4580]: I0112 13:07:43.238037 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:43 crc kubenswrapper[4580]: I0112 13:07:43.238046 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:43 crc kubenswrapper[4580]: I0112 13:07:43.238054 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:43Z","lastTransitionTime":"2026-01-12T13:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:43 crc kubenswrapper[4580]: I0112 13:07:43.280786 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 12 13:07:43 crc kubenswrapper[4580]: E0112 13:07:43.280893 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 12 13:07:43 crc kubenswrapper[4580]: I0112 13:07:43.280991 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:07:43 crc kubenswrapper[4580]: E0112 13:07:43.281145 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 12 13:07:43 crc kubenswrapper[4580]: I0112 13:07:43.340002 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:43 crc kubenswrapper[4580]: I0112 13:07:43.340027 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:43 crc kubenswrapper[4580]: I0112 13:07:43.340035 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:43 crc kubenswrapper[4580]: I0112 13:07:43.340048 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:43 crc kubenswrapper[4580]: I0112 13:07:43.340056 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:43Z","lastTransitionTime":"2026-01-12T13:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:43 crc kubenswrapper[4580]: I0112 13:07:43.442146 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:43 crc kubenswrapper[4580]: I0112 13:07:43.442194 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:43 crc kubenswrapper[4580]: I0112 13:07:43.442205 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:43 crc kubenswrapper[4580]: I0112 13:07:43.442225 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:43 crc kubenswrapper[4580]: I0112 13:07:43.442238 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:43Z","lastTransitionTime":"2026-01-12T13:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:43 crc kubenswrapper[4580]: I0112 13:07:43.544211 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:43 crc kubenswrapper[4580]: I0112 13:07:43.544250 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:43 crc kubenswrapper[4580]: I0112 13:07:43.544262 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:43 crc kubenswrapper[4580]: I0112 13:07:43.544278 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:43 crc kubenswrapper[4580]: I0112 13:07:43.544383 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:43Z","lastTransitionTime":"2026-01-12T13:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:43 crc kubenswrapper[4580]: I0112 13:07:43.645680 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:43 crc kubenswrapper[4580]: I0112 13:07:43.645744 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:43 crc kubenswrapper[4580]: I0112 13:07:43.645760 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:43 crc kubenswrapper[4580]: I0112 13:07:43.645775 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:43 crc kubenswrapper[4580]: I0112 13:07:43.645786 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:43Z","lastTransitionTime":"2026-01-12T13:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:43 crc kubenswrapper[4580]: I0112 13:07:43.747455 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:43 crc kubenswrapper[4580]: I0112 13:07:43.747487 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:43 crc kubenswrapper[4580]: I0112 13:07:43.747498 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:43 crc kubenswrapper[4580]: I0112 13:07:43.747510 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:43 crc kubenswrapper[4580]: I0112 13:07:43.747519 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:43Z","lastTransitionTime":"2026-01-12T13:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:43 crc kubenswrapper[4580]: I0112 13:07:43.849891 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:43 crc kubenswrapper[4580]: I0112 13:07:43.849931 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:43 crc kubenswrapper[4580]: I0112 13:07:43.849941 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:43 crc kubenswrapper[4580]: I0112 13:07:43.849956 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:43 crc kubenswrapper[4580]: I0112 13:07:43.849968 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:43Z","lastTransitionTime":"2026-01-12T13:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:43 crc kubenswrapper[4580]: I0112 13:07:43.951888 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:43 crc kubenswrapper[4580]: I0112 13:07:43.951942 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:43 crc kubenswrapper[4580]: I0112 13:07:43.951951 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:43 crc kubenswrapper[4580]: I0112 13:07:43.951964 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:43 crc kubenswrapper[4580]: I0112 13:07:43.951973 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:43Z","lastTransitionTime":"2026-01-12T13:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:44 crc kubenswrapper[4580]: I0112 13:07:44.053844 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:44 crc kubenswrapper[4580]: I0112 13:07:44.053871 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:44 crc kubenswrapper[4580]: I0112 13:07:44.053880 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:44 crc kubenswrapper[4580]: I0112 13:07:44.053890 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:44 crc kubenswrapper[4580]: I0112 13:07:44.053898 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:44Z","lastTransitionTime":"2026-01-12T13:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:44 crc kubenswrapper[4580]: I0112 13:07:44.156060 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:44 crc kubenswrapper[4580]: I0112 13:07:44.156093 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:44 crc kubenswrapper[4580]: I0112 13:07:44.156120 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:44 crc kubenswrapper[4580]: I0112 13:07:44.156135 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:44 crc kubenswrapper[4580]: I0112 13:07:44.156145 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:44Z","lastTransitionTime":"2026-01-12T13:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:44 crc kubenswrapper[4580]: I0112 13:07:44.257995 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:44 crc kubenswrapper[4580]: I0112 13:07:44.258038 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:44 crc kubenswrapper[4580]: I0112 13:07:44.258049 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:44 crc kubenswrapper[4580]: I0112 13:07:44.258063 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:44 crc kubenswrapper[4580]: I0112 13:07:44.258073 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:44Z","lastTransitionTime":"2026-01-12T13:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:44 crc kubenswrapper[4580]: I0112 13:07:44.280675 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 12 13:07:44 crc kubenswrapper[4580]: E0112 13:07:44.280792 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 12 13:07:44 crc kubenswrapper[4580]: I0112 13:07:44.280917 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jw27h" Jan 12 13:07:44 crc kubenswrapper[4580]: E0112 13:07:44.281166 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jw27h" podUID="5066d8fa-2cee-4764-a817-b819d3876638" Jan 12 13:07:44 crc kubenswrapper[4580]: I0112 13:07:44.360409 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:44 crc kubenswrapper[4580]: I0112 13:07:44.360444 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:44 crc kubenswrapper[4580]: I0112 13:07:44.360456 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:44 crc kubenswrapper[4580]: I0112 13:07:44.360472 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:44 crc kubenswrapper[4580]: I0112 13:07:44.360485 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:44Z","lastTransitionTime":"2026-01-12T13:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:44 crc kubenswrapper[4580]: I0112 13:07:44.463219 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:44 crc kubenswrapper[4580]: I0112 13:07:44.463311 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:44 crc kubenswrapper[4580]: I0112 13:07:44.463324 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:44 crc kubenswrapper[4580]: I0112 13:07:44.463350 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:44 crc kubenswrapper[4580]: I0112 13:07:44.463369 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:44Z","lastTransitionTime":"2026-01-12T13:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:44 crc kubenswrapper[4580]: I0112 13:07:44.565486 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:44 crc kubenswrapper[4580]: I0112 13:07:44.565533 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:44 crc kubenswrapper[4580]: I0112 13:07:44.565545 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:44 crc kubenswrapper[4580]: I0112 13:07:44.565565 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:44 crc kubenswrapper[4580]: I0112 13:07:44.565577 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:44Z","lastTransitionTime":"2026-01-12T13:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:44 crc kubenswrapper[4580]: I0112 13:07:44.667927 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:44 crc kubenswrapper[4580]: I0112 13:07:44.667968 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:44 crc kubenswrapper[4580]: I0112 13:07:44.667980 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:44 crc kubenswrapper[4580]: I0112 13:07:44.667995 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:44 crc kubenswrapper[4580]: I0112 13:07:44.668010 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:44Z","lastTransitionTime":"2026-01-12T13:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:44 crc kubenswrapper[4580]: I0112 13:07:44.769863 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:44 crc kubenswrapper[4580]: I0112 13:07:44.769897 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:44 crc kubenswrapper[4580]: I0112 13:07:44.769906 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:44 crc kubenswrapper[4580]: I0112 13:07:44.769917 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:44 crc kubenswrapper[4580]: I0112 13:07:44.769928 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:44Z","lastTransitionTime":"2026-01-12T13:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:44 crc kubenswrapper[4580]: I0112 13:07:44.852781 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5066d8fa-2cee-4764-a817-b819d3876638-metrics-certs\") pod \"network-metrics-daemon-jw27h\" (UID: \"5066d8fa-2cee-4764-a817-b819d3876638\") " pod="openshift-multus/network-metrics-daemon-jw27h" Jan 12 13:07:44 crc kubenswrapper[4580]: E0112 13:07:44.852907 4580 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 12 13:07:44 crc kubenswrapper[4580]: E0112 13:07:44.852961 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5066d8fa-2cee-4764-a817-b819d3876638-metrics-certs podName:5066d8fa-2cee-4764-a817-b819d3876638 nodeName:}" failed. No retries permitted until 2026-01-12 13:08:16.852947677 +0000 UTC m=+95.897166356 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5066d8fa-2cee-4764-a817-b819d3876638-metrics-certs") pod "network-metrics-daemon-jw27h" (UID: "5066d8fa-2cee-4764-a817-b819d3876638") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 12 13:07:44 crc kubenswrapper[4580]: I0112 13:07:44.871795 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:44 crc kubenswrapper[4580]: I0112 13:07:44.871844 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:44 crc kubenswrapper[4580]: I0112 13:07:44.871855 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:44 crc kubenswrapper[4580]: I0112 13:07:44.871869 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:44 crc kubenswrapper[4580]: I0112 13:07:44.871878 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:44Z","lastTransitionTime":"2026-01-12T13:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:44 crc kubenswrapper[4580]: I0112 13:07:44.974129 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:44 crc kubenswrapper[4580]: I0112 13:07:44.974168 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:44 crc kubenswrapper[4580]: I0112 13:07:44.974178 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:44 crc kubenswrapper[4580]: I0112 13:07:44.974193 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:44 crc kubenswrapper[4580]: I0112 13:07:44.974203 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:44Z","lastTransitionTime":"2026-01-12T13:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:45 crc kubenswrapper[4580]: I0112 13:07:45.076074 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:45 crc kubenswrapper[4580]: I0112 13:07:45.076137 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:45 crc kubenswrapper[4580]: I0112 13:07:45.076147 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:45 crc kubenswrapper[4580]: I0112 13:07:45.076165 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:45 crc kubenswrapper[4580]: I0112 13:07:45.076177 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:45Z","lastTransitionTime":"2026-01-12T13:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:45 crc kubenswrapper[4580]: I0112 13:07:45.178020 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:45 crc kubenswrapper[4580]: I0112 13:07:45.178050 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:45 crc kubenswrapper[4580]: I0112 13:07:45.178060 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:45 crc kubenswrapper[4580]: I0112 13:07:45.178074 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:45 crc kubenswrapper[4580]: I0112 13:07:45.178085 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:45Z","lastTransitionTime":"2026-01-12T13:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:45 crc kubenswrapper[4580]: I0112 13:07:45.280216 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:45 crc kubenswrapper[4580]: I0112 13:07:45.280245 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:45 crc kubenswrapper[4580]: I0112 13:07:45.280253 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:45 crc kubenswrapper[4580]: I0112 13:07:45.280266 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:45 crc kubenswrapper[4580]: I0112 13:07:45.280277 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:45Z","lastTransitionTime":"2026-01-12T13:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:45 crc kubenswrapper[4580]: I0112 13:07:45.281238 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 12 13:07:45 crc kubenswrapper[4580]: I0112 13:07:45.281261 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:07:45 crc kubenswrapper[4580]: E0112 13:07:45.281334 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 12 13:07:45 crc kubenswrapper[4580]: E0112 13:07:45.281407 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 12 13:07:45 crc kubenswrapper[4580]: I0112 13:07:45.381959 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:45 crc kubenswrapper[4580]: I0112 13:07:45.381989 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:45 crc kubenswrapper[4580]: I0112 13:07:45.382000 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:45 crc kubenswrapper[4580]: I0112 13:07:45.382012 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:45 crc kubenswrapper[4580]: I0112 13:07:45.382022 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:45Z","lastTransitionTime":"2026-01-12T13:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:45 crc kubenswrapper[4580]: I0112 13:07:45.483854 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:45 crc kubenswrapper[4580]: I0112 13:07:45.483892 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:45 crc kubenswrapper[4580]: I0112 13:07:45.483901 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:45 crc kubenswrapper[4580]: I0112 13:07:45.483915 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:45 crc kubenswrapper[4580]: I0112 13:07:45.483928 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:45Z","lastTransitionTime":"2026-01-12T13:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:45 crc kubenswrapper[4580]: I0112 13:07:45.586208 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:45 crc kubenswrapper[4580]: I0112 13:07:45.586246 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:45 crc kubenswrapper[4580]: I0112 13:07:45.586256 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:45 crc kubenswrapper[4580]: I0112 13:07:45.586268 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:45 crc kubenswrapper[4580]: I0112 13:07:45.586277 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:45Z","lastTransitionTime":"2026-01-12T13:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:45 crc kubenswrapper[4580]: I0112 13:07:45.688375 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:45 crc kubenswrapper[4580]: I0112 13:07:45.688428 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:45 crc kubenswrapper[4580]: I0112 13:07:45.688442 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:45 crc kubenswrapper[4580]: I0112 13:07:45.688454 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:45 crc kubenswrapper[4580]: I0112 13:07:45.688463 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:45Z","lastTransitionTime":"2026-01-12T13:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:45 crc kubenswrapper[4580]: I0112 13:07:45.790948 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:45 crc kubenswrapper[4580]: I0112 13:07:45.790974 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:45 crc kubenswrapper[4580]: I0112 13:07:45.790983 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:45 crc kubenswrapper[4580]: I0112 13:07:45.790996 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:45 crc kubenswrapper[4580]: I0112 13:07:45.791007 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:45Z","lastTransitionTime":"2026-01-12T13:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:45 crc kubenswrapper[4580]: I0112 13:07:45.893258 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:45 crc kubenswrapper[4580]: I0112 13:07:45.893301 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:45 crc kubenswrapper[4580]: I0112 13:07:45.893313 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:45 crc kubenswrapper[4580]: I0112 13:07:45.893330 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:45 crc kubenswrapper[4580]: I0112 13:07:45.893340 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:45Z","lastTransitionTime":"2026-01-12T13:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:45 crc kubenswrapper[4580]: I0112 13:07:45.996883 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:45 crc kubenswrapper[4580]: I0112 13:07:45.996931 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:45 crc kubenswrapper[4580]: I0112 13:07:45.996943 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:45 crc kubenswrapper[4580]: I0112 13:07:45.996960 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:45 crc kubenswrapper[4580]: I0112 13:07:45.996976 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:45Z","lastTransitionTime":"2026-01-12T13:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:46 crc kubenswrapper[4580]: I0112 13:07:46.099856 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:46 crc kubenswrapper[4580]: I0112 13:07:46.099909 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:46 crc kubenswrapper[4580]: I0112 13:07:46.099920 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:46 crc kubenswrapper[4580]: I0112 13:07:46.099932 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:46 crc kubenswrapper[4580]: I0112 13:07:46.099941 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:46Z","lastTransitionTime":"2026-01-12T13:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:46 crc kubenswrapper[4580]: I0112 13:07:46.201931 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:46 crc kubenswrapper[4580]: I0112 13:07:46.202009 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:46 crc kubenswrapper[4580]: I0112 13:07:46.202037 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:46 crc kubenswrapper[4580]: I0112 13:07:46.202051 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:46 crc kubenswrapper[4580]: I0112 13:07:46.202060 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:46Z","lastTransitionTime":"2026-01-12T13:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:46 crc kubenswrapper[4580]: I0112 13:07:46.280629 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 12 13:07:46 crc kubenswrapper[4580]: E0112 13:07:46.280758 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 12 13:07:46 crc kubenswrapper[4580]: I0112 13:07:46.280645 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jw27h"
Jan 12 13:07:46 crc kubenswrapper[4580]: E0112 13:07:46.280842 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jw27h" podUID="5066d8fa-2cee-4764-a817-b819d3876638"
Jan 12 13:07:46 crc kubenswrapper[4580]: I0112 13:07:46.303766 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 12 13:07:46 crc kubenswrapper[4580]: I0112 13:07:46.303805 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 12 13:07:46 crc kubenswrapper[4580]: I0112 13:07:46.303817 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 12 13:07:46 crc kubenswrapper[4580]: I0112 13:07:46.303831 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 12 13:07:46 crc kubenswrapper[4580]: I0112 13:07:46.303841 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:46Z","lastTransitionTime":"2026-01-12T13:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 12 13:07:46 crc kubenswrapper[4580]: I0112 13:07:46.405847 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 12 13:07:46 crc kubenswrapper[4580]: I0112 13:07:46.405879 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 12 13:07:46 crc kubenswrapper[4580]: I0112 13:07:46.405888 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 12 13:07:46 crc kubenswrapper[4580]: I0112 13:07:46.405902 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 12 13:07:46 crc kubenswrapper[4580]: I0112 13:07:46.405914 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:46Z","lastTransitionTime":"2026-01-12T13:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 12 13:07:46 crc kubenswrapper[4580]: I0112 13:07:46.507631 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 12 13:07:46 crc kubenswrapper[4580]: I0112 13:07:46.507696 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 12 13:07:46 crc kubenswrapper[4580]: I0112 13:07:46.507708 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 12 13:07:46 crc kubenswrapper[4580]: I0112 13:07:46.507741 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 12 13:07:46 crc kubenswrapper[4580]: I0112 13:07:46.507752 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:46Z","lastTransitionTime":"2026-01-12T13:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 12 13:07:46 crc kubenswrapper[4580]: I0112 13:07:46.609661 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 12 13:07:46 crc kubenswrapper[4580]: I0112 13:07:46.609711 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 12 13:07:46 crc kubenswrapper[4580]: I0112 13:07:46.609720 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 12 13:07:46 crc kubenswrapper[4580]: I0112 13:07:46.609735 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 12 13:07:46 crc kubenswrapper[4580]: I0112 13:07:46.609745 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:46Z","lastTransitionTime":"2026-01-12T13:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 12 13:07:46 crc kubenswrapper[4580]: I0112 13:07:46.637294 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nnz5s_c8f39bcc-5a25-4746-988b-2251fd1be8c9/kube-multus/0.log"
Jan 12 13:07:46 crc kubenswrapper[4580]: I0112 13:07:46.637341 4580 generic.go:334] "Generic (PLEG): container finished" podID="c8f39bcc-5a25-4746-988b-2251fd1be8c9" containerID="56aa8b2b49ab1c35203cc85f8e7cd333d538b5739be0e36db8a3fa8263c079ce" exitCode=1
Jan 12 13:07:46 crc kubenswrapper[4580]: I0112 13:07:46.637369 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-nnz5s" event={"ID":"c8f39bcc-5a25-4746-988b-2251fd1be8c9","Type":"ContainerDied","Data":"56aa8b2b49ab1c35203cc85f8e7cd333d538b5739be0e36db8a3fa8263c079ce"}
Jan 12 13:07:46 crc kubenswrapper[4580]: I0112 13:07:46.637709 4580 scope.go:117] "RemoveContainer" containerID="56aa8b2b49ab1c35203cc85f8e7cd333d538b5739be0e36db8a3fa8263c079ce"
Jan 12 13:07:46 crc kubenswrapper[4580]: I0112 13:07:46.649804 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5b34839-7efb-4fe1-ab7f-7d5b1edbf09a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5f5c5f418e2ffb24aff3f3056f26725003da15b14ea3f503039403320803a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afcaaf941d0811f34d5bb6d98ebedbeca17d15c8ce48a5604758570aa393d700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c1f9fb31f42b2e87cf98227241e7c66b834d473dc625999d5cf28df80b5076b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://485ad5c9f5a1a0f3219b48e7c2b703985f426f1e068b12812f208e5843a98224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://485ad5c9f5a1a0f3219b48e7c2b703985f426f1e068b12812f208e5843a98224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:46Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:46 crc kubenswrapper[4580]: I0112 13:07:46.661084 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8ch98" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f20fb33-a98a-4b04-81b9-5ea16ae9f57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://643e92b1
4688d35a567c7351e9231a8855ec7d9704cc97466c2d901c4525108a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nbmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8ch98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:46Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:46 crc kubenswrapper[4580]: I0112 13:07:46.672147 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jw27h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5066d8fa-2cee-4764-a817-b819d3876638\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbqm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbqm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:07:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jw27h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:46Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:46 crc 
kubenswrapper[4580]: I0112 13:07:46.684162 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9730289-8e50-4a9a-b474-db6c268d5a30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2262814ad3b77a7aecef6dc39226a540c7d7839576606e11c4765c858e81834\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80ca0769a1431f
d4c134322feb11db7e54dd85e8f6b18a0ea43da48fe9b05005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c620e4b41d6183e427d9b95acc0e6e20f24998d210c706d93d0e8b08def41b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05c5ad3ad752dde0d33f89e89540f22790aa2905185c704d407fe605655c8e28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://e0c7ac25add51f8a9be790b9d47bc39155d83c4da0f3b241897d1395686feb68\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-12T13:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0112 13:06:53.362253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0112 13:06:53.363131 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2861103618/tls.crt::/tmp/serving-cert-2861103618/tls.key\\\\\\\"\\\\nI0112 13:06:58.635258 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0112 13:06:58.636943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0112 13:06:58.636960 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0112 13:06:58.636978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0112 13:06:58.636983 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0112 13:06:58.642885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0112 13:06:58.642904 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0112 13:06:58.642919 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642925 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0112 13:06:58.642931 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0112 13:06:58.642934 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0112 13:06:58.642937 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0112 13:06:58.645379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eeac0b697ceba82e51d043f12dcf4c6f0028990416b1ee40c5181232d962192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:46Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:46 crc kubenswrapper[4580]: I0112 13:07:46.695451 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14cae238-29c1-4657-b3f0-6a834484f48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b1b813e14b2e613be951c247a67eb9b5b29604c639ec2c8a26c652911e0a342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc8b55ba464a72a72e6361e6847c4e8c8b27f317e8eba5d95923fbaf62589880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://259d2e205fd4a46e432a91b0e09646a58b44d6da55b06c6d4ac87010c85babc4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00bb60e0955774504f186a916e89495432d2ea6a6b01cadbbe0cc6871383a030\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:46Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:46 crc kubenswrapper[4580]: I0112 13:07:46.706791 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:46Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:46 crc kubenswrapper[4580]: I0112 13:07:46.712476 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:46 crc kubenswrapper[4580]: I0112 13:07:46.712507 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:46 crc kubenswrapper[4580]: I0112 13:07:46.712516 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:46 crc 
kubenswrapper[4580]: I0112 13:07:46.712529 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:46 crc kubenswrapper[4580]: I0112 13:07:46.712539 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:46Z","lastTransitionTime":"2026-01-12T13:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:46 crc kubenswrapper[4580]: I0112 13:07:46.721282 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nnz5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f39bcc-5a25-4746-988b-2251fd1be8c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56aa8b2b49ab1c35203cc85f8e7cd333d538b5739be0e36db8a3fa8263c079ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56aa8b2b49ab1c35203cc85f8e7cd333d538b5739be0e36db8a3fa8263c079ce\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-12T13:07:46Z\\\",\\\"message\\\":\\\"2026-01-12T13:07:00+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6c82134a-0bcc-44ea-baee-ed00a5b086ed\\\\n2026-01-12T13:07:00+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6c82134a-0bcc-44ea-baee-ed00a5b086ed to /host/opt/cni/bin/\\\\n2026-01-12T13:07:01Z [verbose] multus-daemon started\\\\n2026-01-12T13:07:01Z [verbose] Readiness Indicator file check\\\\n2026-01-12T13:07:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m82m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nnz5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:46Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:46 crc kubenswrapper[4580]: I0112 13:07:46.736814 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd4e0810-eddb-47f5-a7dc-beed7b545112\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fac5585e690495e9f154b99e6a05f94dd617a57d0826867644b56df00697b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57fdd89443f292661ae2a8f73016f4a7f2889c08ffebd55d67ada2590b4344db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc26f2fe9c241fc3ede61426abd140792056fe45e03192531431303ac9669685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://381c313bb77deef21772fc32104aec4c0325e3493c641e2bf615bd897e58c71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ac8df759fbebae467ffd8c178ca19221cefd5f3c1aa999cd23e5d1e53a6187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b37c3b2535deee762ef305825de0a884e9088e57a34910ad2fcdaeb2d49d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4221a1e3039d381cba4b4412d20dc0127ca6ec3794a5c1b61996a339e880d645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4221a1e3039d381cba4b4412d20dc0127ca6ec3794a5c1b61996a339e880d645\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-12T13:07:21Z\\\",\\\"message\\\":\\\"Ds:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/package-server-manager-metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, 
Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.110\\\\\\\", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0112 13:07:21.988162 6218 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for namespace Informer during admin network policy controller initialization, handler {0x1fcbf20 0x1fcbc00 0x1fcbba0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:21Z i\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hn77p_openshift-ovn-kubernetes(fd4e0810-eddb-47f5-a7dc-beed7b545112)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00ff7f6b5ad3d1798e88f127c9bf71095fcbdfcf8f4338afa385717f1564ebf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ea8f8c492e0c30d17
1b9b05aa00966402c80f973de31557a1e13e16eb0c447b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hn77p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:46Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:46 crc kubenswrapper[4580]: I0112 13:07:46.747327 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thp2h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0adac83c-1303-404f-85a1-c7b477da2226\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a871f86fe29e275615cf2f7f0130151c5ed56d410a0f18f5267adf08be33f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfhs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:07:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thp2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:46Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:46 crc kubenswrapper[4580]: I0112 13:07:46.758670 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a82c47afb3ec7afc7fa35ff0e1e85e288f9e1a908459024005a16c0c8f3b0050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T1
3:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:46Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:46 crc kubenswrapper[4580]: I0112 13:07:46.771864 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3accce5d840e81a67e212ff934059ad73525c6ff3c73ed6ab4c6e2289a4d7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60b7e67369583f18d56633483204d326449c0f74
56afe4b4fd1e7134eff438cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hdz6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:46Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:46 crc kubenswrapper[4580]: I0112 13:07:46.783161 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2p6r8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2223aac-784e-4653-8939-fcbd18c70ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81fbec7b59dcc9c80a97b122e2b0e738fbbfb3eafca1bf9989fe743f28573191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1dc0fffc41810cdb9a5eeb53b19f6a23d70a8133c6e12b19df575f86a55d18\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1dc0fffc41810cdb9a5eeb53b19f6a23d70a8133c6e12b19df575f86a55d18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab60600011f08831d514dad04b97fb6b587736b18b55b1bff9a33143b9a92997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab60600011f08831d514dad04b97fb6b587736b18b55b1bff9a33143b9a92997\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2709a93c305db448fb509fbbdf606c297b26f1ae08e6b9b05933c155f59416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff2709a93c305db448fb509fbbdf606c297b26f1ae08e6b9b05933c155f59416\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88f87
08217fbcbf532b977d30ab903955722d04a00ba29ded44ce09610140e27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88f8708217fbcbf532b977d30ab903955722d04a00ba29ded44ce09610140e27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5844c48078cc7d6868f4ff81ac1a2bb878892529b11823ecabd49fad4aed60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e5844c48078cc7d6868f4ff81ac1a2bb878892529b11823ecabd49fad4aed60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:04Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2e02e66890bca8171c7112c74521a43c3458f07890228426f04c2bdfad4599\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d2e02e66890bca8171c7112c74521a43c3458f07890228426f04c2bdfad4599\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2p6r8\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:46Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:46 crc kubenswrapper[4580]: I0112 13:07:46.794701 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vmmdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61051313-b754-4528-ade6-ffacbebafb8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a321f1ea1e9a558494aa66641fd251a100e0bdceddf5b2034bfa067c23555138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsss4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14a151ee487ef6c2e5141ec5a25b8b7e468c224b262fd09538db0e939b8cf95a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsss4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vmmdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-12T13:07:46Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:46 crc kubenswrapper[4580]: I0112 13:07:46.811731 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35b1ac8c-9d11-4c54-98ab-fa848030f05e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1088ffa1a5bf02ca8606518a6f8c9cbeba544651dfafbb34e8860c2a12ffc1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c98177e2b081aadb6fd03620e308bb5d9ff403f1498eb875f7cf6d836dd23aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cea93cd026e7a60c22105833d2c3ada192fc16d45f46e5c9ce2652e94f92fab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c811167080fb15b5c19b8b57f76f4b8c5b2ed87d43d1b320ad024683ab58b65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14411e27d1e7de0627ca0d6f0ecbca70787ef8e9311ff3ffbb923da942e47955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://200ede5d7f69bb74d8e7d1b5081850d73057f7aef07049cab7a4dd1382de0cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setu
p\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://200ede5d7f69bb74d8e7d1b5081850d73057f7aef07049cab7a4dd1382de0cfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04470dc724661e24dc43e182f9c5dc106623e8dfb269280e6dc0fc0710f6a4a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04470dc724661e24dc43e182f9c5dc106623e8dfb269280e6dc0fc0710f6a4a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da31efcbced890b1046b1f058c1c00e4d2788162749c1da32d87c8b59360aa58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da31efcbced890b1046b1f058c1c00e4d2788162749c1da32d87c8b59360aa58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\
"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:46Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:46 crc kubenswrapper[4580]: I0112 13:07:46.815007 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:46 crc kubenswrapper[4580]: I0112 13:07:46.815052 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:46 crc kubenswrapper[4580]: I0112 13:07:46.815066 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:46 crc kubenswrapper[4580]: I0112 13:07:46.815089 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:46 crc kubenswrapper[4580]: I0112 13:07:46.815117 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:46Z","lastTransitionTime":"2026-01-12T13:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:46 crc kubenswrapper[4580]: I0112 13:07:46.821646 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fb543f1489aa79642944188788308013ed9b6bacb720a3ee689b376cbc6a33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\
" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:46Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:46 crc kubenswrapper[4580]: I0112 13:07:46.830131 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e120eaa6bd8e36a0bc509f7877252fbf4b0cebb89222dd193f75502e472fa7af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/
ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f05ca3c8a1887284f1162c44d1b917ad955eb8d77b816e830caddffdf0430383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:46Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:46 crc kubenswrapper[4580]: I0112 13:07:46.838077 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:46Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:46 crc kubenswrapper[4580]: I0112 13:07:46.846989 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:46Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:46 crc kubenswrapper[4580]: I0112 13:07:46.917452 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:46 crc kubenswrapper[4580]: I0112 13:07:46.917490 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:46 crc kubenswrapper[4580]: I0112 13:07:46.917520 4580 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:46 crc kubenswrapper[4580]: I0112 13:07:46.917535 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:46 crc kubenswrapper[4580]: I0112 13:07:46.917547 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:46Z","lastTransitionTime":"2026-01-12T13:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:47 crc kubenswrapper[4580]: I0112 13:07:47.020635 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:47 crc kubenswrapper[4580]: I0112 13:07:47.020826 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:47 crc kubenswrapper[4580]: I0112 13:07:47.020952 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:47 crc kubenswrapper[4580]: I0112 13:07:47.021082 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:47 crc kubenswrapper[4580]: I0112 13:07:47.021232 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:47Z","lastTransitionTime":"2026-01-12T13:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:47 crc kubenswrapper[4580]: I0112 13:07:47.123446 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:47 crc kubenswrapper[4580]: I0112 13:07:47.124081 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:47 crc kubenswrapper[4580]: I0112 13:07:47.124182 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:47 crc kubenswrapper[4580]: I0112 13:07:47.124246 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:47 crc kubenswrapper[4580]: I0112 13:07:47.124309 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:47Z","lastTransitionTime":"2026-01-12T13:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:47 crc kubenswrapper[4580]: I0112 13:07:47.226922 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:47 crc kubenswrapper[4580]: I0112 13:07:47.227010 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:47 crc kubenswrapper[4580]: I0112 13:07:47.227023 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:47 crc kubenswrapper[4580]: I0112 13:07:47.227042 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:47 crc kubenswrapper[4580]: I0112 13:07:47.227057 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:47Z","lastTransitionTime":"2026-01-12T13:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:47 crc kubenswrapper[4580]: I0112 13:07:47.281627 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 12 13:07:47 crc kubenswrapper[4580]: I0112 13:07:47.281735 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:07:47 crc kubenswrapper[4580]: E0112 13:07:47.281793 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 12 13:07:47 crc kubenswrapper[4580]: E0112 13:07:47.281910 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 12 13:07:47 crc kubenswrapper[4580]: I0112 13:07:47.329278 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:47 crc kubenswrapper[4580]: I0112 13:07:47.329322 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:47 crc kubenswrapper[4580]: I0112 13:07:47.329331 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:47 crc kubenswrapper[4580]: I0112 13:07:47.329349 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:47 crc kubenswrapper[4580]: I0112 13:07:47.329362 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:47Z","lastTransitionTime":"2026-01-12T13:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:47 crc kubenswrapper[4580]: I0112 13:07:47.431421 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:47 crc kubenswrapper[4580]: I0112 13:07:47.431465 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:47 crc kubenswrapper[4580]: I0112 13:07:47.431474 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:47 crc kubenswrapper[4580]: I0112 13:07:47.431489 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:47 crc kubenswrapper[4580]: I0112 13:07:47.431502 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:47Z","lastTransitionTime":"2026-01-12T13:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:47 crc kubenswrapper[4580]: I0112 13:07:47.533378 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:47 crc kubenswrapper[4580]: I0112 13:07:47.533421 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:47 crc kubenswrapper[4580]: I0112 13:07:47.533434 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:47 crc kubenswrapper[4580]: I0112 13:07:47.533450 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:47 crc kubenswrapper[4580]: I0112 13:07:47.533460 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:47Z","lastTransitionTime":"2026-01-12T13:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:47 crc kubenswrapper[4580]: I0112 13:07:47.641782 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:47 crc kubenswrapper[4580]: I0112 13:07:47.641844 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:47 crc kubenswrapper[4580]: I0112 13:07:47.641856 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:47 crc kubenswrapper[4580]: I0112 13:07:47.641869 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:47 crc kubenswrapper[4580]: I0112 13:07:47.641876 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:47Z","lastTransitionTime":"2026-01-12T13:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:47 crc kubenswrapper[4580]: I0112 13:07:47.644617 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nnz5s_c8f39bcc-5a25-4746-988b-2251fd1be8c9/kube-multus/0.log" Jan 12 13:07:47 crc kubenswrapper[4580]: I0112 13:07:47.644696 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-nnz5s" event={"ID":"c8f39bcc-5a25-4746-988b-2251fd1be8c9","Type":"ContainerStarted","Data":"2fd8b2f8f716304f83430fe4b505d29fbb68a1a5387205e72c68b65c260c7fc9"} Jan 12 13:07:47 crc kubenswrapper[4580]: I0112 13:07:47.660213 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a82c47afb3ec7afc7fa35ff0e1e85e288f9e1a908459024005a16c0c8f3b0050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt
\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:47Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:47 crc kubenswrapper[4580]: I0112 13:07:47.669981 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3accce5d840e81a67e212ff934059ad73525c6ff3c73ed6ab4c6e2289a4d7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60b7e67369583f18d56633483204d326449c0f74
56afe4b4fd1e7134eff438cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hdz6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:47Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:47 crc kubenswrapper[4580]: I0112 13:07:47.682372 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2p6r8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2223aac-784e-4653-8939-fcbd18c70ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81fbec7b59dcc9c80a97b122e2b0e738fbbfb3eafca1bf9989fe743f28573191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1dc0fffc41810cdb9a5eeb53b19f6a23d70a8133c6e12b19df575f86a55d18\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1dc0fffc41810cdb9a5eeb53b19f6a23d70a8133c6e12b19df575f86a55d18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab60600011f08831d514dad04b97fb6b587736b18b55b1bff9a33143b9a92997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab60600011f08831d514dad04b97fb6b587736b18b55b1bff9a33143b9a92997\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2709a93c305db448fb509fbbdf606c297b26f1ae08e6b9b05933c155f59416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff2709a93c305db448fb509fbbdf606c297b26f1ae08e6b9b05933c155f59416\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88f87
08217fbcbf532b977d30ab903955722d04a00ba29ded44ce09610140e27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88f8708217fbcbf532b977d30ab903955722d04a00ba29ded44ce09610140e27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5844c48078cc7d6868f4ff81ac1a2bb878892529b11823ecabd49fad4aed60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e5844c48078cc7d6868f4ff81ac1a2bb878892529b11823ecabd49fad4aed60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:04Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2e02e66890bca8171c7112c74521a43c3458f07890228426f04c2bdfad4599\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d2e02e66890bca8171c7112c74521a43c3458f07890228426f04c2bdfad4599\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2p6r8\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:47Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:47 crc kubenswrapper[4580]: I0112 13:07:47.692702 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vmmdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61051313-b754-4528-ade6-ffacbebafb8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a321f1ea1e9a558494aa66641fd251a100e0bdceddf5b2034bfa067c23555138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsss4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14a151ee487ef6c2e5141ec5a25b8b7e468c224b262fd09538db0e939b8cf95a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsss4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vmmdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-12T13:07:47Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:47 crc kubenswrapper[4580]: I0112 13:07:47.704436 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fb543f1489aa79642944188788308013ed9b6bacb720a3ee689b376cbc6a33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:47Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:47 crc kubenswrapper[4580]: I0112 13:07:47.714631 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e120eaa6bd8e36a0bc509f7877252fbf4b0cebb89222dd193f75502e472fa7af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkub
e-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f05ca3c8a1887284f1162c44d1b917ad955eb8d77b816e830caddffdf0430383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:47Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:47 crc kubenswrapper[4580]: I0112 13:07:47.725076 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:47Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:47 crc kubenswrapper[4580]: I0112 13:07:47.734783 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:47Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:47 crc kubenswrapper[4580]: I0112 13:07:47.743962 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:47 crc kubenswrapper[4580]: I0112 13:07:47.743989 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:47 crc kubenswrapper[4580]: I0112 13:07:47.743999 4580 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:47 crc kubenswrapper[4580]: I0112 13:07:47.744016 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:47 crc kubenswrapper[4580]: I0112 13:07:47.744026 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:47Z","lastTransitionTime":"2026-01-12T13:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:47 crc kubenswrapper[4580]: I0112 13:07:47.749428 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35b1ac8c-9d11-4c54-98ab-fa848030f05e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1088ffa1a5bf02ca8606518a6f8c9cbeba544651dfafbb34e8860c2a12ffc1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c98177e2b081aadb6fd03620e308bb5d9ff403f1498eb875f7cf6d836dd23aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cea93cd026e7a60c22105833d2c3ada192fc16d45f46e5c9ce2652e94f92fab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1
847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c811167080fb15b5c19b8b57f76f4b8c5b2ed87d43d1b320ad024683ab58b65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14411e27d1e7de0627ca0d6f0ecbca70787ef8e9311ff3ffbb923da942e47955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://200ede5d7f69bb74d8e7d1b5081850d73057f7aef07049cab7a4dd1382de0cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://200ede5d7f69bb74d8e7d1b5081850d73057f7aef07049cab7a4dd1382de0cfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04470dc724661e24dc43e182f9c5dc106623e8dfb269280e6dc0fc0710f6a4a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04470dc724661e24dc43e182f9c5dc106623e8dfb269280e6dc0fc0710f6a4a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da31e
fcbced890b1046b1f058c1c00e4d2788162749c1da32d87c8b59360aa58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da31efcbced890b1046b1f058c1c00e4d2788162749c1da32d87c8b59360aa58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:47Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:47 crc kubenswrapper[4580]: I0112 13:07:47.757270 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8ch98" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f20fb33-a98a-4b04-81b9-5ea16ae9f57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://643e92b14688d35a567c7351e9231a8855ec7d9704cc97466c2d901c4525108a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nbmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8ch98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:47Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:47 crc kubenswrapper[4580]: I0112 13:07:47.765692 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jw27h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5066d8fa-2cee-4764-a817-b819d3876638\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbqm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbqm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:07:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jw27h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:47Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:47 crc 
kubenswrapper[4580]: I0112 13:07:47.774467 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5b34839-7efb-4fe1-ab7f-7d5b1edbf09a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5f5c5f418e2ffb24aff3f3056f26725003da15b14ea3f503039403320803a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afcaaf941d0811f34d5bb6d98ebedbeca17d15c8ce48a5604758570aa393d700\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c1f9fb31f42b2e87cf98227241e7c66b834d473dc625999d5cf28df80b5076b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://485ad5c9f5a1a0f3219b48e7c2b703985f426f1e068b12812f208e5843a98224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://485ad5c9f5a1a0f3219b48e7c2b703985f426f1e068b12812f208e5843a98224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:47Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:47 crc kubenswrapper[4580]: I0112 13:07:47.784866 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14cae238-29c1-4657-b3f0-6a834484f48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b1b813e14b2e613be951c247a67eb9b5b29604c639ec2c8a26c652911e0a342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc8b55ba464a72a72e6361e6847c4e8c8b27f317e8eba5d95923fbaf62589880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://259d2e205fd4a46e432a91b0e09646a58b44d6da55b06c6d4ac87010c85babc4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00bb60e0955774504f186a916e89495432d2ea6a6b01cadbbe0cc6871383a030\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:47Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:47 crc kubenswrapper[4580]: I0112 13:07:47.794706 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:47Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:47 crc kubenswrapper[4580]: I0112 13:07:47.804855 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nnz5s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f39bcc-5a25-4746-988b-2251fd1be8c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd8b2f8f716304f83430fe4b505d29fbb68a1a5387205e72c68b65c260c7fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56aa8b2b49ab1c35203cc85f8e7cd333d538b5739be0e36db8a3fa8263c079ce\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-12T13:07:46Z\\\",\\\"message\\\":\\\"2026-01-12T13:07:00+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6c82134a-0bcc-44ea-baee-ed00a5b086ed\\\\n2026-01-12T13:07:00+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6c82134a-0bcc-44ea-baee-ed00a5b086ed to /host/opt/cni/bin/\\\\n2026-01-12T13:07:01Z [verbose] multus-daemon started\\\\n2026-01-12T13:07:01Z [verbose] 
Readiness Indicator file check\\\\n2026-01-12T13:07:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m82m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nnz5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:47Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:47 crc kubenswrapper[4580]: I0112 13:07:47.818638 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd4e0810-eddb-47f5-a7dc-beed7b545112\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fac5585e690495e9f154b99e6a05f94dd617a57d0826867644b56df00697b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57fdd89443f292661ae2a8f73016f4a7f2889c08ffebd55d67ada2590b4344db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc26f2fe9c241fc3ede61426abd140792056fe45e03192531431303ac9669685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://381c313bb77deef21772fc32104aec4c0325e3493c641e2bf615bd897e58c71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ac8df759fbebae467ffd8c178ca19221cefd5f3c1aa999cd23e5d1e53a6187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b37c3b2535deee762ef305825de0a884e9088e57a34910ad2fcdaeb2d49d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4221a1e3039d381cba4b4412d20dc0127ca6ec3794a5c1b61996a339e880d645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4221a1e3039d381cba4b4412d20dc0127ca6ec3794a5c1b61996a339e880d645\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-12T13:07:21Z\\\",\\\"message\\\":\\\"Ds:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/package-server-manager-metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.110\\\\\\\", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0112 13:07:21.988162 6218 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for namespace Informer during admin network policy controller initialization, handler {0x1fcbf20 0x1fcbc00 0x1fcbba0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:21Z i\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hn77p_openshift-ovn-kubernetes(fd4e0810-eddb-47f5-a7dc-beed7b545112)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00ff7f6b5ad3d1798e88f127c9bf71095fcbdfcf8f4338afa385717f1564ebf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ea8f8c492e0c30d17
1b9b05aa00966402c80f973de31557a1e13e16eb0c447b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hn77p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:47Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:47 crc kubenswrapper[4580]: I0112 13:07:47.826907 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thp2h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0adac83c-1303-404f-85a1-c7b477da2226\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a871f86fe29e275615cf2f7f0130151c5ed56d410a0f18f5267adf08be33f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfhs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:07:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thp2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:47Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:47 crc kubenswrapper[4580]: I0112 13:07:47.837801 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9730289-8e50-4a9a-b474-db6c268d5a30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2262814ad3b77a7aecef6dc39226a540c7d7839576606e11c4765c858e81834\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80ca0769a1431fd4c134322feb11db7e54dd85e8f6b18a0ea43da48fe9b05005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c620e4b41d6183e427d9b95acc0e6e20f24998d210c706d93d0e8b08def41b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05c5ad3ad752dde0d33f89e89540f22790aa2905185c704d407fe605655c8e28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0c7ac25add51f8a9be790b9d47bc39155d83c4da0f3b241897d1395686feb68\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-12T13:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0112 13:06:53.362253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0112 13:06:53.363131 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2861103618/tls.crt::/tmp/serving-cert-2861103618/tls.key\\\\\\\"\\\\nI0112 13:06:58.635258 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0112 13:06:58.636943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0112 13:06:58.636960 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0112 13:06:58.636978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0112 13:06:58.636983 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0112 13:06:58.642885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0112 13:06:58.642904 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0112 13:06:58.642919 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642925 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0112 13:06:58.642931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0112 13:06:58.642934 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0112 13:06:58.642937 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0112 13:06:58.645379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eeac0b697ceba82e51d043f12dcf4c6f0028990416b1ee40c5181232d962192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-
12T13:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:47Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:47 crc kubenswrapper[4580]: I0112 13:07:47.846388 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:47 crc kubenswrapper[4580]: I0112 13:07:47.846429 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:47 crc kubenswrapper[4580]: I0112 13:07:47.846441 4580 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:47 crc kubenswrapper[4580]: I0112 13:07:47.846458 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:47 crc kubenswrapper[4580]: I0112 13:07:47.846470 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:47Z","lastTransitionTime":"2026-01-12T13:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:47 crc kubenswrapper[4580]: I0112 13:07:47.951133 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:47 crc kubenswrapper[4580]: I0112 13:07:47.951174 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:47 crc kubenswrapper[4580]: I0112 13:07:47.951186 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:47 crc kubenswrapper[4580]: I0112 13:07:47.951204 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:47 crc kubenswrapper[4580]: I0112 13:07:47.951215 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:47Z","lastTransitionTime":"2026-01-12T13:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:48 crc kubenswrapper[4580]: I0112 13:07:48.053006 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:48 crc kubenswrapper[4580]: I0112 13:07:48.053039 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:48 crc kubenswrapper[4580]: I0112 13:07:48.053048 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:48 crc kubenswrapper[4580]: I0112 13:07:48.053061 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:48 crc kubenswrapper[4580]: I0112 13:07:48.053071 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:48Z","lastTransitionTime":"2026-01-12T13:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:48 crc kubenswrapper[4580]: I0112 13:07:48.154582 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:48 crc kubenswrapper[4580]: I0112 13:07:48.154625 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:48 crc kubenswrapper[4580]: I0112 13:07:48.154634 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:48 crc kubenswrapper[4580]: I0112 13:07:48.154649 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:48 crc kubenswrapper[4580]: I0112 13:07:48.154659 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:48Z","lastTransitionTime":"2026-01-12T13:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:48 crc kubenswrapper[4580]: I0112 13:07:48.256354 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:48 crc kubenswrapper[4580]: I0112 13:07:48.256379 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:48 crc kubenswrapper[4580]: I0112 13:07:48.256415 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:48 crc kubenswrapper[4580]: I0112 13:07:48.256426 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:48 crc kubenswrapper[4580]: I0112 13:07:48.256437 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:48Z","lastTransitionTime":"2026-01-12T13:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:48 crc kubenswrapper[4580]: I0112 13:07:48.280834 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 12 13:07:48 crc kubenswrapper[4580]: I0112 13:07:48.280864 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jw27h" Jan 12 13:07:48 crc kubenswrapper[4580]: E0112 13:07:48.280945 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 12 13:07:48 crc kubenswrapper[4580]: E0112 13:07:48.281008 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jw27h" podUID="5066d8fa-2cee-4764-a817-b819d3876638" Jan 12 13:07:48 crc kubenswrapper[4580]: I0112 13:07:48.358177 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:48 crc kubenswrapper[4580]: I0112 13:07:48.358204 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:48 crc kubenswrapper[4580]: I0112 13:07:48.358214 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:48 crc kubenswrapper[4580]: I0112 13:07:48.358227 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:48 crc kubenswrapper[4580]: I0112 13:07:48.358237 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:48Z","lastTransitionTime":"2026-01-12T13:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:48 crc kubenswrapper[4580]: I0112 13:07:48.460147 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:48 crc kubenswrapper[4580]: I0112 13:07:48.460175 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:48 crc kubenswrapper[4580]: I0112 13:07:48.460184 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:48 crc kubenswrapper[4580]: I0112 13:07:48.460215 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:48 crc kubenswrapper[4580]: I0112 13:07:48.460224 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:48Z","lastTransitionTime":"2026-01-12T13:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:48 crc kubenswrapper[4580]: I0112 13:07:48.563212 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:48 crc kubenswrapper[4580]: I0112 13:07:48.563250 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:48 crc kubenswrapper[4580]: I0112 13:07:48.563262 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:48 crc kubenswrapper[4580]: I0112 13:07:48.563275 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:48 crc kubenswrapper[4580]: I0112 13:07:48.563285 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:48Z","lastTransitionTime":"2026-01-12T13:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:48 crc kubenswrapper[4580]: I0112 13:07:48.664801 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:48 crc kubenswrapper[4580]: I0112 13:07:48.664845 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:48 crc kubenswrapper[4580]: I0112 13:07:48.664854 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:48 crc kubenswrapper[4580]: I0112 13:07:48.664875 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:48 crc kubenswrapper[4580]: I0112 13:07:48.664884 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:48Z","lastTransitionTime":"2026-01-12T13:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:48 crc kubenswrapper[4580]: I0112 13:07:48.766706 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:48 crc kubenswrapper[4580]: I0112 13:07:48.766738 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:48 crc kubenswrapper[4580]: I0112 13:07:48.766749 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:48 crc kubenswrapper[4580]: I0112 13:07:48.766762 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:48 crc kubenswrapper[4580]: I0112 13:07:48.766775 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:48Z","lastTransitionTime":"2026-01-12T13:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:48 crc kubenswrapper[4580]: I0112 13:07:48.868749 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:48 crc kubenswrapper[4580]: I0112 13:07:48.868781 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:48 crc kubenswrapper[4580]: I0112 13:07:48.868789 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:48 crc kubenswrapper[4580]: I0112 13:07:48.868802 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:48 crc kubenswrapper[4580]: I0112 13:07:48.868811 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:48Z","lastTransitionTime":"2026-01-12T13:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:48 crc kubenswrapper[4580]: I0112 13:07:48.972301 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:48 crc kubenswrapper[4580]: I0112 13:07:48.972341 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:48 crc kubenswrapper[4580]: I0112 13:07:48.972351 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:48 crc kubenswrapper[4580]: I0112 13:07:48.972366 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:48 crc kubenswrapper[4580]: I0112 13:07:48.972376 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:48Z","lastTransitionTime":"2026-01-12T13:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:49 crc kubenswrapper[4580]: I0112 13:07:49.074397 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:49 crc kubenswrapper[4580]: I0112 13:07:49.074453 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:49 crc kubenswrapper[4580]: I0112 13:07:49.074464 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:49 crc kubenswrapper[4580]: I0112 13:07:49.074477 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:49 crc kubenswrapper[4580]: I0112 13:07:49.074485 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:49Z","lastTransitionTime":"2026-01-12T13:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:49 crc kubenswrapper[4580]: I0112 13:07:49.181703 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:49 crc kubenswrapper[4580]: I0112 13:07:49.181833 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:49 crc kubenswrapper[4580]: I0112 13:07:49.181876 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:49 crc kubenswrapper[4580]: I0112 13:07:49.181920 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:49 crc kubenswrapper[4580]: I0112 13:07:49.181939 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:49Z","lastTransitionTime":"2026-01-12T13:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:49 crc kubenswrapper[4580]: I0112 13:07:49.281243 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 12 13:07:49 crc kubenswrapper[4580]: I0112 13:07:49.281303 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:07:49 crc kubenswrapper[4580]: E0112 13:07:49.281453 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 12 13:07:49 crc kubenswrapper[4580]: E0112 13:07:49.282053 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 12 13:07:49 crc kubenswrapper[4580]: I0112 13:07:49.282439 4580 scope.go:117] "RemoveContainer" containerID="4221a1e3039d381cba4b4412d20dc0127ca6ec3794a5c1b61996a339e880d645" Jan 12 13:07:49 crc kubenswrapper[4580]: I0112 13:07:49.284844 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:49 crc kubenswrapper[4580]: I0112 13:07:49.284874 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:49 crc kubenswrapper[4580]: I0112 13:07:49.284886 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:49 crc kubenswrapper[4580]: I0112 13:07:49.284898 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:49 crc kubenswrapper[4580]: I0112 13:07:49.284906 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:49Z","lastTransitionTime":"2026-01-12T13:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:49 crc kubenswrapper[4580]: I0112 13:07:49.386512 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:49 crc kubenswrapper[4580]: I0112 13:07:49.386672 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:49 crc kubenswrapper[4580]: I0112 13:07:49.386776 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:49 crc kubenswrapper[4580]: I0112 13:07:49.386865 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:49 crc kubenswrapper[4580]: I0112 13:07:49.386947 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:49Z","lastTransitionTime":"2026-01-12T13:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:49 crc kubenswrapper[4580]: I0112 13:07:49.489530 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:49 crc kubenswrapper[4580]: I0112 13:07:49.489577 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:49 crc kubenswrapper[4580]: I0112 13:07:49.489585 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:49 crc kubenswrapper[4580]: I0112 13:07:49.489603 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:49 crc kubenswrapper[4580]: I0112 13:07:49.489614 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:49Z","lastTransitionTime":"2026-01-12T13:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:49 crc kubenswrapper[4580]: I0112 13:07:49.591736 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:49 crc kubenswrapper[4580]: I0112 13:07:49.591779 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:49 crc kubenswrapper[4580]: I0112 13:07:49.591790 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:49 crc kubenswrapper[4580]: I0112 13:07:49.591812 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:49 crc kubenswrapper[4580]: I0112 13:07:49.591824 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:49Z","lastTransitionTime":"2026-01-12T13:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:49 crc kubenswrapper[4580]: I0112 13:07:49.651770 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hn77p_fd4e0810-eddb-47f5-a7dc-beed7b545112/ovnkube-controller/2.log" Jan 12 13:07:49 crc kubenswrapper[4580]: I0112 13:07:49.654258 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" event={"ID":"fd4e0810-eddb-47f5-a7dc-beed7b545112","Type":"ContainerStarted","Data":"20f47854f29c7f82bcbae567770052204b7fa2c092168c57ef54e14218812b98"} Jan 12 13:07:49 crc kubenswrapper[4580]: I0112 13:07:49.654733 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" Jan 12 13:07:49 crc kubenswrapper[4580]: I0112 13:07:49.669651 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:49Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:49 crc kubenswrapper[4580]: I0112 13:07:49.684925 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35b1ac8c-9d11-4c54-98ab-fa848030f05e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1088ffa1a5bf02ca8606518a6f8c9cbeba544651dfafbb34e8860c2a12ffc1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c98177e2b081aadb6fd03620e308bb5d9ff403f1498eb875f7cf6d836dd23aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cea93cd026e7a60c22105833d2c3ada192fc16d45f46e5c9ce2652e94f92fab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c811167080fb15b5c19b8b57f76f4b8c5b2ed87d43d1b320ad024683ab58b65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14411e27d1e7de0627ca0d6f0ecbca70787ef8e9311ff3ffbb923da942e47955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://200ede5d7f69bb74d8e7d1b5081850d73057f7aef07049cab7a4dd1382de0cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://200ede5d7f69bb74d8e7d1b5081850d73057f7aef07049cab7a4dd1382de0cfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04470dc724661e24dc43e182f9c5dc106623e8dfb269280e6dc0fc0710f6a4a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04470dc724661e24dc43e182f9c5dc106623e8dfb269280e6dc0fc0710f6a4a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da31efcbced890b1046b1f058c1c00e4d2788162749c1da32d87c8b59360aa58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da31efcbced890b1046b1f058c1c00e4d2788162749c1da32d87c8b59360aa58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:49Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:49 crc kubenswrapper[4580]: I0112 13:07:49.694129 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:49 crc kubenswrapper[4580]: I0112 13:07:49.694156 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:49 crc kubenswrapper[4580]: I0112 13:07:49.694165 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:49 crc kubenswrapper[4580]: I0112 13:07:49.694180 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:49 crc kubenswrapper[4580]: I0112 13:07:49.694193 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:49Z","lastTransitionTime":"2026-01-12T13:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:49 crc kubenswrapper[4580]: I0112 13:07:49.695208 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fb543f1489aa79642944188788308013ed9b6bacb720a3ee689b376cbc6a33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:49Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:49 crc kubenswrapper[4580]: I0112 13:07:49.703317 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e120eaa6bd8e36a0bc509f7877252fbf4b0cebb89222dd193f75502e472fa7af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-id
entity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f05ca3c8a1887284f1162c44d1b917ad955eb8d77b816e830caddffdf0430383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:49Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:49 crc kubenswrapper[4580]: I0112 13:07:49.713957 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:49Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:49 crc kubenswrapper[4580]: I0112 13:07:49.722739 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5b34839-7efb-4fe1-ab7f-7d5b1edbf09a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5f5c5f418e2ffb24aff3f3056f26725003da15b14ea3f503039403320803a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afcaaf941d0811f34d5bb6d98ebedbeca17d15c8ce48a5604758570aa393d700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c1f9fb31f42b2e87cf98227241e7c66b834d473dc625999d5cf28df80b5076b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://485ad5c9f5a1a0f3219b48e7c2b703985f426f1e068b12812f208e5843a98224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://485ad5c9f5a1a0f3219b48e7c2b703985f426f1e068b12812f208e5843a98224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:49Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:49 crc kubenswrapper[4580]: I0112 13:07:49.730921 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8ch98" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f20fb33-a98a-4b04-81b9-5ea16ae9f57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://643e92b1
4688d35a567c7351e9231a8855ec7d9704cc97466c2d901c4525108a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nbmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8ch98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:49Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:49 crc kubenswrapper[4580]: I0112 13:07:49.739416 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jw27h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5066d8fa-2cee-4764-a817-b819d3876638\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbqm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbqm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:07:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jw27h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:49Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:49 crc 
kubenswrapper[4580]: I0112 13:07:49.752695 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd4e0810-eddb-47f5-a7dc-beed7b545112\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fac5585e690495e9f154b99e6a05f94dd617a57d0826867644b56df00697b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57fdd89443f292661ae2a8f73016f4a7f2889c08ffebd55d67ada2590b4344db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc26f2fe9c241fc3ede61426abd140792056fe45e03192531431303ac9669685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://381c313bb77deef21772fc32104aec4c0325e3493c641e2bf615bd897e58c71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ac8df759fbebae467ffd8c178ca19221cefd5f3c1aa999cd23e5d1e53a6187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b37c3b2535deee762ef305825de0a884e9088e57a34910ad2fcdaeb2d49d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f47854f29c7f82bcbae567770052204b7fa2c092168c57ef54e14218812b98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4221a1e3039d381cba4b4412d20dc0127ca6ec3794a5c1b61996a339e880d645\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-12T13:07:21Z\\\",\\\"message\\\":\\\"Ds:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/package-server-manager-metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, 
Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.110\\\\\\\", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0112 13:07:21.988162 6218 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for namespace Informer during admin network policy controller initialization, handler {0x1fcbf20 0x1fcbc00 0x1fcbba0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:21Z 
i\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\"
:\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00ff7f6b5ad3d1798e88f127c9bf71095fcbdfcf8f4338afa385717f1564ebf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hn77p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:49Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:49 crc kubenswrapper[4580]: I0112 13:07:49.760118 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thp2h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0adac83c-1303-404f-85a1-c7b477da2226\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a871f86fe29e275615cf2f7f0130151c5ed56d410a0f18f5267adf08be33f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfhs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:07:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thp2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:49Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:49 crc kubenswrapper[4580]: I0112 13:07:49.771654 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9730289-8e50-4a9a-b474-db6c268d5a30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2262814ad3b77a7aecef6dc39226a540c7d7839576606e11c4765c858e81834\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80ca0769a1431fd4c134322feb11db7e54dd85e8f6b18a0ea43da48fe9b05005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c620e4b41d6183e427d9b95acc0e6e20f24998d210c706d93d0e8b08def41b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05c5ad3ad752dde0d33f89e89540f22790aa2905185c704d407fe605655c8e28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0c7ac25add51f8a9be790b9d47bc39155d83c4da0f3b241897d1395686feb68\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-12T13:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0112 13:06:53.362253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0112 13:06:53.363131 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2861103618/tls.crt::/tmp/serving-cert-2861103618/tls.key\\\\\\\"\\\\nI0112 13:06:58.635258 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0112 13:06:58.636943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0112 13:06:58.636960 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0112 13:06:58.636978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0112 13:06:58.636983 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0112 13:06:58.642885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0112 13:06:58.642904 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0112 13:06:58.642919 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642925 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0112 13:06:58.642931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0112 13:06:58.642934 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0112 13:06:58.642937 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0112 13:06:58.645379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eeac0b697ceba82e51d043f12dcf4c6f0028990416b1ee40c5181232d962192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-
12T13:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:49Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:49 crc kubenswrapper[4580]: I0112 13:07:49.782727 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14cae238-29c1-4657-b3f0-6a834484f48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b1b813e14b2e613be951c247a67eb9b5b29604c639ec2c8a26c652911e0a342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc8b55ba464a72a72e6361e6847c4e8c8b27f317e8eba5d95923fbaf62589880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://259d2e205fd4a46e432a91b0e09646a58b44d6da55b06c6d4ac87010c85babc4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00bb60e0955774504f186a916e89495432d2ea6a6b01cadbbe0cc6871383a030\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:49Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:49 crc kubenswrapper[4580]: I0112 13:07:49.793135 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:49Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:49 crc kubenswrapper[4580]: I0112 13:07:49.796768 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:49 crc kubenswrapper[4580]: I0112 13:07:49.796799 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:49 crc kubenswrapper[4580]: I0112 13:07:49.796808 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:49 crc 
kubenswrapper[4580]: I0112 13:07:49.796824 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:49 crc kubenswrapper[4580]: I0112 13:07:49.796836 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:49Z","lastTransitionTime":"2026-01-12T13:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:49 crc kubenswrapper[4580]: I0112 13:07:49.805204 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nnz5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f39bcc-5a25-4746-988b-2251fd1be8c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd8b2f8f716304f83430fe4b505d29fbb68a1a5387205e72c68b65c260c7fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56aa8b2b49ab1c35203cc85f8e7cd333d538b5739be0e36db8a3fa8263c079ce\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-12T13:07:46Z\\\",\\\"message\\\":\\\"2026-01-12T13:07:00+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6c82134a-0bcc-44ea-baee-ed00a5b086ed\\\\n2026-01-12T13:07:00+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6c82134a-0bcc-44ea-baee-ed00a5b086ed to /host/opt/cni/bin/\\\\n2026-01-12T13:07:01Z [verbose] multus-daemon started\\\\n2026-01-12T13:07:01Z [verbose] Readiness Indicator file check\\\\n2026-01-12T13:07:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m82m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nnz5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:49Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:49 crc kubenswrapper[4580]: I0112 13:07:49.813518 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vmmdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61051313-b754-4528-ade6-ffacbebafb8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a321f1ea1e9a558494aa66641fd251a100e0bdceddf5b2034bfa067c23555138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf
09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsss4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14a151ee487ef6c2e5141ec5a25b8b7e468c224b262fd09538db0e939b8cf95a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsss4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:07:11Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vmmdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:49Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:49 crc kubenswrapper[4580]: I0112 13:07:49.821505 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a82c47afb3ec7afc7fa35ff0e1e85e288f9e1a908459024005a16c0c8f3b0050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\"
:\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:49Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:49 crc kubenswrapper[4580]: I0112 13:07:49.829392 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3accce5d840e81a67e212ff934059ad73525c6ff3c73ed6ab4c6e2289a4d7bf\\\",\\\"image\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60b7e67369583f18d56633483204d326449c0f7456afe4b4fd1e7134eff438cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-hdz6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:49Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:49 crc kubenswrapper[4580]: I0112 13:07:49.839861 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2p6r8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2223aac-784e-4653-8939-fcbd18c70ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81fbec7b59dcc9c80a97b122e2b0e738fbbfb3eafca1bf9989fe743f28573191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-
additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1dc0fffc41810cdb9a5eeb53b19f6a23d70a8133c6e12b19df575f86a55d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1dc0fffc41810cdb9a5eeb53b19f6a23d70a8133c6e12b19df575f86a55d18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab60600011f08831d514dad04b97fb6b587736b18b55b1bff9a33143b9a92997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df
312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab60600011f08831d514dad04b97fb6b587736b18b55b1bff9a33143b9a92997\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2709a93c305db448fb509fbbdf606c297b26f1ae08e6b9b05933c155f59416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff2709a93c305db448fb509fbbdf606c297b26f1ae08e6b9b05933c155f59416\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"reason\\\":\\\"Compl
eted\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88f8708217fbcbf532b977d30ab903955722d04a00ba29ded44ce09610140e27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88f8708217fbcbf532b977d30ab903955722d04a00ba29ded44ce09610140e27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5844c48078cc7d6868f4ff81ac1a2bb878892529b11823ecabd49fad4aed60\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e5844c48078cc7d6868f4ff81ac1a2bb878892529b11823ecabd49fad4aed60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2e02e66890bca8171c7112c74521a43c3458f07890228426f04c2bdfad4599\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d2e02e66890bca8171c7112c74521a43c3458f07890228426f04c2bdfad4599\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\
"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2p6r8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:49Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:49 crc kubenswrapper[4580]: I0112 13:07:49.901142 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:49 crc kubenswrapper[4580]: I0112 13:07:49.901182 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:49 crc kubenswrapper[4580]: I0112 13:07:49.901197 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:49 crc kubenswrapper[4580]: I0112 13:07:49.901216 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:49 crc kubenswrapper[4580]: I0112 13:07:49.901230 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:49Z","lastTransitionTime":"2026-01-12T13:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.003913 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.003959 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.003970 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.003992 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.004026 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:50Z","lastTransitionTime":"2026-01-12T13:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.106362 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.106417 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.106430 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.106449 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.106463 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:50Z","lastTransitionTime":"2026-01-12T13:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.208653 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.208698 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.208709 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.208720 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.208730 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:50Z","lastTransitionTime":"2026-01-12T13:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.281404 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jw27h" Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.281494 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 12 13:07:50 crc kubenswrapper[4580]: E0112 13:07:50.281525 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jw27h" podUID="5066d8fa-2cee-4764-a817-b819d3876638" Jan 12 13:07:50 crc kubenswrapper[4580]: E0112 13:07:50.281673 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.311119 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.311163 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.311176 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.311199 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.311211 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:50Z","lastTransitionTime":"2026-01-12T13:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.414251 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.414284 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.414294 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.414313 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.414325 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:50Z","lastTransitionTime":"2026-01-12T13:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.516421 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.516567 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.516647 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.516728 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.516785 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:50Z","lastTransitionTime":"2026-01-12T13:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.619334 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.619378 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.619389 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.619411 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.619422 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:50Z","lastTransitionTime":"2026-01-12T13:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.659511 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hn77p_fd4e0810-eddb-47f5-a7dc-beed7b545112/ovnkube-controller/3.log" Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.660235 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hn77p_fd4e0810-eddb-47f5-a7dc-beed7b545112/ovnkube-controller/2.log" Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.663536 4580 generic.go:334] "Generic (PLEG): container finished" podID="fd4e0810-eddb-47f5-a7dc-beed7b545112" containerID="20f47854f29c7f82bcbae567770052204b7fa2c092168c57ef54e14218812b98" exitCode=1 Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.663604 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" event={"ID":"fd4e0810-eddb-47f5-a7dc-beed7b545112","Type":"ContainerDied","Data":"20f47854f29c7f82bcbae567770052204b7fa2c092168c57ef54e14218812b98"} Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.663672 4580 scope.go:117] "RemoveContainer" containerID="4221a1e3039d381cba4b4412d20dc0127ca6ec3794a5c1b61996a339e880d645" Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.664276 4580 scope.go:117] "RemoveContainer" containerID="20f47854f29c7f82bcbae567770052204b7fa2c092168c57ef54e14218812b98" Jan 12 13:07:50 crc kubenswrapper[4580]: E0112 13:07:50.664482 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-hn77p_openshift-ovn-kubernetes(fd4e0810-eddb-47f5-a7dc-beed7b545112)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" podUID="fd4e0810-eddb-47f5-a7dc-beed7b545112" Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.677359 4580 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5b34839-7efb-4fe1-ab7f-7d5b1edbf09a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5f5c5f418e2ffb24aff3f3056f26725003da15b14ea3f503039403320803a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afcaaf941d0811f34d5bb6d98ebedbeca17d15c8ce48a5604758570aa393d700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a938
0066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c1f9fb31f42b2e87cf98227241e7c66b834d473dc625999d5cf28df80b5076b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://485ad5c9f5a1a0f3219b48e7c2b703985f426f1e068b12812f208e5843a98224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://485ad5c9f5a1a0f3219b48e7c2b703985f426f1e068b12812f208e5843a98224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:50Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.690047 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8ch98" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f20fb33-a98a-4b04-81b9-5ea16ae9f57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://643e92b14688d35a567c7351e9231a8855ec7d9704cc97466c2d901c4525108a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nbmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8ch98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:50Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.699874 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jw27h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5066d8fa-2cee-4764-a817-b819d3876638\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbqm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbqm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:07:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jw27h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:50Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:50 crc 
kubenswrapper[4580]: I0112 13:07:50.712163 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9730289-8e50-4a9a-b474-db6c268d5a30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2262814ad3b77a7aecef6dc39226a540c7d7839576606e11c4765c858e81834\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80ca0769a1431f
d4c134322feb11db7e54dd85e8f6b18a0ea43da48fe9b05005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c620e4b41d6183e427d9b95acc0e6e20f24998d210c706d93d0e8b08def41b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05c5ad3ad752dde0d33f89e89540f22790aa2905185c704d407fe605655c8e28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://e0c7ac25add51f8a9be790b9d47bc39155d83c4da0f3b241897d1395686feb68\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-12T13:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0112 13:06:53.362253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0112 13:06:53.363131 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2861103618/tls.crt::/tmp/serving-cert-2861103618/tls.key\\\\\\\"\\\\nI0112 13:06:58.635258 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0112 13:06:58.636943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0112 13:06:58.636960 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0112 13:06:58.636978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0112 13:06:58.636983 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0112 13:06:58.642885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0112 13:06:58.642904 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0112 13:06:58.642919 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642925 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0112 13:06:58.642931 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0112 13:06:58.642934 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0112 13:06:58.642937 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0112 13:06:58.645379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eeac0b697ceba82e51d043f12dcf4c6f0028990416b1ee40c5181232d962192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:50Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.722411 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.722444 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.722455 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.722491 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.722503 4580 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:50Z","lastTransitionTime":"2026-01-12T13:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.725166 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14cae238-29c1-4657-b3f0-6a834484f48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b1b813e14b2e613be951c247a67eb9b5b29604c639ec2c8a26c652911e0a342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc8b55ba464a72a72e6361e6847c4e8c8b27f317e8eba5d95923fbaf62589880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://259d2e205fd4a46e432a91b0e09646a58b44d6da55b06c6d4ac87010c85babc4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://00bb60e0955774504f186a916e89495432d2ea6a6b01cadbbe0cc6871383a030\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:50Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.736044 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:50Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.746639 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nnz5s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f39bcc-5a25-4746-988b-2251fd1be8c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd8b2f8f716304f83430fe4b505d29fbb68a1a5387205e72c68b65c260c7fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56aa8b2b49ab1c35203cc85f8e7cd333d538b5739be0e36db8a3fa8263c079ce\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-12T13:07:46Z\\\",\\\"message\\\":\\\"2026-01-12T13:07:00+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6c82134a-0bcc-44ea-baee-ed00a5b086ed\\\\n2026-01-12T13:07:00+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6c82134a-0bcc-44ea-baee-ed00a5b086ed to /host/opt/cni/bin/\\\\n2026-01-12T13:07:01Z [verbose] multus-daemon started\\\\n2026-01-12T13:07:01Z [verbose] 
Readiness Indicator file check\\\\n2026-01-12T13:07:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m82m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nnz5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:50Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.761269 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd4e0810-eddb-47f5-a7dc-beed7b545112\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fac5585e690495e9f154b99e6a05f94dd617a57d0826867644b56df00697b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57fdd89443f292661ae2a8f73016f4a7f2889c08ffebd55d67ada2590b4344db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc26f2fe9c241fc3ede61426abd140792056fe45e03192531431303ac9669685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://381c313bb77deef21772fc32104aec4c0325e3493c641e2bf615bd897e58c71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ac8df759fbebae467ffd8c178ca19221cefd5f3c1aa999cd23e5d1e53a6187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b37c3b2535deee762ef305825de0a884e9088e57a34910ad2fcdaeb2d49d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f47854f29c7f82bcbae567770052204b7fa2c092168c57ef54e14218812b98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4221a1e3039d381cba4b4412d20dc0127ca6ec3794a5c1b61996a339e880d645\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-12T13:07:21Z\\\",\\\"message\\\":\\\"Ds:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/package-server-manager-metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.110\\\\\\\", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0112 13:07:21.988162 6218 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for namespace Informer during admin network policy controller initialization, handler {0x1fcbf20 0x1fcbc00 0x1fcbba0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:21Z i\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20f47854f29c7f82bcbae567770052204b7fa2c092168c57ef54e14218812b98\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-12T13:07:50Z\\\",\\\"message\\\":\\\"or-webhook for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0112 
13:07:50.006614 6607 base_network_controller_pods.go:916] Annotation values: ip=[10.217.0.3/23] ; mac=0a:58:0a:d9:00:03 ; gw=[10.217.0.1]\\\\nI0112 13:07:50.006323 6607 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-hn77p\\\\nI0112 13:07:50.006638 6607 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-hn77p\\\\nI0112 13:07:50.006651 6607 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-node-hn77p in node crc\\\\nI0112 13:07:50.006657 6607 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-hn77p after 0 failed attempt(s)\\\\nI0112 13:07:50.006662 6607 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-hn77p\\\\nI0112 13:07:50.006395 6607 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-thp2h after 0 failed attempt(s)\\\\nI0112 13:07:50.006671 6607 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-thp2h\\\\nI0112 13:07:50.006340 6607 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in 
node\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\
"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00ff7f6b5ad3d1798e88f127c9bf71095fcbdfcf8f4338afa385717f1564ebf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447
b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hn77p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:50Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.770398 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thp2h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0adac83c-1303-404f-85a1-c7b477da2226\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a871f86fe29e275615cf2f7f0130151c5ed56d410a0f18f5267adf08be33f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfhs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:07:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thp2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:50Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.779785 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a82c47afb3ec7afc7fa35ff0e1e85e288f9e1a908459024005a16c0c8f3b0050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T1
3:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:50Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.788897 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3accce5d840e81a67e212ff934059ad73525c6ff3c73ed6ab4c6e2289a4d7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60b7e67369583f18d56633483204d326449c0f74
56afe4b4fd1e7134eff438cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hdz6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:50Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.800811 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2p6r8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2223aac-784e-4653-8939-fcbd18c70ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81fbec7b59dcc9c80a97b122e2b0e738fbbfb3eafca1bf9989fe743f28573191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1dc0fffc41810cdb9a5eeb53b19f6a23d70a8133c6e12b19df575f86a55d18\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1dc0fffc41810cdb9a5eeb53b19f6a23d70a8133c6e12b19df575f86a55d18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab60600011f08831d514dad04b97fb6b587736b18b55b1bff9a33143b9a92997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab60600011f08831d514dad04b97fb6b587736b18b55b1bff9a33143b9a92997\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2709a93c305db448fb509fbbdf606c297b26f1ae08e6b9b05933c155f59416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff2709a93c305db448fb509fbbdf606c297b26f1ae08e6b9b05933c155f59416\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88f87
08217fbcbf532b977d30ab903955722d04a00ba29ded44ce09610140e27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88f8708217fbcbf532b977d30ab903955722d04a00ba29ded44ce09610140e27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5844c48078cc7d6868f4ff81ac1a2bb878892529b11823ecabd49fad4aed60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e5844c48078cc7d6868f4ff81ac1a2bb878892529b11823ecabd49fad4aed60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:04Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2e02e66890bca8171c7112c74521a43c3458f07890228426f04c2bdfad4599\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d2e02e66890bca8171c7112c74521a43c3458f07890228426f04c2bdfad4599\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2p6r8\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:50Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.809212 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vmmdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61051313-b754-4528-ade6-ffacbebafb8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a321f1ea1e9a558494aa66641fd251a100e0bdceddf5b2034bfa067c23555138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsss4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14a151ee487ef6c2e5141ec5a25b8b7e468c224b262fd09538db0e939b8cf95a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsss4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vmmdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-12T13:07:50Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.824143 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35b1ac8c-9d11-4c54-98ab-fa848030f05e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1088ffa1a5bf02ca8606518a6f8c9cbeba544651dfafbb34e8860c2a12ffc1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c98177e2b081aadb6fd03620e308bb5d9ff403f1498eb875f7cf6d836dd23aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cea93cd026e7a60c22105833d2c3ada192fc16d45f46e5c9ce2652e94f92fab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c811167080fb15b5c19b8b57f76f4b8c5b2ed87d43d1b320ad024683ab58b65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14411e27d1e7de0627ca0d6f0ecbca70787ef8e9311ff3ffbb923da942e47955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://200ede5d7f69bb74d8e7d1b5081850d73057f7aef07049cab7a4dd1382de0cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setu
p\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://200ede5d7f69bb74d8e7d1b5081850d73057f7aef07049cab7a4dd1382de0cfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04470dc724661e24dc43e182f9c5dc106623e8dfb269280e6dc0fc0710f6a4a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04470dc724661e24dc43e182f9c5dc106623e8dfb269280e6dc0fc0710f6a4a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da31efcbced890b1046b1f058c1c00e4d2788162749c1da32d87c8b59360aa58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da31efcbced890b1046b1f058c1c00e4d2788162749c1da32d87c8b59360aa58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\
"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:50Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.824796 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.824826 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.824837 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.824856 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.824866 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:50Z","lastTransitionTime":"2026-01-12T13:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.838157 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fb543f1489aa79642944188788308013ed9b6bacb720a3ee689b376cbc6a33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\
" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:50Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.848721 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e120eaa6bd8e36a0bc509f7877252fbf4b0cebb89222dd193f75502e472fa7af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/
ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f05ca3c8a1887284f1162c44d1b917ad955eb8d77b816e830caddffdf0430383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:50Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.858955 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:50Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.868200 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:50Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.927371 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.927399 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.927410 4580 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.927425 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:50 crc kubenswrapper[4580]: I0112 13:07:50.927437 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:50Z","lastTransitionTime":"2026-01-12T13:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.029454 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.029509 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.029519 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.029540 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.029553 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:51Z","lastTransitionTime":"2026-01-12T13:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.132128 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.132177 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.132190 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.132207 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.132219 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:51Z","lastTransitionTime":"2026-01-12T13:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.234802 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.235551 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.235655 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.235740 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.235800 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:51Z","lastTransitionTime":"2026-01-12T13:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.281236 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.281250 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 12 13:07:51 crc kubenswrapper[4580]: E0112 13:07:51.281369 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 12 13:07:51 crc kubenswrapper[4580]: E0112 13:07:51.281657 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.292836 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e120eaa6bd8e36a0bc509f7877252fbf4b0cebb89222dd193f75502e472fa7af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"sta
te\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f05ca3c8a1887284f1162c44d1b917ad955eb8d77b816e830caddffdf0430383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:51Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.304301 4580 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:51Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.315252 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:51Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.330564 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35b1ac8c-9d11-4c54-98ab-fa848030f05e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1088ffa1a5bf02ca8606518a6f8c9cbeba544651dfafbb34e8860c2a12ffc1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c98177e2b081aadb6fd03620e308bb5d9ff403f1498eb875f7cf6d836dd23aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cea93cd026e7a60c22105833d2c3ada192fc16d45f46e5c9ce2652e94f92fab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c811167080fb15b5c19b8b57f76f4b8c5b2ed87d43d1b320ad024683ab58b65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14411e27d1e7de0627ca0d6f0ecbca70787ef8e9311ff3ffbb923da942e47955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://200ede5d7f69bb74d8e7d1b5081850d73057f7aef07049cab7a4dd1382de0cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://200ede5d7f69bb74d8e7d1b5081850d73057f7aef07049cab7a4dd1382de0cfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04470dc724661e24dc43e182f9c5dc106623e8dfb269280e6dc0fc0710f6a4a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04470dc724661e24dc43e182f9c5dc106623e8dfb269280e6dc0fc0710f6a4a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da31efcbced890b1046b1f058c1c00e4d2788162749c1da32d87c8b59360aa58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da31efcbced890b1046b1f058c1c00e4d2788162749c1da32d87c8b59360aa58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:51Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.338184 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.338244 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.338256 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.338276 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.338292 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:51Z","lastTransitionTime":"2026-01-12T13:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.342813 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fb543f1489aa79642944188788308013ed9b6bacb720a3ee689b376cbc6a33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:51Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.350623 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jw27h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5066d8fa-2cee-4764-a817-b819d3876638\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbqm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbqm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:07:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jw27h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:51Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:51 crc 
kubenswrapper[4580]: I0112 13:07:51.364364 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5b34839-7efb-4fe1-ab7f-7d5b1edbf09a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5f5c5f418e2ffb24aff3f3056f26725003da15b14ea3f503039403320803a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afcaaf941d0811f34d5bb6d98ebedbeca17d15c8ce48a5604758570aa393d700\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c1f9fb31f42b2e87cf98227241e7c66b834d473dc625999d5cf28df80b5076b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://485ad5c9f5a1a0f3219b48e7c2b703985f426f1e068b12812f208e5843a98224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://485ad5c9f5a1a0f3219b48e7c2b703985f426f1e068b12812f208e5843a98224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:51Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.372730 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8ch98" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f20fb33-a98a-4b04-81b9-5ea16ae9f57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://643e92b14688d35a567c7351e9231a8855ec7d9704cc97466c2d901c4525108a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nbmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8ch98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:51Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.381738 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:51Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.391018 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nnz5s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f39bcc-5a25-4746-988b-2251fd1be8c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd8b2f8f716304f83430fe4b505d29fbb68a1a5387205e72c68b65c260c7fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56aa8b2b49ab1c35203cc85f8e7cd333d538b5739be0e36db8a3fa8263c079ce\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-12T13:07:46Z\\\",\\\"message\\\":\\\"2026-01-12T13:07:00+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6c82134a-0bcc-44ea-baee-ed00a5b086ed\\\\n2026-01-12T13:07:00+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6c82134a-0bcc-44ea-baee-ed00a5b086ed to /host/opt/cni/bin/\\\\n2026-01-12T13:07:01Z [verbose] multus-daemon started\\\\n2026-01-12T13:07:01Z [verbose] 
Readiness Indicator file check\\\\n2026-01-12T13:07:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m82m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nnz5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:51Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.405030 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd4e0810-eddb-47f5-a7dc-beed7b545112\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fac5585e690495e9f154b99e6a05f94dd617a57d0826867644b56df00697b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57fdd89443f292661ae2a8f73016f4a7f2889c08ffebd55d67ada2590b4344db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc26f2fe9c241fc3ede61426abd140792056fe45e03192531431303ac9669685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://381c313bb77deef21772fc32104aec4c0325e3493c641e2bf615bd897e58c71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ac8df759fbebae467ffd8c178ca19221cefd5f3c1aa999cd23e5d1e53a6187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b37c3b2535deee762ef305825de0a884e9088e57a34910ad2fcdaeb2d49d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f47854f29c7f82bcbae567770052204b7fa2c092168c57ef54e14218812b98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4221a1e3039d381cba4b4412d20dc0127ca6ec3794a5c1b61996a339e880d645\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-12T13:07:21Z\\\",\\\"message\\\":\\\"Ds:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/package-server-manager-metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.110\\\\\\\", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0112 13:07:21.988162 6218 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for namespace Informer during admin network policy controller initialization, handler {0x1fcbf20 0x1fcbc00 0x1fcbba0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:21Z i\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20f47854f29c7f82bcbae567770052204b7fa2c092168c57ef54e14218812b98\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-12T13:07:50Z\\\",\\\"message\\\":\\\"or-webhook for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0112 
13:07:50.006614 6607 base_network_controller_pods.go:916] Annotation values: ip=[10.217.0.3/23] ; mac=0a:58:0a:d9:00:03 ; gw=[10.217.0.1]\\\\nI0112 13:07:50.006323 6607 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-hn77p\\\\nI0112 13:07:50.006638 6607 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-hn77p\\\\nI0112 13:07:50.006651 6607 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-node-hn77p in node crc\\\\nI0112 13:07:50.006657 6607 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-hn77p after 0 failed attempt(s)\\\\nI0112 13:07:50.006662 6607 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-hn77p\\\\nI0112 13:07:50.006395 6607 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-thp2h after 0 failed attempt(s)\\\\nI0112 13:07:50.006671 6607 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-thp2h\\\\nI0112 13:07:50.006340 6607 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in 
node\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\
"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00ff7f6b5ad3d1798e88f127c9bf71095fcbdfcf8f4338afa385717f1564ebf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447
b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hn77p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:51Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.413025 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thp2h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0adac83c-1303-404f-85a1-c7b477da2226\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a871f86fe29e275615cf2f7f0130151c5ed56d410a0f18f5267adf08be33f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfhs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:07:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thp2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:51Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.422757 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9730289-8e50-4a9a-b474-db6c268d5a30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2262814ad3b77a7aecef6dc39226a540c7d7839576606e11c4765c858e81834\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80ca0769a1431fd4c134322feb11db7e54dd85e8f6b18a0ea43da48fe9b05005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c620e4b41d6183e427d9b95acc0e6e20f24998d210c706d93d0e8b08def41b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05c5ad3ad752dde0d33f89e89540f22790aa2905185c704d407fe605655c8e28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0c7ac25add51f8a9be790b9d47bc39155d83c4da0f3b241897d1395686feb68\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-12T13:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0112 13:06:53.362253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0112 13:06:53.363131 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2861103618/tls.crt::/tmp/serving-cert-2861103618/tls.key\\\\\\\"\\\\nI0112 13:06:58.635258 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0112 13:06:58.636943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0112 13:06:58.636960 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0112 13:06:58.636978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0112 13:06:58.636983 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0112 13:06:58.642885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0112 13:06:58.642904 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0112 13:06:58.642919 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642925 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0112 13:06:58.642931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0112 13:06:58.642934 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0112 13:06:58.642937 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0112 13:06:58.645379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eeac0b697ceba82e51d043f12dcf4c6f0028990416b1ee40c5181232d962192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-
12T13:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:51Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.432127 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14cae238-29c1-4657-b3f0-6a834484f48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b1b813e14b2e613be951c247a67eb9b5b29604c639ec2c8a26c652911e0a342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc8b55ba464a72a72e6361e6847c4e8c8b27f317e8eba5d95923fbaf62589880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://259d2e205fd4a46e432a91b0e09646a58b44d6da55b06c6d4ac87010c85babc4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00bb60e0955774504f186a916e89495432d2ea6a6b01cadbbe0cc6871383a030\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:51Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.440264 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.440356 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.440442 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.440523 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.440596 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:51Z","lastTransitionTime":"2026-01-12T13:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.441940 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3accce5d840e81a67e212ff934059ad73525c6ff3c73ed6ab4c6e2289a4d7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc
/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60b7e67369583f18d56633483204d326449c0f7456afe4b4fd1e7134eff438cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hdz6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:51Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.452728 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2p6r8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2223aac-784e-4653-8939-fcbd18c70ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81fbec7b59dcc9c80a97b122e2b0e738fbbfb3eafca1bf9989fe743f28573191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1dc0fffc41810cdb9a5eeb53b19f6a23d70a8133c6e12b19df575f86a55d18\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1dc0fffc41810cdb9a5eeb53b19f6a23d70a8133c6e12b19df575f86a55d18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab60600011f08831d514dad04b97fb6b587736b18b55b1bff9a33143b9a92997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab60600011f08831d514dad04b97fb6b587736b18b55b1bff9a33143b9a92997\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2709a93c305db448fb509fbbdf606c297b26f1ae08e6b9b05933c155f59416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff2709a93c305db448fb509fbbdf606c297b26f1ae08e6b9b05933c155f59416\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88f87
08217fbcbf532b977d30ab903955722d04a00ba29ded44ce09610140e27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88f8708217fbcbf532b977d30ab903955722d04a00ba29ded44ce09610140e27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5844c48078cc7d6868f4ff81ac1a2bb878892529b11823ecabd49fad4aed60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e5844c48078cc7d6868f4ff81ac1a2bb878892529b11823ecabd49fad4aed60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:04Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2e02e66890bca8171c7112c74521a43c3458f07890228426f04c2bdfad4599\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d2e02e66890bca8171c7112c74521a43c3458f07890228426f04c2bdfad4599\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2p6r8\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:51Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.461951 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vmmdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61051313-b754-4528-ade6-ffacbebafb8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a321f1ea1e9a558494aa66641fd251a100e0bdceddf5b2034bfa067c23555138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsss4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14a151ee487ef6c2e5141ec5a25b8b7e468c224b262fd09538db0e939b8cf95a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsss4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vmmdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-12T13:07:51Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.471064 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a82c47afb3ec7afc7fa35ff0e1e85e288f9e1a908459024005a16c0c8f3b0050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:51Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.543123 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.543166 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.543176 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.543192 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.543203 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:51Z","lastTransitionTime":"2026-01-12T13:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.644867 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.644906 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.644920 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.644936 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.644947 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:51Z","lastTransitionTime":"2026-01-12T13:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.669363 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hn77p_fd4e0810-eddb-47f5-a7dc-beed7b545112/ovnkube-controller/3.log" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.672654 4580 scope.go:117] "RemoveContainer" containerID="20f47854f29c7f82bcbae567770052204b7fa2c092168c57ef54e14218812b98" Jan 12 13:07:51 crc kubenswrapper[4580]: E0112 13:07:51.672820 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-hn77p_openshift-ovn-kubernetes(fd4e0810-eddb-47f5-a7dc-beed7b545112)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" podUID="fd4e0810-eddb-47f5-a7dc-beed7b545112" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.682341 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a82c47afb3ec7afc7fa35ff0e1e85e288f9e1a908459024005a16c0c8f3b0050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-12T13:07:51Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.691431 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3accce5d840e81a67e212ff934059ad73525c6ff3c73ed6ab4c6e2289a4d7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60b7e67369583f18d56633483204d326449c0f7456afe4b4fd1e7134eff438cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hdz6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:51Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.702391 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2p6r8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2223aac-784e-4653-8939-fcbd18c70ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81fbec7b59dcc9c80a97b122e2b0e738fbbfb3eafca1bf9989fe743f28573191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1dc0fffc41810cdb9a5eeb53b19f6a23d70a8133c6e12b19df575f86a55d18\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1dc0fffc41810cdb9a5eeb53b19f6a23d70a8133c6e12b19df575f86a55d18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab60600011f08831d514dad04b97fb6b587736b18b55b1bff9a33143b9a92997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab60600011f08831d514dad04b97fb6b587736b18b55b1bff9a33143b9a92997\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2709a93c305db448fb509fbbdf606c297b26f1ae08e6b9b05933c155f59416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff2709a93c305db448fb509fbbdf606c297b26f1ae08e6b9b05933c155f59416\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88f87
08217fbcbf532b977d30ab903955722d04a00ba29ded44ce09610140e27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88f8708217fbcbf532b977d30ab903955722d04a00ba29ded44ce09610140e27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5844c48078cc7d6868f4ff81ac1a2bb878892529b11823ecabd49fad4aed60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e5844c48078cc7d6868f4ff81ac1a2bb878892529b11823ecabd49fad4aed60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:04Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2e02e66890bca8171c7112c74521a43c3458f07890228426f04c2bdfad4599\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d2e02e66890bca8171c7112c74521a43c3458f07890228426f04c2bdfad4599\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2p6r8\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:51Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.710787 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vmmdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61051313-b754-4528-ade6-ffacbebafb8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a321f1ea1e9a558494aa66641fd251a100e0bdceddf5b2034bfa067c23555138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsss4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14a151ee487ef6c2e5141ec5a25b8b7e468c224b262fd09538db0e939b8cf95a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsss4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vmmdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-12T13:07:51Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.725518 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35b1ac8c-9d11-4c54-98ab-fa848030f05e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1088ffa1a5bf02ca8606518a6f8c9cbeba544651dfafbb34e8860c2a12ffc1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c98177e2b081aadb6fd03620e308bb5d9ff403f1498eb875f7cf6d836dd23aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cea93cd026e7a60c22105833d2c3ada192fc16d45f46e5c9ce2652e94f92fab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c811167080fb15b5c19b8b57f76f4b8c5b2ed87d43d1b320ad024683ab58b65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14411e27d1e7de0627ca0d6f0ecbca70787ef8e9311ff3ffbb923da942e47955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://200ede5d7f69bb74d8e7d1b5081850d73057f7aef07049cab7a4dd1382de0cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setu
p\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://200ede5d7f69bb74d8e7d1b5081850d73057f7aef07049cab7a4dd1382de0cfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04470dc724661e24dc43e182f9c5dc106623e8dfb269280e6dc0fc0710f6a4a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04470dc724661e24dc43e182f9c5dc106623e8dfb269280e6dc0fc0710f6a4a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da31efcbced890b1046b1f058c1c00e4d2788162749c1da32d87c8b59360aa58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da31efcbced890b1046b1f058c1c00e4d2788162749c1da32d87c8b59360aa58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\
"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:51Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.735178 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fb543f1489aa79642944188788308013ed9b6bacb720a3ee689b376cbc6a33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:51Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.744533 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e120eaa6bd8e36a0bc509f7877252fbf4b0cebb89222dd193f75502e472fa7af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://f05ca3c8a1887284f1162c44d1b917ad955eb8d77b816e830caddffdf0430383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:51Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.746852 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.746887 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.746897 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.746918 4580 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.746930 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:51Z","lastTransitionTime":"2026-01-12T13:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.754443 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:51Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.764216 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:51Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.774072 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5b34839-7efb-4fe1-ab7f-7d5b1edbf09a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5f5c5f418e2ffb24aff3f3056f26725003da15b14ea3f503039403320803a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afcaaf941d0811f34d5bb6d98ebedbeca17d15c8ce48a5604758570aa393d700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c1f9fb31f42b2e87cf98227241e7c66b834d473dc625999d5cf28df80b5076b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://485ad5c9f5a1a0f3219b48e7c2b703985f426f1e068b12812f208e5843a98224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://485ad5c9f5a1a0f3219b48e7c2b703985f426f1e068b12812f208e5843a98224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:51Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.782677 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8ch98" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f20fb33-a98a-4b04-81b9-5ea16ae9f57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://643e92b1
4688d35a567c7351e9231a8855ec7d9704cc97466c2d901c4525108a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nbmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8ch98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:51Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.790743 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jw27h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5066d8fa-2cee-4764-a817-b819d3876638\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbqm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbqm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:07:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jw27h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:51Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:51 crc 
kubenswrapper[4580]: I0112 13:07:51.801505 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9730289-8e50-4a9a-b474-db6c268d5a30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2262814ad3b77a7aecef6dc39226a540c7d7839576606e11c4765c858e81834\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80ca0769a1431f
d4c134322feb11db7e54dd85e8f6b18a0ea43da48fe9b05005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c620e4b41d6183e427d9b95acc0e6e20f24998d210c706d93d0e8b08def41b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05c5ad3ad752dde0d33f89e89540f22790aa2905185c704d407fe605655c8e28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://e0c7ac25add51f8a9be790b9d47bc39155d83c4da0f3b241897d1395686feb68\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-12T13:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0112 13:06:53.362253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0112 13:06:53.363131 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2861103618/tls.crt::/tmp/serving-cert-2861103618/tls.key\\\\\\\"\\\\nI0112 13:06:58.635258 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0112 13:06:58.636943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0112 13:06:58.636960 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0112 13:06:58.636978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0112 13:06:58.636983 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0112 13:06:58.642885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0112 13:06:58.642904 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0112 13:06:58.642919 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642925 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0112 13:06:58.642931 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0112 13:06:58.642934 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0112 13:06:58.642937 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0112 13:06:58.645379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eeac0b697ceba82e51d043f12dcf4c6f0028990416b1ee40c5181232d962192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:51Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.811818 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14cae238-29c1-4657-b3f0-6a834484f48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b1b813e14b2e613be951c247a67eb9b5b29604c639ec2c8a26c652911e0a342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc8b55ba464a72a72e6361e6847c4e8c8b27f317e8eba5d95923fbaf62589880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://259d2e205fd4a46e432a91b0e09646a58b44d6da55b06c6d4ac87010c85babc4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00bb60e0955774504f186a916e89495432d2ea6a6b01cadbbe0cc6871383a030\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:51Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.821477 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:51Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.830613 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nnz5s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f39bcc-5a25-4746-988b-2251fd1be8c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd8b2f8f716304f83430fe4b505d29fbb68a1a5387205e72c68b65c260c7fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56aa8b2b49ab1c35203cc85f8e7cd333d538b5739be0e36db8a3fa8263c079ce\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-12T13:07:46Z\\\",\\\"message\\\":\\\"2026-01-12T13:07:00+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6c82134a-0bcc-44ea-baee-ed00a5b086ed\\\\n2026-01-12T13:07:00+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6c82134a-0bcc-44ea-baee-ed00a5b086ed to /host/opt/cni/bin/\\\\n2026-01-12T13:07:01Z [verbose] multus-daemon started\\\\n2026-01-12T13:07:01Z [verbose] 
Readiness Indicator file check\\\\n2026-01-12T13:07:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m82m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nnz5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:51Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.849542 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd4e0810-eddb-47f5-a7dc-beed7b545112\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fac5585e690495e9f154b99e6a05f94dd617a57d0826867644b56df00697b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57fdd89443f292661ae2a8f73016f4a7f2889c08ffebd55d67ada2590b4344db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc26f2fe9c241fc3ede61426abd140792056fe45e03192531431303ac9669685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://381c313bb77deef21772fc32104aec4c0325e3493c641e2bf615bd897e58c71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ac8df759fbebae467ffd8c178ca19221cefd5f3c1aa999cd23e5d1e53a6187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b37c3b2535deee762ef305825de0a884e9088e57a34910ad2fcdaeb2d49d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f47854f29c7f82bcbae567770052204b7fa2c092168c57ef54e14218812b98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20f47854f29c7f82bcbae567770052204b7fa2c092168c57ef54e14218812b98\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-12T13:07:50Z\\\",\\\"message\\\":\\\"or-webhook for network=default has 1 cluster-wide, 0 per-node configs, 0 template 
configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0112 13:07:50.006614 6607 base_network_controller_pods.go:916] Annotation values: ip=[10.217.0.3/23] ; mac=0a:58:0a:d9:00:03 ; gw=[10.217.0.1]\\\\nI0112 13:07:50.006323 6607 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-hn77p\\\\nI0112 13:07:50.006638 6607 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-hn77p\\\\nI0112 13:07:50.006651 6607 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-node-hn77p in node crc\\\\nI0112 13:07:50.006657 6607 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-hn77p after 0 failed attempt(s)\\\\nI0112 13:07:50.006662 6607 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-hn77p\\\\nI0112 13:07:50.006395 6607 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-thp2h after 0 failed attempt(s)\\\\nI0112 13:07:50.006671 6607 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-thp2h\\\\nI0112 13:07:50.006340 6607 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hn77p_openshift-ovn-kubernetes(fd4e0810-eddb-47f5-a7dc-beed7b545112)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00ff7f6b5ad3d1798e88f127c9bf71095fcbdfcf8f4338afa385717f1564ebf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ea8f8c492e0c30d17
1b9b05aa00966402c80f973de31557a1e13e16eb0c447b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hn77p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:51Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.849910 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.849943 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.849953 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.849969 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.849980 4580 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:51Z","lastTransitionTime":"2026-01-12T13:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.857399 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thp2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0adac83c-1303-404f-85a1-c7b477da2226\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a871f86fe29e275615cf2f7f0130151c5ed56d410a0f18f5267adf08be33f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfhs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:07:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thp2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:51Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.952803 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.952850 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.952864 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.952888 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:51 crc kubenswrapper[4580]: I0112 13:07:51.952900 4580 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:51Z","lastTransitionTime":"2026-01-12T13:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.055851 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.055900 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.055912 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.055931 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.055943 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:52Z","lastTransitionTime":"2026-01-12T13:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.158552 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.158590 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.158604 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.158625 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.158646 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:52Z","lastTransitionTime":"2026-01-12T13:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.261156 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.261197 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.261208 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.261223 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.261234 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:52Z","lastTransitionTime":"2026-01-12T13:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.281440 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jw27h" Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.281528 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 12 13:07:52 crc kubenswrapper[4580]: E0112 13:07:52.281721 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jw27h" podUID="5066d8fa-2cee-4764-a817-b819d3876638" Jan 12 13:07:52 crc kubenswrapper[4580]: E0112 13:07:52.281760 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.363990 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.364045 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.364058 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.364076 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.364089 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:52Z","lastTransitionTime":"2026-01-12T13:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.466096 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.466154 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.466163 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.466180 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.466194 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:52Z","lastTransitionTime":"2026-01-12T13:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.568134 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.568168 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.568177 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.568191 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.568200 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:52Z","lastTransitionTime":"2026-01-12T13:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.670204 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.670232 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.670241 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.670252 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.670260 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:52Z","lastTransitionTime":"2026-01-12T13:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.772445 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.772493 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.772507 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.772524 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.772538 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:52Z","lastTransitionTime":"2026-01-12T13:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.874263 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.874302 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.874312 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.874326 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.874340 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:52Z","lastTransitionTime":"2026-01-12T13:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.904241 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.904273 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.904284 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.904299 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.904309 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:52Z","lastTransitionTime":"2026-01-12T13:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:52 crc kubenswrapper[4580]: E0112 13:07:52.916263 4580 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b4cb507-f154-474c-bea1-057456e7be91\\\",\\\"systemUUID\\\":\\\"f50d9485-f990-498d-a5ee-4bb4dd1663df\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:52Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.919306 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.919335 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.919346 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.919357 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.919365 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:52Z","lastTransitionTime":"2026-01-12T13:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:52 crc kubenswrapper[4580]: E0112 13:07:52.930174 4580 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b4cb507-f154-474c-bea1-057456e7be91\\\",\\\"systemUUID\\\":\\\"f50d9485-f990-498d-a5ee-4bb4dd1663df\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:52Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.933489 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.933522 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.933536 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.933549 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.933557 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:52Z","lastTransitionTime":"2026-01-12T13:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:52 crc kubenswrapper[4580]: E0112 13:07:52.944071 4580 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b4cb507-f154-474c-bea1-057456e7be91\\\",\\\"systemUUID\\\":\\\"f50d9485-f990-498d-a5ee-4bb4dd1663df\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:52Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.947202 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.947234 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.947243 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.947255 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.947262 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:52Z","lastTransitionTime":"2026-01-12T13:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:52 crc kubenswrapper[4580]: E0112 13:07:52.956439 4580 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b4cb507-f154-474c-bea1-057456e7be91\\\",\\\"systemUUID\\\":\\\"f50d9485-f990-498d-a5ee-4bb4dd1663df\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:52Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.959194 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.959226 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.959235 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.959248 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.959256 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:52Z","lastTransitionTime":"2026-01-12T13:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:52 crc kubenswrapper[4580]: E0112 13:07:52.969156 4580 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b4cb507-f154-474c-bea1-057456e7be91\\\",\\\"systemUUID\\\":\\\"f50d9485-f990-498d-a5ee-4bb4dd1663df\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:07:52Z is after 2025-08-24T17:21:41Z" Jan 12 13:07:52 crc kubenswrapper[4580]: E0112 13:07:52.969258 4580 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.976455 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.976551 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.976604 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.976670 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:52 crc kubenswrapper[4580]: I0112 13:07:52.976737 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:52Z","lastTransitionTime":"2026-01-12T13:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:53 crc kubenswrapper[4580]: I0112 13:07:53.079089 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:53 crc kubenswrapper[4580]: I0112 13:07:53.079132 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:53 crc kubenswrapper[4580]: I0112 13:07:53.079144 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:53 crc kubenswrapper[4580]: I0112 13:07:53.079155 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:53 crc kubenswrapper[4580]: I0112 13:07:53.079164 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:53Z","lastTransitionTime":"2026-01-12T13:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:53 crc kubenswrapper[4580]: I0112 13:07:53.181205 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:53 crc kubenswrapper[4580]: I0112 13:07:53.181247 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:53 crc kubenswrapper[4580]: I0112 13:07:53.181258 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:53 crc kubenswrapper[4580]: I0112 13:07:53.181270 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:53 crc kubenswrapper[4580]: I0112 13:07:53.181279 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:53Z","lastTransitionTime":"2026-01-12T13:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:53 crc kubenswrapper[4580]: I0112 13:07:53.281325 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 12 13:07:53 crc kubenswrapper[4580]: I0112 13:07:53.281348 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:07:53 crc kubenswrapper[4580]: E0112 13:07:53.282338 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 12 13:07:53 crc kubenswrapper[4580]: E0112 13:07:53.282750 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 12 13:07:53 crc kubenswrapper[4580]: I0112 13:07:53.286130 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:53 crc kubenswrapper[4580]: I0112 13:07:53.286179 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:53 crc kubenswrapper[4580]: I0112 13:07:53.286196 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:53 crc kubenswrapper[4580]: I0112 13:07:53.286223 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:53 crc kubenswrapper[4580]: I0112 13:07:53.286239 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:53Z","lastTransitionTime":"2026-01-12T13:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:53 crc kubenswrapper[4580]: I0112 13:07:53.388632 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:53 crc kubenswrapper[4580]: I0112 13:07:53.388677 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:53 crc kubenswrapper[4580]: I0112 13:07:53.388688 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:53 crc kubenswrapper[4580]: I0112 13:07:53.388703 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:53 crc kubenswrapper[4580]: I0112 13:07:53.388716 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:53Z","lastTransitionTime":"2026-01-12T13:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:53 crc kubenswrapper[4580]: I0112 13:07:53.490564 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:53 crc kubenswrapper[4580]: I0112 13:07:53.490607 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:53 crc kubenswrapper[4580]: I0112 13:07:53.490617 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:53 crc kubenswrapper[4580]: I0112 13:07:53.490635 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:53 crc kubenswrapper[4580]: I0112 13:07:53.490648 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:53Z","lastTransitionTime":"2026-01-12T13:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:53 crc kubenswrapper[4580]: I0112 13:07:53.592794 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:53 crc kubenswrapper[4580]: I0112 13:07:53.592829 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:53 crc kubenswrapper[4580]: I0112 13:07:53.592838 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:53 crc kubenswrapper[4580]: I0112 13:07:53.592849 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:53 crc kubenswrapper[4580]: I0112 13:07:53.592861 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:53Z","lastTransitionTime":"2026-01-12T13:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:53 crc kubenswrapper[4580]: I0112 13:07:53.695032 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:53 crc kubenswrapper[4580]: I0112 13:07:53.695068 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:53 crc kubenswrapper[4580]: I0112 13:07:53.695079 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:53 crc kubenswrapper[4580]: I0112 13:07:53.695093 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:53 crc kubenswrapper[4580]: I0112 13:07:53.695116 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:53Z","lastTransitionTime":"2026-01-12T13:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:53 crc kubenswrapper[4580]: I0112 13:07:53.796933 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:53 crc kubenswrapper[4580]: I0112 13:07:53.796967 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:53 crc kubenswrapper[4580]: I0112 13:07:53.796978 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:53 crc kubenswrapper[4580]: I0112 13:07:53.796991 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:53 crc kubenswrapper[4580]: I0112 13:07:53.797001 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:53Z","lastTransitionTime":"2026-01-12T13:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:53 crc kubenswrapper[4580]: I0112 13:07:53.899542 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:53 crc kubenswrapper[4580]: I0112 13:07:53.899590 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:53 crc kubenswrapper[4580]: I0112 13:07:53.899601 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:53 crc kubenswrapper[4580]: I0112 13:07:53.899616 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:53 crc kubenswrapper[4580]: I0112 13:07:53.899626 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:53Z","lastTransitionTime":"2026-01-12T13:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:54 crc kubenswrapper[4580]: I0112 13:07:54.001421 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:54 crc kubenswrapper[4580]: I0112 13:07:54.001487 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:54 crc kubenswrapper[4580]: I0112 13:07:54.001500 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:54 crc kubenswrapper[4580]: I0112 13:07:54.001527 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:54 crc kubenswrapper[4580]: I0112 13:07:54.001540 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:54Z","lastTransitionTime":"2026-01-12T13:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:54 crc kubenswrapper[4580]: I0112 13:07:54.103531 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:54 crc kubenswrapper[4580]: I0112 13:07:54.103676 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:54 crc kubenswrapper[4580]: I0112 13:07:54.103741 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:54 crc kubenswrapper[4580]: I0112 13:07:54.103796 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:54 crc kubenswrapper[4580]: I0112 13:07:54.103853 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:54Z","lastTransitionTime":"2026-01-12T13:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:54 crc kubenswrapper[4580]: I0112 13:07:54.205953 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:54 crc kubenswrapper[4580]: I0112 13:07:54.206055 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:54 crc kubenswrapper[4580]: I0112 13:07:54.206142 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:54 crc kubenswrapper[4580]: I0112 13:07:54.206222 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:54 crc kubenswrapper[4580]: I0112 13:07:54.206282 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:54Z","lastTransitionTime":"2026-01-12T13:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:54 crc kubenswrapper[4580]: I0112 13:07:54.280718 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jw27h" Jan 12 13:07:54 crc kubenswrapper[4580]: I0112 13:07:54.280724 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 12 13:07:54 crc kubenswrapper[4580]: E0112 13:07:54.280880 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jw27h" podUID="5066d8fa-2cee-4764-a817-b819d3876638" Jan 12 13:07:54 crc kubenswrapper[4580]: E0112 13:07:54.281002 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 12 13:07:54 crc kubenswrapper[4580]: I0112 13:07:54.308608 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:54 crc kubenswrapper[4580]: I0112 13:07:54.308661 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:54 crc kubenswrapper[4580]: I0112 13:07:54.308674 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:54 crc kubenswrapper[4580]: I0112 13:07:54.308695 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:54 crc kubenswrapper[4580]: I0112 13:07:54.308707 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:54Z","lastTransitionTime":"2026-01-12T13:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:54 crc kubenswrapper[4580]: I0112 13:07:54.411039 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:54 crc kubenswrapper[4580]: I0112 13:07:54.411080 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:54 crc kubenswrapper[4580]: I0112 13:07:54.411095 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:54 crc kubenswrapper[4580]: I0112 13:07:54.411130 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:54 crc kubenswrapper[4580]: I0112 13:07:54.411143 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:54Z","lastTransitionTime":"2026-01-12T13:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:54 crc kubenswrapper[4580]: I0112 13:07:54.513349 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:54 crc kubenswrapper[4580]: I0112 13:07:54.513379 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:54 crc kubenswrapper[4580]: I0112 13:07:54.513388 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:54 crc kubenswrapper[4580]: I0112 13:07:54.513401 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:54 crc kubenswrapper[4580]: I0112 13:07:54.513411 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:54Z","lastTransitionTime":"2026-01-12T13:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:54 crc kubenswrapper[4580]: I0112 13:07:54.616009 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:54 crc kubenswrapper[4580]: I0112 13:07:54.616039 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:54 crc kubenswrapper[4580]: I0112 13:07:54.616050 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:54 crc kubenswrapper[4580]: I0112 13:07:54.616060 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:54 crc kubenswrapper[4580]: I0112 13:07:54.616069 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:54Z","lastTransitionTime":"2026-01-12T13:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:54 crc kubenswrapper[4580]: I0112 13:07:54.718568 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:54 crc kubenswrapper[4580]: I0112 13:07:54.718603 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:54 crc kubenswrapper[4580]: I0112 13:07:54.718612 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:54 crc kubenswrapper[4580]: I0112 13:07:54.718625 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:54 crc kubenswrapper[4580]: I0112 13:07:54.718635 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:54Z","lastTransitionTime":"2026-01-12T13:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:54 crc kubenswrapper[4580]: I0112 13:07:54.820896 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:54 crc kubenswrapper[4580]: I0112 13:07:54.820921 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:54 crc kubenswrapper[4580]: I0112 13:07:54.820929 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:54 crc kubenswrapper[4580]: I0112 13:07:54.820939 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:54 crc kubenswrapper[4580]: I0112 13:07:54.820949 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:54Z","lastTransitionTime":"2026-01-12T13:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:54 crc kubenswrapper[4580]: I0112 13:07:54.922660 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:54 crc kubenswrapper[4580]: I0112 13:07:54.922686 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:54 crc kubenswrapper[4580]: I0112 13:07:54.922695 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:54 crc kubenswrapper[4580]: I0112 13:07:54.922706 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:54 crc kubenswrapper[4580]: I0112 13:07:54.922715 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:54Z","lastTransitionTime":"2026-01-12T13:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:55 crc kubenswrapper[4580]: I0112 13:07:55.024519 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:55 crc kubenswrapper[4580]: I0112 13:07:55.024551 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:55 crc kubenswrapper[4580]: I0112 13:07:55.024559 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:55 crc kubenswrapper[4580]: I0112 13:07:55.024572 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:55 crc kubenswrapper[4580]: I0112 13:07:55.024583 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:55Z","lastTransitionTime":"2026-01-12T13:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:55 crc kubenswrapper[4580]: I0112 13:07:55.126667 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:55 crc kubenswrapper[4580]: I0112 13:07:55.126691 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:55 crc kubenswrapper[4580]: I0112 13:07:55.126699 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:55 crc kubenswrapper[4580]: I0112 13:07:55.126709 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:55 crc kubenswrapper[4580]: I0112 13:07:55.126717 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:55Z","lastTransitionTime":"2026-01-12T13:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:55 crc kubenswrapper[4580]: I0112 13:07:55.228247 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:55 crc kubenswrapper[4580]: I0112 13:07:55.228278 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:55 crc kubenswrapper[4580]: I0112 13:07:55.228287 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:55 crc kubenswrapper[4580]: I0112 13:07:55.228298 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:55 crc kubenswrapper[4580]: I0112 13:07:55.228307 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:55Z","lastTransitionTime":"2026-01-12T13:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:55 crc kubenswrapper[4580]: I0112 13:07:55.281318 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 12 13:07:55 crc kubenswrapper[4580]: E0112 13:07:55.281425 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 12 13:07:55 crc kubenswrapper[4580]: I0112 13:07:55.281436 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:07:55 crc kubenswrapper[4580]: E0112 13:07:55.281531 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 12 13:07:55 crc kubenswrapper[4580]: I0112 13:07:55.329702 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:55 crc kubenswrapper[4580]: I0112 13:07:55.329733 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:55 crc kubenswrapper[4580]: I0112 13:07:55.329743 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:55 crc kubenswrapper[4580]: I0112 13:07:55.329763 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:55 crc kubenswrapper[4580]: I0112 13:07:55.329771 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:55Z","lastTransitionTime":"2026-01-12T13:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:55 crc kubenswrapper[4580]: I0112 13:07:55.431916 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:55 crc kubenswrapper[4580]: I0112 13:07:55.431949 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:55 crc kubenswrapper[4580]: I0112 13:07:55.431960 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:55 crc kubenswrapper[4580]: I0112 13:07:55.431991 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:55 crc kubenswrapper[4580]: I0112 13:07:55.432003 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:55Z","lastTransitionTime":"2026-01-12T13:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:55 crc kubenswrapper[4580]: I0112 13:07:55.533540 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:55 crc kubenswrapper[4580]: I0112 13:07:55.533584 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:55 crc kubenswrapper[4580]: I0112 13:07:55.533597 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:55 crc kubenswrapper[4580]: I0112 13:07:55.533611 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:55 crc kubenswrapper[4580]: I0112 13:07:55.533626 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:55Z","lastTransitionTime":"2026-01-12T13:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:55 crc kubenswrapper[4580]: I0112 13:07:55.635839 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:55 crc kubenswrapper[4580]: I0112 13:07:55.635876 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:55 crc kubenswrapper[4580]: I0112 13:07:55.635886 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:55 crc kubenswrapper[4580]: I0112 13:07:55.635897 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:55 crc kubenswrapper[4580]: I0112 13:07:55.635906 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:55Z","lastTransitionTime":"2026-01-12T13:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:55 crc kubenswrapper[4580]: I0112 13:07:55.737913 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:55 crc kubenswrapper[4580]: I0112 13:07:55.737967 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:55 crc kubenswrapper[4580]: I0112 13:07:55.737981 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:55 crc kubenswrapper[4580]: I0112 13:07:55.738001 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:55 crc kubenswrapper[4580]: I0112 13:07:55.738014 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:55Z","lastTransitionTime":"2026-01-12T13:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:55 crc kubenswrapper[4580]: I0112 13:07:55.839817 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:55 crc kubenswrapper[4580]: I0112 13:07:55.839852 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:55 crc kubenswrapper[4580]: I0112 13:07:55.839862 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:55 crc kubenswrapper[4580]: I0112 13:07:55.839872 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:55 crc kubenswrapper[4580]: I0112 13:07:55.839880 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:55Z","lastTransitionTime":"2026-01-12T13:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:55 crc kubenswrapper[4580]: I0112 13:07:55.941822 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:55 crc kubenswrapper[4580]: I0112 13:07:55.941880 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:55 crc kubenswrapper[4580]: I0112 13:07:55.941891 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:55 crc kubenswrapper[4580]: I0112 13:07:55.941904 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:55 crc kubenswrapper[4580]: I0112 13:07:55.941915 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:55Z","lastTransitionTime":"2026-01-12T13:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:56 crc kubenswrapper[4580]: I0112 13:07:56.043581 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:56 crc kubenswrapper[4580]: I0112 13:07:56.043644 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:56 crc kubenswrapper[4580]: I0112 13:07:56.043659 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:56 crc kubenswrapper[4580]: I0112 13:07:56.043675 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:56 crc kubenswrapper[4580]: I0112 13:07:56.043686 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:56Z","lastTransitionTime":"2026-01-12T13:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:56 crc kubenswrapper[4580]: I0112 13:07:56.145325 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:56 crc kubenswrapper[4580]: I0112 13:07:56.145359 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:56 crc kubenswrapper[4580]: I0112 13:07:56.145369 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:56 crc kubenswrapper[4580]: I0112 13:07:56.145381 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:56 crc kubenswrapper[4580]: I0112 13:07:56.145390 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:56Z","lastTransitionTime":"2026-01-12T13:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:56 crc kubenswrapper[4580]: I0112 13:07:56.247045 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:56 crc kubenswrapper[4580]: I0112 13:07:56.247076 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:56 crc kubenswrapper[4580]: I0112 13:07:56.247086 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:56 crc kubenswrapper[4580]: I0112 13:07:56.247128 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:56 crc kubenswrapper[4580]: I0112 13:07:56.247140 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:56Z","lastTransitionTime":"2026-01-12T13:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:56 crc kubenswrapper[4580]: I0112 13:07:56.280787 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 12 13:07:56 crc kubenswrapper[4580]: I0112 13:07:56.280794 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jw27h" Jan 12 13:07:56 crc kubenswrapper[4580]: E0112 13:07:56.280912 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 12 13:07:56 crc kubenswrapper[4580]: E0112 13:07:56.281009 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jw27h" podUID="5066d8fa-2cee-4764-a817-b819d3876638" Jan 12 13:07:56 crc kubenswrapper[4580]: I0112 13:07:56.348943 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:56 crc kubenswrapper[4580]: I0112 13:07:56.349333 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:56 crc kubenswrapper[4580]: I0112 13:07:56.349396 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:56 crc kubenswrapper[4580]: I0112 13:07:56.349452 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:56 crc kubenswrapper[4580]: I0112 13:07:56.349515 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:56Z","lastTransitionTime":"2026-01-12T13:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:56 crc kubenswrapper[4580]: I0112 13:07:56.451572 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:56 crc kubenswrapper[4580]: I0112 13:07:56.451601 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:56 crc kubenswrapper[4580]: I0112 13:07:56.451612 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:56 crc kubenswrapper[4580]: I0112 13:07:56.451642 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:56 crc kubenswrapper[4580]: I0112 13:07:56.451652 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:56Z","lastTransitionTime":"2026-01-12T13:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:56 crc kubenswrapper[4580]: I0112 13:07:56.553289 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:56 crc kubenswrapper[4580]: I0112 13:07:56.553314 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:56 crc kubenswrapper[4580]: I0112 13:07:56.553325 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:56 crc kubenswrapper[4580]: I0112 13:07:56.553338 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:56 crc kubenswrapper[4580]: I0112 13:07:56.553354 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:56Z","lastTransitionTime":"2026-01-12T13:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:56 crc kubenswrapper[4580]: I0112 13:07:56.655404 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:56 crc kubenswrapper[4580]: I0112 13:07:56.655430 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:56 crc kubenswrapper[4580]: I0112 13:07:56.655457 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:56 crc kubenswrapper[4580]: I0112 13:07:56.655475 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:56 crc kubenswrapper[4580]: I0112 13:07:56.655484 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:56Z","lastTransitionTime":"2026-01-12T13:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:56 crc kubenswrapper[4580]: I0112 13:07:56.757658 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:56 crc kubenswrapper[4580]: I0112 13:07:56.757683 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:56 crc kubenswrapper[4580]: I0112 13:07:56.757692 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:56 crc kubenswrapper[4580]: I0112 13:07:56.757707 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:56 crc kubenswrapper[4580]: I0112 13:07:56.757715 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:56Z","lastTransitionTime":"2026-01-12T13:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:56 crc kubenswrapper[4580]: I0112 13:07:56.859720 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:56 crc kubenswrapper[4580]: I0112 13:07:56.859744 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:56 crc kubenswrapper[4580]: I0112 13:07:56.859756 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:56 crc kubenswrapper[4580]: I0112 13:07:56.859776 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:56 crc kubenswrapper[4580]: I0112 13:07:56.859785 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:56Z","lastTransitionTime":"2026-01-12T13:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:56 crc kubenswrapper[4580]: I0112 13:07:56.962025 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:56 crc kubenswrapper[4580]: I0112 13:07:56.962052 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:56 crc kubenswrapper[4580]: I0112 13:07:56.962060 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:56 crc kubenswrapper[4580]: I0112 13:07:56.962071 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:56 crc kubenswrapper[4580]: I0112 13:07:56.962080 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:56Z","lastTransitionTime":"2026-01-12T13:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:57 crc kubenswrapper[4580]: I0112 13:07:57.064215 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:57 crc kubenswrapper[4580]: I0112 13:07:57.064251 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:57 crc kubenswrapper[4580]: I0112 13:07:57.064260 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:57 crc kubenswrapper[4580]: I0112 13:07:57.064268 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:57 crc kubenswrapper[4580]: I0112 13:07:57.064276 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:57Z","lastTransitionTime":"2026-01-12T13:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:57 crc kubenswrapper[4580]: I0112 13:07:57.166448 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:57 crc kubenswrapper[4580]: I0112 13:07:57.166475 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:57 crc kubenswrapper[4580]: I0112 13:07:57.166485 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:57 crc kubenswrapper[4580]: I0112 13:07:57.166495 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:57 crc kubenswrapper[4580]: I0112 13:07:57.166503 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:57Z","lastTransitionTime":"2026-01-12T13:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:57 crc kubenswrapper[4580]: I0112 13:07:57.268548 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:57 crc kubenswrapper[4580]: I0112 13:07:57.268576 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:57 crc kubenswrapper[4580]: I0112 13:07:57.268585 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:57 crc kubenswrapper[4580]: I0112 13:07:57.268596 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:57 crc kubenswrapper[4580]: I0112 13:07:57.268604 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:57Z","lastTransitionTime":"2026-01-12T13:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:57 crc kubenswrapper[4580]: I0112 13:07:57.281673 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 12 13:07:57 crc kubenswrapper[4580]: E0112 13:07:57.281821 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 12 13:07:57 crc kubenswrapper[4580]: I0112 13:07:57.281978 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:07:57 crc kubenswrapper[4580]: E0112 13:07:57.282184 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 12 13:07:57 crc kubenswrapper[4580]: I0112 13:07:57.369651 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:57 crc kubenswrapper[4580]: I0112 13:07:57.369678 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:57 crc kubenswrapper[4580]: I0112 13:07:57.369689 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:57 crc kubenswrapper[4580]: I0112 13:07:57.369699 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:57 crc kubenswrapper[4580]: I0112 13:07:57.369707 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:57Z","lastTransitionTime":"2026-01-12T13:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:57 crc kubenswrapper[4580]: I0112 13:07:57.471718 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:57 crc kubenswrapper[4580]: I0112 13:07:57.471750 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:57 crc kubenswrapper[4580]: I0112 13:07:57.471760 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:57 crc kubenswrapper[4580]: I0112 13:07:57.471785 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:57 crc kubenswrapper[4580]: I0112 13:07:57.471795 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:57Z","lastTransitionTime":"2026-01-12T13:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:57 crc kubenswrapper[4580]: I0112 13:07:57.573009 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:57 crc kubenswrapper[4580]: I0112 13:07:57.573033 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:57 crc kubenswrapper[4580]: I0112 13:07:57.573044 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:57 crc kubenswrapper[4580]: I0112 13:07:57.573057 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:57 crc kubenswrapper[4580]: I0112 13:07:57.573066 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:57Z","lastTransitionTime":"2026-01-12T13:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:57 crc kubenswrapper[4580]: I0112 13:07:57.676122 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:57 crc kubenswrapper[4580]: I0112 13:07:57.676154 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:57 crc kubenswrapper[4580]: I0112 13:07:57.676187 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:57 crc kubenswrapper[4580]: I0112 13:07:57.676205 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:57 crc kubenswrapper[4580]: I0112 13:07:57.676216 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:57Z","lastTransitionTime":"2026-01-12T13:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:57 crc kubenswrapper[4580]: I0112 13:07:57.778480 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:57 crc kubenswrapper[4580]: I0112 13:07:57.778535 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:57 crc kubenswrapper[4580]: I0112 13:07:57.778545 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:57 crc kubenswrapper[4580]: I0112 13:07:57.778565 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:57 crc kubenswrapper[4580]: I0112 13:07:57.778578 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:57Z","lastTransitionTime":"2026-01-12T13:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:57 crc kubenswrapper[4580]: I0112 13:07:57.881060 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:57 crc kubenswrapper[4580]: I0112 13:07:57.881092 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:57 crc kubenswrapper[4580]: I0112 13:07:57.881121 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:57 crc kubenswrapper[4580]: I0112 13:07:57.881134 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:57 crc kubenswrapper[4580]: I0112 13:07:57.881143 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:57Z","lastTransitionTime":"2026-01-12T13:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:57 crc kubenswrapper[4580]: I0112 13:07:57.983509 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:57 crc kubenswrapper[4580]: I0112 13:07:57.983540 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:57 crc kubenswrapper[4580]: I0112 13:07:57.983548 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:57 crc kubenswrapper[4580]: I0112 13:07:57.983563 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:57 crc kubenswrapper[4580]: I0112 13:07:57.983571 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:57Z","lastTransitionTime":"2026-01-12T13:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:58 crc kubenswrapper[4580]: I0112 13:07:58.085298 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:58 crc kubenswrapper[4580]: I0112 13:07:58.085415 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:58 crc kubenswrapper[4580]: I0112 13:07:58.085484 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:58 crc kubenswrapper[4580]: I0112 13:07:58.085543 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:58 crc kubenswrapper[4580]: I0112 13:07:58.085605 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:58Z","lastTransitionTime":"2026-01-12T13:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:58 crc kubenswrapper[4580]: I0112 13:07:58.187795 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:58 crc kubenswrapper[4580]: I0112 13:07:58.187898 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:58 crc kubenswrapper[4580]: I0112 13:07:58.187957 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:58 crc kubenswrapper[4580]: I0112 13:07:58.188032 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:58 crc kubenswrapper[4580]: I0112 13:07:58.188121 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:58Z","lastTransitionTime":"2026-01-12T13:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:58 crc kubenswrapper[4580]: I0112 13:07:58.281383 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 12 13:07:58 crc kubenswrapper[4580]: I0112 13:07:58.281471 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jw27h" Jan 12 13:07:58 crc kubenswrapper[4580]: E0112 13:07:58.281643 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 12 13:07:58 crc kubenswrapper[4580]: E0112 13:07:58.281775 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jw27h" podUID="5066d8fa-2cee-4764-a817-b819d3876638" Jan 12 13:07:58 crc kubenswrapper[4580]: I0112 13:07:58.289637 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:58 crc kubenswrapper[4580]: I0112 13:07:58.289750 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:58 crc kubenswrapper[4580]: I0112 13:07:58.289829 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:58 crc kubenswrapper[4580]: I0112 13:07:58.289892 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:58 crc kubenswrapper[4580]: I0112 13:07:58.289957 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:58Z","lastTransitionTime":"2026-01-12T13:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:58 crc kubenswrapper[4580]: I0112 13:07:58.392003 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:58 crc kubenswrapper[4580]: I0112 13:07:58.392038 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:58 crc kubenswrapper[4580]: I0112 13:07:58.392048 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:58 crc kubenswrapper[4580]: I0112 13:07:58.392061 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:58 crc kubenswrapper[4580]: I0112 13:07:58.392072 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:58Z","lastTransitionTime":"2026-01-12T13:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:58 crc kubenswrapper[4580]: I0112 13:07:58.494140 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:58 crc kubenswrapper[4580]: I0112 13:07:58.494178 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:58 crc kubenswrapper[4580]: I0112 13:07:58.494190 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:58 crc kubenswrapper[4580]: I0112 13:07:58.494203 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:58 crc kubenswrapper[4580]: I0112 13:07:58.494214 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:58Z","lastTransitionTime":"2026-01-12T13:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:58 crc kubenswrapper[4580]: I0112 13:07:58.596087 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:58 crc kubenswrapper[4580]: I0112 13:07:58.596153 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:58 crc kubenswrapper[4580]: I0112 13:07:58.596167 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:58 crc kubenswrapper[4580]: I0112 13:07:58.596182 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:58 crc kubenswrapper[4580]: I0112 13:07:58.596191 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:58Z","lastTransitionTime":"2026-01-12T13:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:58 crc kubenswrapper[4580]: I0112 13:07:58.697742 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:58 crc kubenswrapper[4580]: I0112 13:07:58.697780 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:58 crc kubenswrapper[4580]: I0112 13:07:58.697791 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:58 crc kubenswrapper[4580]: I0112 13:07:58.697811 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:58 crc kubenswrapper[4580]: I0112 13:07:58.697823 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:58Z","lastTransitionTime":"2026-01-12T13:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:58 crc kubenswrapper[4580]: I0112 13:07:58.799777 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:58 crc kubenswrapper[4580]: I0112 13:07:58.799820 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:58 crc kubenswrapper[4580]: I0112 13:07:58.799831 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:58 crc kubenswrapper[4580]: I0112 13:07:58.799842 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:58 crc kubenswrapper[4580]: I0112 13:07:58.799850 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:58Z","lastTransitionTime":"2026-01-12T13:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:58 crc kubenswrapper[4580]: I0112 13:07:58.901937 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:58 crc kubenswrapper[4580]: I0112 13:07:58.901976 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:58 crc kubenswrapper[4580]: I0112 13:07:58.901985 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:58 crc kubenswrapper[4580]: I0112 13:07:58.901999 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:58 crc kubenswrapper[4580]: I0112 13:07:58.902007 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:58Z","lastTransitionTime":"2026-01-12T13:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:59 crc kubenswrapper[4580]: I0112 13:07:59.004126 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:59 crc kubenswrapper[4580]: I0112 13:07:59.004156 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:59 crc kubenswrapper[4580]: I0112 13:07:59.004166 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:59 crc kubenswrapper[4580]: I0112 13:07:59.004179 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:59 crc kubenswrapper[4580]: I0112 13:07:59.004188 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:59Z","lastTransitionTime":"2026-01-12T13:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:59 crc kubenswrapper[4580]: I0112 13:07:59.106246 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:59 crc kubenswrapper[4580]: I0112 13:07:59.106522 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:59 crc kubenswrapper[4580]: I0112 13:07:59.106606 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:59 crc kubenswrapper[4580]: I0112 13:07:59.106681 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:59 crc kubenswrapper[4580]: I0112 13:07:59.106741 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:59Z","lastTransitionTime":"2026-01-12T13:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:59 crc kubenswrapper[4580]: I0112 13:07:59.208916 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:59 crc kubenswrapper[4580]: I0112 13:07:59.209155 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:59 crc kubenswrapper[4580]: I0112 13:07:59.209225 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:59 crc kubenswrapper[4580]: I0112 13:07:59.209289 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:59 crc kubenswrapper[4580]: I0112 13:07:59.209352 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:59Z","lastTransitionTime":"2026-01-12T13:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:07:59 crc kubenswrapper[4580]: I0112 13:07:59.281538 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 12 13:07:59 crc kubenswrapper[4580]: I0112 13:07:59.281646 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:07:59 crc kubenswrapper[4580]: E0112 13:07:59.281789 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 12 13:07:59 crc kubenswrapper[4580]: E0112 13:07:59.281896 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 12 13:07:59 crc kubenswrapper[4580]: I0112 13:07:59.310569 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:59 crc kubenswrapper[4580]: I0112 13:07:59.310594 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:59 crc kubenswrapper[4580]: I0112 13:07:59.310603 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:59 crc kubenswrapper[4580]: I0112 13:07:59.310614 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:59 crc kubenswrapper[4580]: I0112 13:07:59.310622 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:59Z","lastTransitionTime":"2026-01-12T13:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:59 crc kubenswrapper[4580]: I0112 13:07:59.412425 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:59 crc kubenswrapper[4580]: I0112 13:07:59.412900 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:59 crc kubenswrapper[4580]: I0112 13:07:59.412968 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:59 crc kubenswrapper[4580]: I0112 13:07:59.413038 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:59 crc kubenswrapper[4580]: I0112 13:07:59.413124 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:59Z","lastTransitionTime":"2026-01-12T13:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:59 crc kubenswrapper[4580]: I0112 13:07:59.515362 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:59 crc kubenswrapper[4580]: I0112 13:07:59.515396 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:59 crc kubenswrapper[4580]: I0112 13:07:59.515406 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:59 crc kubenswrapper[4580]: I0112 13:07:59.515419 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:59 crc kubenswrapper[4580]: I0112 13:07:59.515428 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:59Z","lastTransitionTime":"2026-01-12T13:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:59 crc kubenswrapper[4580]: I0112 13:07:59.617230 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:59 crc kubenswrapper[4580]: I0112 13:07:59.617264 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:59 crc kubenswrapper[4580]: I0112 13:07:59.617276 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:59 crc kubenswrapper[4580]: I0112 13:07:59.617290 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:59 crc kubenswrapper[4580]: I0112 13:07:59.617299 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:59Z","lastTransitionTime":"2026-01-12T13:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:59 crc kubenswrapper[4580]: I0112 13:07:59.718584 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:59 crc kubenswrapper[4580]: I0112 13:07:59.718619 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:59 crc kubenswrapper[4580]: I0112 13:07:59.718628 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:59 crc kubenswrapper[4580]: I0112 13:07:59.718639 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:59 crc kubenswrapper[4580]: I0112 13:07:59.718648 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:59Z","lastTransitionTime":"2026-01-12T13:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:59 crc kubenswrapper[4580]: I0112 13:07:59.819580 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:59 crc kubenswrapper[4580]: I0112 13:07:59.819624 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:59 crc kubenswrapper[4580]: I0112 13:07:59.819635 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:59 crc kubenswrapper[4580]: I0112 13:07:59.819648 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:59 crc kubenswrapper[4580]: I0112 13:07:59.819659 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:59Z","lastTransitionTime":"2026-01-12T13:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:07:59 crc kubenswrapper[4580]: I0112 13:07:59.921220 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:07:59 crc kubenswrapper[4580]: I0112 13:07:59.921243 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:07:59 crc kubenswrapper[4580]: I0112 13:07:59.921252 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:07:59 crc kubenswrapper[4580]: I0112 13:07:59.921263 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:07:59 crc kubenswrapper[4580]: I0112 13:07:59.921271 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:07:59Z","lastTransitionTime":"2026-01-12T13:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:00 crc kubenswrapper[4580]: I0112 13:08:00.023034 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:00 crc kubenswrapper[4580]: I0112 13:08:00.023068 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:00 crc kubenswrapper[4580]: I0112 13:08:00.023076 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:00 crc kubenswrapper[4580]: I0112 13:08:00.023090 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:00 crc kubenswrapper[4580]: I0112 13:08:00.023115 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:00Z","lastTransitionTime":"2026-01-12T13:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:00 crc kubenswrapper[4580]: I0112 13:08:00.124593 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:00 crc kubenswrapper[4580]: I0112 13:08:00.124624 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:00 crc kubenswrapper[4580]: I0112 13:08:00.124632 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:00 crc kubenswrapper[4580]: I0112 13:08:00.124644 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:00 crc kubenswrapper[4580]: I0112 13:08:00.124654 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:00Z","lastTransitionTime":"2026-01-12T13:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:00 crc kubenswrapper[4580]: I0112 13:08:00.226966 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:00 crc kubenswrapper[4580]: I0112 13:08:00.227001 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:00 crc kubenswrapper[4580]: I0112 13:08:00.227011 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:00 crc kubenswrapper[4580]: I0112 13:08:00.227023 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:00 crc kubenswrapper[4580]: I0112 13:08:00.227033 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:00Z","lastTransitionTime":"2026-01-12T13:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:08:00 crc kubenswrapper[4580]: I0112 13:08:00.280759 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 12 13:08:00 crc kubenswrapper[4580]: E0112 13:08:00.280865 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 12 13:08:00 crc kubenswrapper[4580]: I0112 13:08:00.280767 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jw27h" Jan 12 13:08:00 crc kubenswrapper[4580]: E0112 13:08:00.280959 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jw27h" podUID="5066d8fa-2cee-4764-a817-b819d3876638" Jan 12 13:08:00 crc kubenswrapper[4580]: I0112 13:08:00.328675 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:00 crc kubenswrapper[4580]: I0112 13:08:00.328715 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:00 crc kubenswrapper[4580]: I0112 13:08:00.328732 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:00 crc kubenswrapper[4580]: I0112 13:08:00.328747 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:00 crc kubenswrapper[4580]: I0112 13:08:00.328759 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:00Z","lastTransitionTime":"2026-01-12T13:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:00 crc kubenswrapper[4580]: I0112 13:08:00.430680 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:00 crc kubenswrapper[4580]: I0112 13:08:00.430715 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:00 crc kubenswrapper[4580]: I0112 13:08:00.430724 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:00 crc kubenswrapper[4580]: I0112 13:08:00.430736 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:00 crc kubenswrapper[4580]: I0112 13:08:00.430744 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:00Z","lastTransitionTime":"2026-01-12T13:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:00 crc kubenswrapper[4580]: I0112 13:08:00.532337 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:00 crc kubenswrapper[4580]: I0112 13:08:00.532366 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:00 crc kubenswrapper[4580]: I0112 13:08:00.532375 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:00 crc kubenswrapper[4580]: I0112 13:08:00.532384 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:00 crc kubenswrapper[4580]: I0112 13:08:00.532391 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:00Z","lastTransitionTime":"2026-01-12T13:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:00 crc kubenswrapper[4580]: I0112 13:08:00.633583 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:00 crc kubenswrapper[4580]: I0112 13:08:00.633613 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:00 crc kubenswrapper[4580]: I0112 13:08:00.633621 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:00 crc kubenswrapper[4580]: I0112 13:08:00.633631 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:00 crc kubenswrapper[4580]: I0112 13:08:00.633639 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:00Z","lastTransitionTime":"2026-01-12T13:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:00 crc kubenswrapper[4580]: I0112 13:08:00.735406 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:00 crc kubenswrapper[4580]: I0112 13:08:00.735443 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:00 crc kubenswrapper[4580]: I0112 13:08:00.735452 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:00 crc kubenswrapper[4580]: I0112 13:08:00.735465 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:00 crc kubenswrapper[4580]: I0112 13:08:00.735474 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:00Z","lastTransitionTime":"2026-01-12T13:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:00 crc kubenswrapper[4580]: I0112 13:08:00.837436 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:00 crc kubenswrapper[4580]: I0112 13:08:00.837470 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:00 crc kubenswrapper[4580]: I0112 13:08:00.837479 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:00 crc kubenswrapper[4580]: I0112 13:08:00.837491 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:00 crc kubenswrapper[4580]: I0112 13:08:00.837500 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:00Z","lastTransitionTime":"2026-01-12T13:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:00 crc kubenswrapper[4580]: I0112 13:08:00.939153 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:00 crc kubenswrapper[4580]: I0112 13:08:00.939189 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:00 crc kubenswrapper[4580]: I0112 13:08:00.939197 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:00 crc kubenswrapper[4580]: I0112 13:08:00.939210 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:00 crc kubenswrapper[4580]: I0112 13:08:00.939219 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:00Z","lastTransitionTime":"2026-01-12T13:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:01 crc kubenswrapper[4580]: I0112 13:08:01.040711 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:01 crc kubenswrapper[4580]: I0112 13:08:01.040746 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:01 crc kubenswrapper[4580]: I0112 13:08:01.040755 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:01 crc kubenswrapper[4580]: I0112 13:08:01.040768 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:01 crc kubenswrapper[4580]: I0112 13:08:01.040778 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:01Z","lastTransitionTime":"2026-01-12T13:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:01 crc kubenswrapper[4580]: I0112 13:08:01.142516 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:01 crc kubenswrapper[4580]: I0112 13:08:01.142547 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:01 crc kubenswrapper[4580]: I0112 13:08:01.142556 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:01 crc kubenswrapper[4580]: I0112 13:08:01.142567 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:01 crc kubenswrapper[4580]: I0112 13:08:01.142576 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:01Z","lastTransitionTime":"2026-01-12T13:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:01 crc kubenswrapper[4580]: I0112 13:08:01.243957 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:01 crc kubenswrapper[4580]: I0112 13:08:01.243984 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:01 crc kubenswrapper[4580]: I0112 13:08:01.243995 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:01 crc kubenswrapper[4580]: I0112 13:08:01.244006 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:01 crc kubenswrapper[4580]: I0112 13:08:01.244014 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:01Z","lastTransitionTime":"2026-01-12T13:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:08:01 crc kubenswrapper[4580]: I0112 13:08:01.281511 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 12 13:08:01 crc kubenswrapper[4580]: I0112 13:08:01.281555 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:08:01 crc kubenswrapper[4580]: E0112 13:08:01.281640 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 12 13:08:01 crc kubenswrapper[4580]: E0112 13:08:01.281732 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 12 13:08:01 crc kubenswrapper[4580]: I0112 13:08:01.294700 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fb543f1489aa79642944188788308013ed9b6bacb720a3ee689b376cbc6a33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:08:01Z is after 2025-08-24T17:21:41Z" Jan 12 13:08:01 crc kubenswrapper[4580]: I0112 13:08:01.302724 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e120eaa6bd8e36a0bc509f7877252fbf4b0cebb89222dd193f75502e472fa7af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f05ca3c8a1887284f1162c44d1b917ad955eb8d77b816e830caddffdf0430383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:08:01Z is after 2025-08-24T17:21:41Z" Jan 12 13:08:01 crc kubenswrapper[4580]: I0112 13:08:01.311494 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:08:01Z is after 2025-08-24T17:21:41Z" Jan 12 13:08:01 crc kubenswrapper[4580]: I0112 13:08:01.318954 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:08:01Z is after 2025-08-24T17:21:41Z" Jan 12 13:08:01 crc kubenswrapper[4580]: I0112 13:08:01.330921 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35b1ac8c-9d11-4c54-98ab-fa848030f05e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1088ffa1a5bf02ca8606518a6f8c9cbeba544651dfafbb34e8860c2a12ffc1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c98177e2b081aadb6fd03620e308bb5d9ff403f1498eb875f7cf6d836dd23aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cea93cd026e7a60c22105833d2c3ada192fc16d45f46e5c9ce2652e94f92fab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c811167080fb15b5c19b8b57f76f4b8c5b2ed87d43d1b320ad024683ab58b65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14411e27d1e7de0627ca0d6f0ecbca70787ef8e9311ff3ffbb923da942e47955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://200ede5d7f69bb74d8e7d1b5081850d73057f7aef07049cab7a4dd1382de0cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://200ede5d7f69bb74d8e7d1b5081850d73057f7aef07049cab7a4dd1382de0cfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04470dc724661e24dc43e182f9c5dc106623e8dfb269280e6dc0fc0710f6a4a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04470dc724661e24dc43e182f9c5dc106623e8dfb269280e6dc0fc0710f6a4a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da31efcbced890b1046b1f058c1c00e4d2788162749c1da32d87c8b59360aa58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da31efcbced890b1046b1f058c1c00e4d2788162749c1da32d87c8b59360aa58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:08:01Z is after 2025-08-24T17:21:41Z" Jan 12 13:08:01 crc kubenswrapper[4580]: I0112 13:08:01.337537 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8ch98" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f20fb33-a98a-4b04-81b9-5ea16ae9f57c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://643e92b14688d35a567c7351e9231a8855ec7d9704cc97466c2d901c4525108a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1
b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nbmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8ch98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:08:01Z is after 2025-08-24T17:21:41Z" Jan 12 13:08:01 crc kubenswrapper[4580]: I0112 13:08:01.343923 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jw27h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5066d8fa-2cee-4764-a817-b819d3876638\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbqm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbqm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:07:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jw27h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:08:01Z is after 2025-08-24T17:21:41Z" Jan 12 13:08:01 crc 
kubenswrapper[4580]: I0112 13:08:01.346048 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:01 crc kubenswrapper[4580]: I0112 13:08:01.346070 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:01 crc kubenswrapper[4580]: I0112 13:08:01.346078 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:01 crc kubenswrapper[4580]: I0112 13:08:01.346089 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:01 crc kubenswrapper[4580]: I0112 13:08:01.346098 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:01Z","lastTransitionTime":"2026-01-12T13:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:01 crc kubenswrapper[4580]: I0112 13:08:01.351709 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5b34839-7efb-4fe1-ab7f-7d5b1edbf09a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d5f5c5f418e2ffb24aff3f3056f26725003da15b14ea3f503039403320803a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afcaaf941d0811f34d5bb6d98ebedb
eca17d15c8ce48a5604758570aa393d700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c1f9fb31f42b2e87cf98227241e7c66b834d473dc625999d5cf28df80b5076b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://485ad5c9f5a1a0f3219b48e7c2b703985f426f1e068b12812f208e5843a98224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://485ad5c9f5a1a0f3219b48e7c2b703985f426f1e068b12812f208e5843a98224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:08:01Z is after 2025-08-24T17:21:41Z" Jan 12 13:08:01 crc kubenswrapper[4580]: I0112 13:08:01.361347 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14cae238-29c1-4657-b3f0-6a834484f48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b1b813e14b2e613be951c247a67eb9b5b29604c639ec2c8a26c652911e0a342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc8b55ba464a72a72e6361e6847c4e8c8b27f317e8eba5d95923fbaf62589880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://259d2e205fd4a46e432a91b0e09646a58b44d6da55b06c6d4ac87010c85babc4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00bb60e0955774504f186a916e89495432d2ea6a6b01cadbbe0cc6871383a030\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:08:01Z is after 2025-08-24T17:21:41Z" Jan 12 13:08:01 crc kubenswrapper[4580]: I0112 13:08:01.369802 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:08:01Z is after 2025-08-24T17:21:41Z" Jan 12 13:08:01 crc kubenswrapper[4580]: I0112 13:08:01.377969 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nnz5s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f39bcc-5a25-4746-988b-2251fd1be8c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd8b2f8f716304f83430fe4b505d29fbb68a1a5387205e72c68b65c260c7fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56aa8b2b49ab1c35203cc85f8e7cd333d538b5739be0e36db8a3fa8263c079ce\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-12T13:07:46Z\\\",\\\"message\\\":\\\"2026-01-12T13:07:00+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6c82134a-0bcc-44ea-baee-ed00a5b086ed\\\\n2026-01-12T13:07:00+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6c82134a-0bcc-44ea-baee-ed00a5b086ed to /host/opt/cni/bin/\\\\n2026-01-12T13:07:01Z [verbose] multus-daemon started\\\\n2026-01-12T13:07:01Z [verbose] 
Readiness Indicator file check\\\\n2026-01-12T13:07:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m82m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nnz5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:08:01Z is after 2025-08-24T17:21:41Z" Jan 12 13:08:01 crc kubenswrapper[4580]: I0112 13:08:01.389666 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd4e0810-eddb-47f5-a7dc-beed7b545112\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fac5585e690495e9f154b99e6a05f94dd617a57d0826867644b56df00697b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57fdd89443f292661ae2a8f73016f4a7f2889c08ffebd55d67ada2590b4344db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc26f2fe9c241fc3ede61426abd140792056fe45e03192531431303ac9669685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://381c313bb77deef21772fc32104aec4c0325e3493c641e2bf615bd897e58c71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ac8df759fbebae467ffd8c178ca19221cefd5f3c1aa999cd23e5d1e53a6187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18b37c3b2535deee762ef305825de0a884e9088e57a34910ad2fcdaeb2d49d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20f47854f29c7f82bcbae567770052204b7fa2c092168c57ef54e14218812b98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20f47854f29c7f82bcbae567770052204b7fa2c092168c57ef54e14218812b98\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-12T13:07:50Z\\\",\\\"message\\\":\\\"or-webhook for network=default has 1 cluster-wide, 0 per-node configs, 0 template 
configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0112 13:07:50.006614 6607 base_network_controller_pods.go:916] Annotation values: ip=[10.217.0.3/23] ; mac=0a:58:0a:d9:00:03 ; gw=[10.217.0.1]\\\\nI0112 13:07:50.006323 6607 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-hn77p\\\\nI0112 13:07:50.006638 6607 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-hn77p\\\\nI0112 13:07:50.006651 6607 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-node-hn77p in node crc\\\\nI0112 13:07:50.006657 6607 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-hn77p after 0 failed attempt(s)\\\\nI0112 13:07:50.006662 6607 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-hn77p\\\\nI0112 13:07:50.006395 6607 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-thp2h after 0 failed attempt(s)\\\\nI0112 13:07:50.006671 6607 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-thp2h\\\\nI0112 13:07:50.006340 6607 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hn77p_openshift-ovn-kubernetes(fd4e0810-eddb-47f5-a7dc-beed7b545112)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00ff7f6b5ad3d1798e88f127c9bf71095fcbdfcf8f4338afa385717f1564ebf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ea8f8c492e0c30d17
1b9b05aa00966402c80f973de31557a1e13e16eb0c447b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4wmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hn77p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:08:01Z is after 2025-08-24T17:21:41Z" Jan 12 13:08:01 crc kubenswrapper[4580]: I0112 13:08:01.396140 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thp2h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0adac83c-1303-404f-85a1-c7b477da2226\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a871f86fe29e275615cf2f7f0130151c5ed56d410a0f18f5267adf08be33f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfhs8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:07:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thp2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:08:01Z is after 2025-08-24T17:21:41Z" Jan 12 13:08:01 crc kubenswrapper[4580]: I0112 13:08:01.404876 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9730289-8e50-4a9a-b474-db6c268d5a30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2262814ad3b77a7aecef6dc39226a540c7d7839576606e11c4765c858e81834\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80ca0769a1431fd4c134322feb11db7e54dd85e8f6b18a0ea43da48fe9b05005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c620e4b41d6183e427d9b95acc0e6e20f24998d210c706d93d0e8b08def41b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05c5ad3ad752dde0d33f89e89540f22790aa2905185c704d407fe605655c8e28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0c7ac25add51f8a9be790b9d47bc39155d83c4da0f3b241897d1395686feb68\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-12T13:06:58Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0112 13:06:53.362253 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0112 13:06:53.363131 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2861103618/tls.crt::/tmp/serving-cert-2861103618/tls.key\\\\\\\"\\\\nI0112 13:06:58.635258 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0112 13:06:58.636943 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0112 13:06:58.636960 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0112 13:06:58.636978 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0112 13:06:58.636983 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0112 13:06:58.642885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0112 13:06:58.642904 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0112 13:06:58.642919 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642925 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0112 13:06:58.642928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0112 13:06:58.642931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0112 13:06:58.642934 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0112 13:06:58.642937 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0112 13:06:58.645379 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eeac0b697ceba82e51d043f12dcf4c6f0028990416b1ee40c5181232d962192\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-
12T13:06:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:08:01Z is after 2025-08-24T17:21:41Z" Jan 12 13:08:01 crc kubenswrapper[4580]: I0112 13:08:01.412489 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a82c47afb3ec7afc7fa35ff0e1e85e288f9e1a908459024005a16c0c8f3b0050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-12T13:08:01Z is after 2025-08-24T17:21:41Z" Jan 12 13:08:01 crc kubenswrapper[4580]: I0112 13:08:01.419704 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3accce5d840e81a67e212ff934059ad73525c6ff3c73ed6ab4c6e2289a4d7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60b7e67369583f18d56633483204d326449c0f7456afe4b4fd1e7134eff438cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hdz6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:08:01Z is after 2025-08-24T17:21:41Z" Jan 12 13:08:01 crc kubenswrapper[4580]: I0112 13:08:01.429437 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2p6r8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2223aac-784e-4653-8939-fcbd18c70ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81fbec7b59dcc9c80a97b122e2b0e738fbbfb3eafca1bf9989fe743f28573191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1dc0fffc41810cdb9a5eeb53b19f6a23d70a8133c6e12b19df575f86a55d18\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1dc0fffc41810cdb9a5eeb53b19f6a23d70a8133c6e12b19df575f86a55d18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab60600011f08831d514dad04b97fb6b587736b18b55b1bff9a33143b9a92997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab60600011f08831d514dad04b97fb6b587736b18b55b1bff9a33143b9a92997\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2709a93c305db448fb509fbbdf606c297b26f1ae08e6b9b05933c155f59416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff2709a93c305db448fb509fbbdf606c297b26f1ae08e6b9b05933c155f59416\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88f87
08217fbcbf532b977d30ab903955722d04a00ba29ded44ce09610140e27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88f8708217fbcbf532b977d30ab903955722d04a00ba29ded44ce09610140e27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5844c48078cc7d6868f4ff81ac1a2bb878892529b11823ecabd49fad4aed60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e5844c48078cc7d6868f4ff81ac1a2bb878892529b11823ecabd49fad4aed60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:04Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2e02e66890bca8171c7112c74521a43c3458f07890228426f04c2bdfad4599\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d2e02e66890bca8171c7112c74521a43c3458f07890228426f04c2bdfad4599\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2p6r8\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:08:01Z is after 2025-08-24T17:21:41Z" Jan 12 13:08:01 crc kubenswrapper[4580]: I0112 13:08:01.436663 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vmmdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61051313-b754-4528-ade6-ffacbebafb8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a321f1ea1e9a558494aa66641fd251a100e0bdceddf5b2034bfa067c23555138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsss4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14a151ee487ef6c2e5141ec5a25b8b7e468c224b262fd09538db0e939b8cf95a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsss4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vmmdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-12T13:08:01Z is after 2025-08-24T17:21:41Z" Jan 12 13:08:01 crc kubenswrapper[4580]: I0112 13:08:01.448056 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:01 crc kubenswrapper[4580]: I0112 13:08:01.448173 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:01 crc kubenswrapper[4580]: I0112 13:08:01.448255 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:01 crc kubenswrapper[4580]: I0112 13:08:01.448332 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:01 crc kubenswrapper[4580]: I0112 13:08:01.448399 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:01Z","lastTransitionTime":"2026-01-12T13:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:01 crc kubenswrapper[4580]: I0112 13:08:01.550633 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:01 crc kubenswrapper[4580]: I0112 13:08:01.550661 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:01 crc kubenswrapper[4580]: I0112 13:08:01.550671 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:01 crc kubenswrapper[4580]: I0112 13:08:01.550683 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:01 crc kubenswrapper[4580]: I0112 13:08:01.550691 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:01Z","lastTransitionTime":"2026-01-12T13:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:01 crc kubenswrapper[4580]: I0112 13:08:01.652077 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:01 crc kubenswrapper[4580]: I0112 13:08:01.652126 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:01 crc kubenswrapper[4580]: I0112 13:08:01.652136 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:01 crc kubenswrapper[4580]: I0112 13:08:01.652148 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:01 crc kubenswrapper[4580]: I0112 13:08:01.652157 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:01Z","lastTransitionTime":"2026-01-12T13:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:01 crc kubenswrapper[4580]: I0112 13:08:01.756469 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:01 crc kubenswrapper[4580]: I0112 13:08:01.756809 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:01 crc kubenswrapper[4580]: I0112 13:08:01.756827 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:01 crc kubenswrapper[4580]: I0112 13:08:01.756847 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:01 crc kubenswrapper[4580]: I0112 13:08:01.756857 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:01Z","lastTransitionTime":"2026-01-12T13:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:01 crc kubenswrapper[4580]: I0112 13:08:01.858665 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:01 crc kubenswrapper[4580]: I0112 13:08:01.858694 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:01 crc kubenswrapper[4580]: I0112 13:08:01.858704 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:01 crc kubenswrapper[4580]: I0112 13:08:01.858724 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:01 crc kubenswrapper[4580]: I0112 13:08:01.858735 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:01Z","lastTransitionTime":"2026-01-12T13:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:01 crc kubenswrapper[4580]: I0112 13:08:01.960204 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:01 crc kubenswrapper[4580]: I0112 13:08:01.960230 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:01 crc kubenswrapper[4580]: I0112 13:08:01.960239 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:01 crc kubenswrapper[4580]: I0112 13:08:01.960248 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:01 crc kubenswrapper[4580]: I0112 13:08:01.960255 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:01Z","lastTransitionTime":"2026-01-12T13:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:02 crc kubenswrapper[4580]: I0112 13:08:02.062182 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:02 crc kubenswrapper[4580]: I0112 13:08:02.062218 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:02 crc kubenswrapper[4580]: I0112 13:08:02.062226 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:02 crc kubenswrapper[4580]: I0112 13:08:02.062236 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:02 crc kubenswrapper[4580]: I0112 13:08:02.062245 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:02Z","lastTransitionTime":"2026-01-12T13:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:02 crc kubenswrapper[4580]: I0112 13:08:02.164417 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:02 crc kubenswrapper[4580]: I0112 13:08:02.164449 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:02 crc kubenswrapper[4580]: I0112 13:08:02.164458 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:02 crc kubenswrapper[4580]: I0112 13:08:02.164471 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:02 crc kubenswrapper[4580]: I0112 13:08:02.164482 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:02Z","lastTransitionTime":"2026-01-12T13:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:02 crc kubenswrapper[4580]: I0112 13:08:02.266587 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:02 crc kubenswrapper[4580]: I0112 13:08:02.266618 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:02 crc kubenswrapper[4580]: I0112 13:08:02.266628 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:02 crc kubenswrapper[4580]: I0112 13:08:02.266639 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:02 crc kubenswrapper[4580]: I0112 13:08:02.266647 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:02Z","lastTransitionTime":"2026-01-12T13:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:08:02 crc kubenswrapper[4580]: I0112 13:08:02.280992 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jw27h" Jan 12 13:08:02 crc kubenswrapper[4580]: I0112 13:08:02.281005 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 12 13:08:02 crc kubenswrapper[4580]: E0112 13:08:02.281076 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jw27h" podUID="5066d8fa-2cee-4764-a817-b819d3876638" Jan 12 13:08:02 crc kubenswrapper[4580]: E0112 13:08:02.281174 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 12 13:08:02 crc kubenswrapper[4580]: I0112 13:08:02.367990 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:02 crc kubenswrapper[4580]: I0112 13:08:02.368032 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:02 crc kubenswrapper[4580]: I0112 13:08:02.368040 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:02 crc kubenswrapper[4580]: I0112 13:08:02.368054 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:02 crc kubenswrapper[4580]: I0112 13:08:02.368064 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:02Z","lastTransitionTime":"2026-01-12T13:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:02 crc kubenswrapper[4580]: I0112 13:08:02.469989 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:02 crc kubenswrapper[4580]: I0112 13:08:02.470021 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:02 crc kubenswrapper[4580]: I0112 13:08:02.470030 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:02 crc kubenswrapper[4580]: I0112 13:08:02.470041 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:02 crc kubenswrapper[4580]: I0112 13:08:02.470051 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:02Z","lastTransitionTime":"2026-01-12T13:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:02 crc kubenswrapper[4580]: I0112 13:08:02.572040 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:02 crc kubenswrapper[4580]: I0112 13:08:02.572072 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:02 crc kubenswrapper[4580]: I0112 13:08:02.572082 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:02 crc kubenswrapper[4580]: I0112 13:08:02.572094 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:02 crc kubenswrapper[4580]: I0112 13:08:02.572116 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:02Z","lastTransitionTime":"2026-01-12T13:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:02 crc kubenswrapper[4580]: I0112 13:08:02.674500 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:02 crc kubenswrapper[4580]: I0112 13:08:02.674536 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:02 crc kubenswrapper[4580]: I0112 13:08:02.674547 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:02 crc kubenswrapper[4580]: I0112 13:08:02.674562 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:02 crc kubenswrapper[4580]: I0112 13:08:02.674577 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:02Z","lastTransitionTime":"2026-01-12T13:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:02 crc kubenswrapper[4580]: I0112 13:08:02.776298 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:02 crc kubenswrapper[4580]: I0112 13:08:02.776337 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:02 crc kubenswrapper[4580]: I0112 13:08:02.776348 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:02 crc kubenswrapper[4580]: I0112 13:08:02.776362 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:02 crc kubenswrapper[4580]: I0112 13:08:02.776376 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:02Z","lastTransitionTime":"2026-01-12T13:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:02 crc kubenswrapper[4580]: I0112 13:08:02.877811 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:02 crc kubenswrapper[4580]: I0112 13:08:02.877841 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:02 crc kubenswrapper[4580]: I0112 13:08:02.877860 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:02 crc kubenswrapper[4580]: I0112 13:08:02.877872 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:02 crc kubenswrapper[4580]: I0112 13:08:02.877880 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:02Z","lastTransitionTime":"2026-01-12T13:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:02 crc kubenswrapper[4580]: I0112 13:08:02.982164 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:02 crc kubenswrapper[4580]: I0112 13:08:02.982212 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:02 crc kubenswrapper[4580]: I0112 13:08:02.982222 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:02 crc kubenswrapper[4580]: I0112 13:08:02.982234 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:02 crc kubenswrapper[4580]: I0112 13:08:02.982258 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:02Z","lastTransitionTime":"2026-01-12T13:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.084554 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.084584 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.084591 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.084602 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.084613 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:03Z","lastTransitionTime":"2026-01-12T13:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.108912 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.108973 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.109018 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.109045 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.109066 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:08:03 crc kubenswrapper[4580]: E0112 13:08:03.109091 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-12 13:09:07.10906334 +0000 UTC m=+146.153282050 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:08:03 crc kubenswrapper[4580]: E0112 13:08:03.109178 4580 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 12 13:08:03 crc kubenswrapper[4580]: E0112 13:08:03.109218 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-12 13:09:07.10920603 +0000 UTC m=+146.153424730 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 12 13:08:03 crc kubenswrapper[4580]: E0112 13:08:03.109231 4580 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 12 13:08:03 crc kubenswrapper[4580]: E0112 13:08:03.109244 4580 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 12 13:08:03 crc kubenswrapper[4580]: E0112 13:08:03.109255 4580 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 12 13:08:03 crc kubenswrapper[4580]: E0112 13:08:03.109283 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-12 13:09:07.10927517 +0000 UTC m=+146.153493861 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 12 13:08:03 crc kubenswrapper[4580]: E0112 13:08:03.109325 4580 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 12 13:08:03 crc kubenswrapper[4580]: E0112 13:08:03.109334 4580 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 12 13:08:03 crc kubenswrapper[4580]: E0112 13:08:03.109343 4580 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 12 13:08:03 crc kubenswrapper[4580]: E0112 13:08:03.109361 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-12 13:09:07.109356625 +0000 UTC m=+146.153575315 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 12 13:08:03 crc kubenswrapper[4580]: E0112 13:08:03.109387 4580 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 12 13:08:03 crc kubenswrapper[4580]: E0112 13:08:03.109405 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-12 13:09:07.109400358 +0000 UTC m=+146.153619048 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.185997 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.186133 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.186204 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.186271 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.186336 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:03Z","lastTransitionTime":"2026-01-12T13:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.280700 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.280719 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.280753 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.280950 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:03 crc kubenswrapper[4580]: E0112 13:08:03.280999 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.281010 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.281076 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.281087 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:03Z","lastTransitionTime":"2026-01-12T13:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:03 crc kubenswrapper[4580]: E0112 13:08:03.281119 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.288444 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 12 13:08:03 crc kubenswrapper[4580]: E0112 13:08:03.291444 4580 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:08:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:08:03Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:08:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:08:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b4cb507-f154-474c-bea1-057456e7be91\\\",\\\"systemUUID\\\":\\\"f50d9485-f990-498d-a5ee-4bb4dd1663df\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:08:03Z is after 2025-08-24T17:21:41Z" Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.294318 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.294345 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.294353 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.294364 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.294372 4580 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:03Z","lastTransitionTime":"2026-01-12T13:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:08:03 crc kubenswrapper[4580]: E0112 13:08:03.302492 4580 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:08:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:08:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:08:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:08:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b4cb507-f154-474c-bea1-057456e7be91\\\",\\\"systemUUID\\\":\\\"f50d9485-f990-498d-a5ee-4bb4dd1663df\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:08:03Z is after 2025-08-24T17:21:41Z" Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.304756 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.304780 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.304789 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.304799 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.304807 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:03Z","lastTransitionTime":"2026-01-12T13:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:03 crc kubenswrapper[4580]: E0112 13:08:03.312166 4580 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:08:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:08:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:08:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:08:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b4cb507-f154-474c-bea1-057456e7be91\\\",\\\"systemUUID\\\":\\\"f50d9485-f990-498d-a5ee-4bb4dd1663df\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:08:03Z is after 2025-08-24T17:21:41Z" Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.314570 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.314596 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.314608 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.314619 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.314628 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:03Z","lastTransitionTime":"2026-01-12T13:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:03 crc kubenswrapper[4580]: E0112 13:08:03.322778 4580 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:08:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:08:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:08:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:08:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b4cb507-f154-474c-bea1-057456e7be91\\\",\\\"systemUUID\\\":\\\"f50d9485-f990-498d-a5ee-4bb4dd1663df\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:08:03Z is after 2025-08-24T17:21:41Z" Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.325359 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.325396 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.325407 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.325416 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.325429 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:03Z","lastTransitionTime":"2026-01-12T13:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:03 crc kubenswrapper[4580]: E0112 13:08:03.334182 4580 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:08:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:08:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:08:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-12T13:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-12T13:08:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b4cb507-f154-474c-bea1-057456e7be91\\\",\\\"systemUUID\\\":\\\"f50d9485-f990-498d-a5ee-4bb4dd1663df\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:08:03Z is after 2025-08-24T17:21:41Z" Jan 12 13:08:03 crc kubenswrapper[4580]: E0112 13:08:03.334297 4580 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.335311 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.335342 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.335351 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.335385 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.335394 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:03Z","lastTransitionTime":"2026-01-12T13:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.436600 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.436626 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.436636 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.436650 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.436659 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:03Z","lastTransitionTime":"2026-01-12T13:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.538285 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.538313 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.538322 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.538333 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.538342 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:03Z","lastTransitionTime":"2026-01-12T13:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.640373 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.640427 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.640436 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.640445 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.640452 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:03Z","lastTransitionTime":"2026-01-12T13:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.741608 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.741645 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.741654 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.741667 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.741678 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:03Z","lastTransitionTime":"2026-01-12T13:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.843325 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.843354 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.843362 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.843373 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.843381 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:03Z","lastTransitionTime":"2026-01-12T13:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.945082 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.945224 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.945232 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.945258 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:03 crc kubenswrapper[4580]: I0112 13:08:03.945266 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:03Z","lastTransitionTime":"2026-01-12T13:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:04 crc kubenswrapper[4580]: I0112 13:08:04.046746 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:04 crc kubenswrapper[4580]: I0112 13:08:04.046891 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:04 crc kubenswrapper[4580]: I0112 13:08:04.046969 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:04 crc kubenswrapper[4580]: I0112 13:08:04.047043 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:04 crc kubenswrapper[4580]: I0112 13:08:04.047124 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:04Z","lastTransitionTime":"2026-01-12T13:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:04 crc kubenswrapper[4580]: I0112 13:08:04.148989 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:04 crc kubenswrapper[4580]: I0112 13:08:04.149080 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:04 crc kubenswrapper[4580]: I0112 13:08:04.149174 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:04 crc kubenswrapper[4580]: I0112 13:08:04.149244 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:04 crc kubenswrapper[4580]: I0112 13:08:04.149305 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:04Z","lastTransitionTime":"2026-01-12T13:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:04 crc kubenswrapper[4580]: I0112 13:08:04.251164 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:04 crc kubenswrapper[4580]: I0112 13:08:04.251197 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:04 crc kubenswrapper[4580]: I0112 13:08:04.251210 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:04 crc kubenswrapper[4580]: I0112 13:08:04.251223 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:04 crc kubenswrapper[4580]: I0112 13:08:04.251234 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:04Z","lastTransitionTime":"2026-01-12T13:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:08:04 crc kubenswrapper[4580]: I0112 13:08:04.281395 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 12 13:08:04 crc kubenswrapper[4580]: I0112 13:08:04.281397 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jw27h" Jan 12 13:08:04 crc kubenswrapper[4580]: E0112 13:08:04.281743 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 12 13:08:04 crc kubenswrapper[4580]: I0112 13:08:04.281905 4580 scope.go:117] "RemoveContainer" containerID="20f47854f29c7f82bcbae567770052204b7fa2c092168c57ef54e14218812b98" Jan 12 13:08:04 crc kubenswrapper[4580]: E0112 13:08:04.281917 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jw27h" podUID="5066d8fa-2cee-4764-a817-b819d3876638" Jan 12 13:08:04 crc kubenswrapper[4580]: E0112 13:08:04.282022 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-hn77p_openshift-ovn-kubernetes(fd4e0810-eddb-47f5-a7dc-beed7b545112)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" podUID="fd4e0810-eddb-47f5-a7dc-beed7b545112" Jan 12 13:08:04 crc kubenswrapper[4580]: I0112 13:08:04.352932 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:04 crc kubenswrapper[4580]: I0112 13:08:04.352965 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:04 crc kubenswrapper[4580]: I0112 13:08:04.352973 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:04 crc kubenswrapper[4580]: I0112 13:08:04.352985 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:04 crc kubenswrapper[4580]: I0112 13:08:04.352995 4580 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:04Z","lastTransitionTime":"2026-01-12T13:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:08:04 crc kubenswrapper[4580]: I0112 13:08:04.455179 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:04 crc kubenswrapper[4580]: I0112 13:08:04.455212 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:04 crc kubenswrapper[4580]: I0112 13:08:04.455220 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:04 crc kubenswrapper[4580]: I0112 13:08:04.455233 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:04 crc kubenswrapper[4580]: I0112 13:08:04.455241 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:04Z","lastTransitionTime":"2026-01-12T13:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:04 crc kubenswrapper[4580]: I0112 13:08:04.557084 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:04 crc kubenswrapper[4580]: I0112 13:08:04.557543 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:04 crc kubenswrapper[4580]: I0112 13:08:04.557625 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:04 crc kubenswrapper[4580]: I0112 13:08:04.557685 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:04 crc kubenswrapper[4580]: I0112 13:08:04.557753 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:04Z","lastTransitionTime":"2026-01-12T13:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:04 crc kubenswrapper[4580]: I0112 13:08:04.659370 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:04 crc kubenswrapper[4580]: I0112 13:08:04.659400 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:04 crc kubenswrapper[4580]: I0112 13:08:04.659407 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:04 crc kubenswrapper[4580]: I0112 13:08:04.659419 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:04 crc kubenswrapper[4580]: I0112 13:08:04.659428 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:04Z","lastTransitionTime":"2026-01-12T13:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:04 crc kubenswrapper[4580]: I0112 13:08:04.761258 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:04 crc kubenswrapper[4580]: I0112 13:08:04.761381 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:04 crc kubenswrapper[4580]: I0112 13:08:04.761455 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:04 crc kubenswrapper[4580]: I0112 13:08:04.761543 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:04 crc kubenswrapper[4580]: I0112 13:08:04.761731 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:04Z","lastTransitionTime":"2026-01-12T13:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:04 crc kubenswrapper[4580]: I0112 13:08:04.864039 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:04 crc kubenswrapper[4580]: I0112 13:08:04.864232 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:04 crc kubenswrapper[4580]: I0112 13:08:04.864320 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:04 crc kubenswrapper[4580]: I0112 13:08:04.864399 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:04 crc kubenswrapper[4580]: I0112 13:08:04.864480 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:04Z","lastTransitionTime":"2026-01-12T13:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:04 crc kubenswrapper[4580]: I0112 13:08:04.966758 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:04 crc kubenswrapper[4580]: I0112 13:08:04.966792 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:04 crc kubenswrapper[4580]: I0112 13:08:04.966804 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:04 crc kubenswrapper[4580]: I0112 13:08:04.966815 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:04 crc kubenswrapper[4580]: I0112 13:08:04.966823 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:04Z","lastTransitionTime":"2026-01-12T13:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:05 crc kubenswrapper[4580]: I0112 13:08:05.069010 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:05 crc kubenswrapper[4580]: I0112 13:08:05.069060 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:05 crc kubenswrapper[4580]: I0112 13:08:05.069070 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:05 crc kubenswrapper[4580]: I0112 13:08:05.069090 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:05 crc kubenswrapper[4580]: I0112 13:08:05.069118 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:05Z","lastTransitionTime":"2026-01-12T13:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:05 crc kubenswrapper[4580]: I0112 13:08:05.170391 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:05 crc kubenswrapper[4580]: I0112 13:08:05.170422 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:05 crc kubenswrapper[4580]: I0112 13:08:05.170430 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:05 crc kubenswrapper[4580]: I0112 13:08:05.170445 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:05 crc kubenswrapper[4580]: I0112 13:08:05.170455 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:05Z","lastTransitionTime":"2026-01-12T13:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:05 crc kubenswrapper[4580]: I0112 13:08:05.272318 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:05 crc kubenswrapper[4580]: I0112 13:08:05.272367 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:05 crc kubenswrapper[4580]: I0112 13:08:05.272375 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:05 crc kubenswrapper[4580]: I0112 13:08:05.272385 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:05 crc kubenswrapper[4580]: I0112 13:08:05.272393 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:05Z","lastTransitionTime":"2026-01-12T13:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:08:05 crc kubenswrapper[4580]: I0112 13:08:05.280686 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:08:05 crc kubenswrapper[4580]: I0112 13:08:05.280760 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 12 13:08:05 crc kubenswrapper[4580]: E0112 13:08:05.280843 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 12 13:08:05 crc kubenswrapper[4580]: E0112 13:08:05.280933 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 12 13:08:05 crc kubenswrapper[4580]: I0112 13:08:05.374172 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:05 crc kubenswrapper[4580]: I0112 13:08:05.374209 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:05 crc kubenswrapper[4580]: I0112 13:08:05.374219 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:05 crc kubenswrapper[4580]: I0112 13:08:05.374231 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:05 crc kubenswrapper[4580]: I0112 13:08:05.374243 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:05Z","lastTransitionTime":"2026-01-12T13:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:05 crc kubenswrapper[4580]: I0112 13:08:05.475911 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:05 crc kubenswrapper[4580]: I0112 13:08:05.475941 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:05 crc kubenswrapper[4580]: I0112 13:08:05.475949 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:05 crc kubenswrapper[4580]: I0112 13:08:05.475961 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:05 crc kubenswrapper[4580]: I0112 13:08:05.475969 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:05Z","lastTransitionTime":"2026-01-12T13:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:05 crc kubenswrapper[4580]: I0112 13:08:05.577238 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:05 crc kubenswrapper[4580]: I0112 13:08:05.577292 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:05 crc kubenswrapper[4580]: I0112 13:08:05.577301 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:05 crc kubenswrapper[4580]: I0112 13:08:05.577313 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:05 crc kubenswrapper[4580]: I0112 13:08:05.577322 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:05Z","lastTransitionTime":"2026-01-12T13:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:05 crc kubenswrapper[4580]: I0112 13:08:05.678918 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:05 crc kubenswrapper[4580]: I0112 13:08:05.678943 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:05 crc kubenswrapper[4580]: I0112 13:08:05.678953 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:05 crc kubenswrapper[4580]: I0112 13:08:05.678964 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:05 crc kubenswrapper[4580]: I0112 13:08:05.678973 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:05Z","lastTransitionTime":"2026-01-12T13:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:05 crc kubenswrapper[4580]: I0112 13:08:05.781342 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:05 crc kubenswrapper[4580]: I0112 13:08:05.781379 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:05 crc kubenswrapper[4580]: I0112 13:08:05.781388 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:05 crc kubenswrapper[4580]: I0112 13:08:05.781401 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:05 crc kubenswrapper[4580]: I0112 13:08:05.781410 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:05Z","lastTransitionTime":"2026-01-12T13:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:05 crc kubenswrapper[4580]: I0112 13:08:05.883616 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:05 crc kubenswrapper[4580]: I0112 13:08:05.887364 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:05 crc kubenswrapper[4580]: I0112 13:08:05.887394 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:05 crc kubenswrapper[4580]: I0112 13:08:05.887433 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:05 crc kubenswrapper[4580]: I0112 13:08:05.887450 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:05Z","lastTransitionTime":"2026-01-12T13:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:05 crc kubenswrapper[4580]: I0112 13:08:05.989692 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:05 crc kubenswrapper[4580]: I0112 13:08:05.989719 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:05 crc kubenswrapper[4580]: I0112 13:08:05.989728 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:05 crc kubenswrapper[4580]: I0112 13:08:05.989741 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:05 crc kubenswrapper[4580]: I0112 13:08:05.989749 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:05Z","lastTransitionTime":"2026-01-12T13:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:06 crc kubenswrapper[4580]: I0112 13:08:06.091705 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:06 crc kubenswrapper[4580]: I0112 13:08:06.091745 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:06 crc kubenswrapper[4580]: I0112 13:08:06.091754 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:06 crc kubenswrapper[4580]: I0112 13:08:06.091763 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:06 crc kubenswrapper[4580]: I0112 13:08:06.091773 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:06Z","lastTransitionTime":"2026-01-12T13:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:06 crc kubenswrapper[4580]: I0112 13:08:06.193805 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:06 crc kubenswrapper[4580]: I0112 13:08:06.193871 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:06 crc kubenswrapper[4580]: I0112 13:08:06.193881 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:06 crc kubenswrapper[4580]: I0112 13:08:06.193901 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:06 crc kubenswrapper[4580]: I0112 13:08:06.193911 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:06Z","lastTransitionTime":"2026-01-12T13:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:08:06 crc kubenswrapper[4580]: I0112 13:08:06.281065 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jw27h" Jan 12 13:08:06 crc kubenswrapper[4580]: I0112 13:08:06.281160 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 12 13:08:06 crc kubenswrapper[4580]: E0112 13:08:06.281287 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jw27h" podUID="5066d8fa-2cee-4764-a817-b819d3876638" Jan 12 13:08:06 crc kubenswrapper[4580]: E0112 13:08:06.281421 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 12 13:08:06 crc kubenswrapper[4580]: I0112 13:08:06.295505 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:06 crc kubenswrapper[4580]: I0112 13:08:06.295532 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:06 crc kubenswrapper[4580]: I0112 13:08:06.295541 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:06 crc kubenswrapper[4580]: I0112 13:08:06.295554 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:06 crc kubenswrapper[4580]: I0112 13:08:06.295562 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:06Z","lastTransitionTime":"2026-01-12T13:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:06 crc kubenswrapper[4580]: I0112 13:08:06.397699 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:06 crc kubenswrapper[4580]: I0112 13:08:06.397722 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:06 crc kubenswrapper[4580]: I0112 13:08:06.397731 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:06 crc kubenswrapper[4580]: I0112 13:08:06.397741 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:06 crc kubenswrapper[4580]: I0112 13:08:06.397758 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:06Z","lastTransitionTime":"2026-01-12T13:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:06 crc kubenswrapper[4580]: I0112 13:08:06.499387 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:06 crc kubenswrapper[4580]: I0112 13:08:06.499418 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:06 crc kubenswrapper[4580]: I0112 13:08:06.499429 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:06 crc kubenswrapper[4580]: I0112 13:08:06.499440 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:06 crc kubenswrapper[4580]: I0112 13:08:06.499450 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:06Z","lastTransitionTime":"2026-01-12T13:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:06 crc kubenswrapper[4580]: I0112 13:08:06.600932 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:06 crc kubenswrapper[4580]: I0112 13:08:06.600981 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:06 crc kubenswrapper[4580]: I0112 13:08:06.600992 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:06 crc kubenswrapper[4580]: I0112 13:08:06.601002 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:06 crc kubenswrapper[4580]: I0112 13:08:06.601010 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:06Z","lastTransitionTime":"2026-01-12T13:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:06 crc kubenswrapper[4580]: I0112 13:08:06.703147 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:06 crc kubenswrapper[4580]: I0112 13:08:06.703184 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:06 crc kubenswrapper[4580]: I0112 13:08:06.703192 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:06 crc kubenswrapper[4580]: I0112 13:08:06.703207 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:06 crc kubenswrapper[4580]: I0112 13:08:06.703219 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:06Z","lastTransitionTime":"2026-01-12T13:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:06 crc kubenswrapper[4580]: I0112 13:08:06.805231 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:06 crc kubenswrapper[4580]: I0112 13:08:06.805254 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:06 crc kubenswrapper[4580]: I0112 13:08:06.805263 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:06 crc kubenswrapper[4580]: I0112 13:08:06.805273 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:06 crc kubenswrapper[4580]: I0112 13:08:06.805282 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:06Z","lastTransitionTime":"2026-01-12T13:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:06 crc kubenswrapper[4580]: I0112 13:08:06.907601 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:06 crc kubenswrapper[4580]: I0112 13:08:06.907713 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:06 crc kubenswrapper[4580]: I0112 13:08:06.907786 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:06 crc kubenswrapper[4580]: I0112 13:08:06.907859 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:06 crc kubenswrapper[4580]: I0112 13:08:06.907925 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:06Z","lastTransitionTime":"2026-01-12T13:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:07 crc kubenswrapper[4580]: I0112 13:08:07.009941 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:07 crc kubenswrapper[4580]: I0112 13:08:07.009969 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:07 crc kubenswrapper[4580]: I0112 13:08:07.009978 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:07 crc kubenswrapper[4580]: I0112 13:08:07.009988 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:07 crc kubenswrapper[4580]: I0112 13:08:07.009997 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:07Z","lastTransitionTime":"2026-01-12T13:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:07 crc kubenswrapper[4580]: I0112 13:08:07.111572 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:07 crc kubenswrapper[4580]: I0112 13:08:07.111752 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:07 crc kubenswrapper[4580]: I0112 13:08:07.111890 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:07 crc kubenswrapper[4580]: I0112 13:08:07.111975 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:07 crc kubenswrapper[4580]: I0112 13:08:07.112035 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:07Z","lastTransitionTime":"2026-01-12T13:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:07 crc kubenswrapper[4580]: I0112 13:08:07.214257 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:07 crc kubenswrapper[4580]: I0112 13:08:07.214287 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:07 crc kubenswrapper[4580]: I0112 13:08:07.214296 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:07 crc kubenswrapper[4580]: I0112 13:08:07.214307 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:07 crc kubenswrapper[4580]: I0112 13:08:07.214316 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:07Z","lastTransitionTime":"2026-01-12T13:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:08:07 crc kubenswrapper[4580]: I0112 13:08:07.281256 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 12 13:08:07 crc kubenswrapper[4580]: E0112 13:08:07.281380 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 12 13:08:07 crc kubenswrapper[4580]: I0112 13:08:07.281388 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:08:07 crc kubenswrapper[4580]: E0112 13:08:07.281486 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 12 13:08:07 crc kubenswrapper[4580]: I0112 13:08:07.315359 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:07 crc kubenswrapper[4580]: I0112 13:08:07.315386 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:07 crc kubenswrapper[4580]: I0112 13:08:07.315393 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:07 crc kubenswrapper[4580]: I0112 13:08:07.315403 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:07 crc kubenswrapper[4580]: I0112 13:08:07.315413 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:07Z","lastTransitionTime":"2026-01-12T13:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:07 crc kubenswrapper[4580]: I0112 13:08:07.416471 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:07 crc kubenswrapper[4580]: I0112 13:08:07.416499 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:07 crc kubenswrapper[4580]: I0112 13:08:07.416509 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:07 crc kubenswrapper[4580]: I0112 13:08:07.416519 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:07 crc kubenswrapper[4580]: I0112 13:08:07.416526 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:07Z","lastTransitionTime":"2026-01-12T13:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:07 crc kubenswrapper[4580]: I0112 13:08:07.518656 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:07 crc kubenswrapper[4580]: I0112 13:08:07.518704 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:07 crc kubenswrapper[4580]: I0112 13:08:07.518719 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:07 crc kubenswrapper[4580]: I0112 13:08:07.518733 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:07 crc kubenswrapper[4580]: I0112 13:08:07.518742 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:07Z","lastTransitionTime":"2026-01-12T13:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:07 crc kubenswrapper[4580]: I0112 13:08:07.619988 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:07 crc kubenswrapper[4580]: I0112 13:08:07.620027 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:07 crc kubenswrapper[4580]: I0112 13:08:07.620036 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:07 crc kubenswrapper[4580]: I0112 13:08:07.620045 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:07 crc kubenswrapper[4580]: I0112 13:08:07.620054 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:07Z","lastTransitionTime":"2026-01-12T13:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:07 crc kubenswrapper[4580]: I0112 13:08:07.722253 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:07 crc kubenswrapper[4580]: I0112 13:08:07.722278 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:07 crc kubenswrapper[4580]: I0112 13:08:07.722285 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:07 crc kubenswrapper[4580]: I0112 13:08:07.722293 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:07 crc kubenswrapper[4580]: I0112 13:08:07.722301 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:07Z","lastTransitionTime":"2026-01-12T13:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:07 crc kubenswrapper[4580]: I0112 13:08:07.824442 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:07 crc kubenswrapper[4580]: I0112 13:08:07.824472 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:07 crc kubenswrapper[4580]: I0112 13:08:07.824482 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:07 crc kubenswrapper[4580]: I0112 13:08:07.824502 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:07 crc kubenswrapper[4580]: I0112 13:08:07.824511 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:07Z","lastTransitionTime":"2026-01-12T13:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:07 crc kubenswrapper[4580]: I0112 13:08:07.926629 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:07 crc kubenswrapper[4580]: I0112 13:08:07.926664 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:07 crc kubenswrapper[4580]: I0112 13:08:07.926675 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:07 crc kubenswrapper[4580]: I0112 13:08:07.926690 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:07 crc kubenswrapper[4580]: I0112 13:08:07.926699 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:07Z","lastTransitionTime":"2026-01-12T13:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:08 crc kubenswrapper[4580]: I0112 13:08:08.028702 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:08 crc kubenswrapper[4580]: I0112 13:08:08.028733 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:08 crc kubenswrapper[4580]: I0112 13:08:08.028741 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:08 crc kubenswrapper[4580]: I0112 13:08:08.028772 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:08 crc kubenswrapper[4580]: I0112 13:08:08.028802 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:08Z","lastTransitionTime":"2026-01-12T13:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:08 crc kubenswrapper[4580]: I0112 13:08:08.130499 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:08 crc kubenswrapper[4580]: I0112 13:08:08.130519 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:08 crc kubenswrapper[4580]: I0112 13:08:08.130527 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:08 crc kubenswrapper[4580]: I0112 13:08:08.130543 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:08 crc kubenswrapper[4580]: I0112 13:08:08.130552 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:08Z","lastTransitionTime":"2026-01-12T13:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:08 crc kubenswrapper[4580]: I0112 13:08:08.232132 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:08 crc kubenswrapper[4580]: I0112 13:08:08.232153 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:08 crc kubenswrapper[4580]: I0112 13:08:08.232163 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:08 crc kubenswrapper[4580]: I0112 13:08:08.232173 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:08 crc kubenswrapper[4580]: I0112 13:08:08.232182 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:08Z","lastTransitionTime":"2026-01-12T13:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:08:08 crc kubenswrapper[4580]: I0112 13:08:08.281149 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jw27h" Jan 12 13:08:08 crc kubenswrapper[4580]: E0112 13:08:08.281227 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jw27h" podUID="5066d8fa-2cee-4764-a817-b819d3876638" Jan 12 13:08:08 crc kubenswrapper[4580]: I0112 13:08:08.281305 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 12 13:08:08 crc kubenswrapper[4580]: E0112 13:08:08.281390 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 12 13:08:08 crc kubenswrapper[4580]: I0112 13:08:08.333538 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:08 crc kubenswrapper[4580]: I0112 13:08:08.333593 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:08 crc kubenswrapper[4580]: I0112 13:08:08.333605 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:08 crc kubenswrapper[4580]: I0112 13:08:08.333618 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:08 crc kubenswrapper[4580]: I0112 13:08:08.333628 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:08Z","lastTransitionTime":"2026-01-12T13:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:08 crc kubenswrapper[4580]: I0112 13:08:08.434778 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:08 crc kubenswrapper[4580]: I0112 13:08:08.434806 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:08 crc kubenswrapper[4580]: I0112 13:08:08.434813 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:08 crc kubenswrapper[4580]: I0112 13:08:08.434823 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:08 crc kubenswrapper[4580]: I0112 13:08:08.434831 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:08Z","lastTransitionTime":"2026-01-12T13:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:08 crc kubenswrapper[4580]: I0112 13:08:08.536720 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:08 crc kubenswrapper[4580]: I0112 13:08:08.536753 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:08 crc kubenswrapper[4580]: I0112 13:08:08.536762 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:08 crc kubenswrapper[4580]: I0112 13:08:08.536774 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:08 crc kubenswrapper[4580]: I0112 13:08:08.536783 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:08Z","lastTransitionTime":"2026-01-12T13:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:08 crc kubenswrapper[4580]: I0112 13:08:08.638589 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:08 crc kubenswrapper[4580]: I0112 13:08:08.638629 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:08 crc kubenswrapper[4580]: I0112 13:08:08.638637 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:08 crc kubenswrapper[4580]: I0112 13:08:08.638651 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:08 crc kubenswrapper[4580]: I0112 13:08:08.638663 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:08Z","lastTransitionTime":"2026-01-12T13:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:08 crc kubenswrapper[4580]: I0112 13:08:08.740746 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:08 crc kubenswrapper[4580]: I0112 13:08:08.740788 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:08 crc kubenswrapper[4580]: I0112 13:08:08.740797 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:08 crc kubenswrapper[4580]: I0112 13:08:08.740812 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:08 crc kubenswrapper[4580]: I0112 13:08:08.740822 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:08Z","lastTransitionTime":"2026-01-12T13:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:08 crc kubenswrapper[4580]: I0112 13:08:08.842837 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:08 crc kubenswrapper[4580]: I0112 13:08:08.842876 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:08 crc kubenswrapper[4580]: I0112 13:08:08.842887 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:08 crc kubenswrapper[4580]: I0112 13:08:08.842899 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:08 crc kubenswrapper[4580]: I0112 13:08:08.842908 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:08Z","lastTransitionTime":"2026-01-12T13:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:08 crc kubenswrapper[4580]: I0112 13:08:08.944442 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:08 crc kubenswrapper[4580]: I0112 13:08:08.944479 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:08 crc kubenswrapper[4580]: I0112 13:08:08.944489 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:08 crc kubenswrapper[4580]: I0112 13:08:08.944502 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:08 crc kubenswrapper[4580]: I0112 13:08:08.944514 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:08Z","lastTransitionTime":"2026-01-12T13:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:09 crc kubenswrapper[4580]: I0112 13:08:09.046590 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:09 crc kubenswrapper[4580]: I0112 13:08:09.046631 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:09 crc kubenswrapper[4580]: I0112 13:08:09.046639 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:09 crc kubenswrapper[4580]: I0112 13:08:09.046649 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:09 crc kubenswrapper[4580]: I0112 13:08:09.046657 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:09Z","lastTransitionTime":"2026-01-12T13:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:09 crc kubenswrapper[4580]: I0112 13:08:09.148491 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:09 crc kubenswrapper[4580]: I0112 13:08:09.148526 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:09 crc kubenswrapper[4580]: I0112 13:08:09.148556 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:09 crc kubenswrapper[4580]: I0112 13:08:09.148568 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:09 crc kubenswrapper[4580]: I0112 13:08:09.148576 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:09Z","lastTransitionTime":"2026-01-12T13:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:09 crc kubenswrapper[4580]: I0112 13:08:09.249917 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:09 crc kubenswrapper[4580]: I0112 13:08:09.249950 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:09 crc kubenswrapper[4580]: I0112 13:08:09.249961 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:09 crc kubenswrapper[4580]: I0112 13:08:09.249973 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:09 crc kubenswrapper[4580]: I0112 13:08:09.249982 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:09Z","lastTransitionTime":"2026-01-12T13:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:08:09 crc kubenswrapper[4580]: I0112 13:08:09.280962 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:08:09 crc kubenswrapper[4580]: E0112 13:08:09.281056 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 12 13:08:09 crc kubenswrapper[4580]: I0112 13:08:09.281176 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 12 13:08:09 crc kubenswrapper[4580]: E0112 13:08:09.281304 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 12 13:08:09 crc kubenswrapper[4580]: I0112 13:08:09.351510 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:09 crc kubenswrapper[4580]: I0112 13:08:09.351571 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:09 crc kubenswrapper[4580]: I0112 13:08:09.351582 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:09 crc kubenswrapper[4580]: I0112 13:08:09.351595 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:09 crc kubenswrapper[4580]: I0112 13:08:09.351605 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:09Z","lastTransitionTime":"2026-01-12T13:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:09 crc kubenswrapper[4580]: I0112 13:08:09.453421 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:09 crc kubenswrapper[4580]: I0112 13:08:09.453456 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:09 crc kubenswrapper[4580]: I0112 13:08:09.453465 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:09 crc kubenswrapper[4580]: I0112 13:08:09.453480 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:09 crc kubenswrapper[4580]: I0112 13:08:09.453490 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:09Z","lastTransitionTime":"2026-01-12T13:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:09 crc kubenswrapper[4580]: I0112 13:08:09.555014 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:09 crc kubenswrapper[4580]: I0112 13:08:09.555038 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:09 crc kubenswrapper[4580]: I0112 13:08:09.555048 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:09 crc kubenswrapper[4580]: I0112 13:08:09.555058 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:09 crc kubenswrapper[4580]: I0112 13:08:09.555066 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:09Z","lastTransitionTime":"2026-01-12T13:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:09 crc kubenswrapper[4580]: I0112 13:08:09.656864 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:09 crc kubenswrapper[4580]: I0112 13:08:09.656889 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:09 crc kubenswrapper[4580]: I0112 13:08:09.656898 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:09 crc kubenswrapper[4580]: I0112 13:08:09.656909 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:09 crc kubenswrapper[4580]: I0112 13:08:09.656916 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:09Z","lastTransitionTime":"2026-01-12T13:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:09 crc kubenswrapper[4580]: I0112 13:08:09.758189 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:09 crc kubenswrapper[4580]: I0112 13:08:09.758216 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:09 crc kubenswrapper[4580]: I0112 13:08:09.758224 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:09 crc kubenswrapper[4580]: I0112 13:08:09.758236 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:09 crc kubenswrapper[4580]: I0112 13:08:09.758245 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:09Z","lastTransitionTime":"2026-01-12T13:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:09 crc kubenswrapper[4580]: I0112 13:08:09.860478 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:09 crc kubenswrapper[4580]: I0112 13:08:09.860608 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:09 crc kubenswrapper[4580]: I0112 13:08:09.860688 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:09 crc kubenswrapper[4580]: I0112 13:08:09.860764 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:09 crc kubenswrapper[4580]: I0112 13:08:09.860853 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:09Z","lastTransitionTime":"2026-01-12T13:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:09 crc kubenswrapper[4580]: I0112 13:08:09.962397 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:09 crc kubenswrapper[4580]: I0112 13:08:09.962448 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:09 crc kubenswrapper[4580]: I0112 13:08:09.962458 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:09 crc kubenswrapper[4580]: I0112 13:08:09.962471 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:09 crc kubenswrapper[4580]: I0112 13:08:09.962480 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:09Z","lastTransitionTime":"2026-01-12T13:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:10 crc kubenswrapper[4580]: I0112 13:08:10.063585 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:10 crc kubenswrapper[4580]: I0112 13:08:10.063619 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:10 crc kubenswrapper[4580]: I0112 13:08:10.063628 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:10 crc kubenswrapper[4580]: I0112 13:08:10.063642 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:10 crc kubenswrapper[4580]: I0112 13:08:10.063652 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:10Z","lastTransitionTime":"2026-01-12T13:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:10 crc kubenswrapper[4580]: I0112 13:08:10.165512 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:10 crc kubenswrapper[4580]: I0112 13:08:10.165698 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:10 crc kubenswrapper[4580]: I0112 13:08:10.165792 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:10 crc kubenswrapper[4580]: I0112 13:08:10.165862 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:10 crc kubenswrapper[4580]: I0112 13:08:10.165916 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:10Z","lastTransitionTime":"2026-01-12T13:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:10 crc kubenswrapper[4580]: I0112 13:08:10.267662 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:10 crc kubenswrapper[4580]: I0112 13:08:10.267702 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:10 crc kubenswrapper[4580]: I0112 13:08:10.267711 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:10 crc kubenswrapper[4580]: I0112 13:08:10.267721 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:10 crc kubenswrapper[4580]: I0112 13:08:10.267729 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:10Z","lastTransitionTime":"2026-01-12T13:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:08:10 crc kubenswrapper[4580]: I0112 13:08:10.281317 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 12 13:08:10 crc kubenswrapper[4580]: E0112 13:08:10.281490 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 12 13:08:10 crc kubenswrapper[4580]: I0112 13:08:10.281326 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jw27h" Jan 12 13:08:10 crc kubenswrapper[4580]: E0112 13:08:10.281693 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jw27h" podUID="5066d8fa-2cee-4764-a817-b819d3876638" Jan 12 13:08:10 crc kubenswrapper[4580]: I0112 13:08:10.369302 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:10 crc kubenswrapper[4580]: I0112 13:08:10.369326 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:10 crc kubenswrapper[4580]: I0112 13:08:10.369334 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:10 crc kubenswrapper[4580]: I0112 13:08:10.369345 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:10 crc kubenswrapper[4580]: I0112 13:08:10.369353 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:10Z","lastTransitionTime":"2026-01-12T13:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:10 crc kubenswrapper[4580]: I0112 13:08:10.471411 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:10 crc kubenswrapper[4580]: I0112 13:08:10.471585 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:10 crc kubenswrapper[4580]: I0112 13:08:10.471646 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:10 crc kubenswrapper[4580]: I0112 13:08:10.471721 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:10 crc kubenswrapper[4580]: I0112 13:08:10.471793 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:10Z","lastTransitionTime":"2026-01-12T13:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:10 crc kubenswrapper[4580]: I0112 13:08:10.573373 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:10 crc kubenswrapper[4580]: I0112 13:08:10.573492 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:10 crc kubenswrapper[4580]: I0112 13:08:10.573556 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:10 crc kubenswrapper[4580]: I0112 13:08:10.573620 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:10 crc kubenswrapper[4580]: I0112 13:08:10.573686 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:10Z","lastTransitionTime":"2026-01-12T13:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:10 crc kubenswrapper[4580]: I0112 13:08:10.675350 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:10 crc kubenswrapper[4580]: I0112 13:08:10.675379 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:10 crc kubenswrapper[4580]: I0112 13:08:10.675387 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:10 crc kubenswrapper[4580]: I0112 13:08:10.675400 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:10 crc kubenswrapper[4580]: I0112 13:08:10.675409 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:10Z","lastTransitionTime":"2026-01-12T13:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:10 crc kubenswrapper[4580]: I0112 13:08:10.776920 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:10 crc kubenswrapper[4580]: I0112 13:08:10.776978 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:10 crc kubenswrapper[4580]: I0112 13:08:10.776987 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:10 crc kubenswrapper[4580]: I0112 13:08:10.776996 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:10 crc kubenswrapper[4580]: I0112 13:08:10.777022 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:10Z","lastTransitionTime":"2026-01-12T13:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:10 crc kubenswrapper[4580]: I0112 13:08:10.879778 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:10 crc kubenswrapper[4580]: I0112 13:08:10.879821 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:10 crc kubenswrapper[4580]: I0112 13:08:10.879831 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:10 crc kubenswrapper[4580]: I0112 13:08:10.879846 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:10 crc kubenswrapper[4580]: I0112 13:08:10.879861 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:10Z","lastTransitionTime":"2026-01-12T13:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:10 crc kubenswrapper[4580]: I0112 13:08:10.982185 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:10 crc kubenswrapper[4580]: I0112 13:08:10.982215 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:10 crc kubenswrapper[4580]: I0112 13:08:10.982225 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:10 crc kubenswrapper[4580]: I0112 13:08:10.982238 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:10 crc kubenswrapper[4580]: I0112 13:08:10.982245 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:10Z","lastTransitionTime":"2026-01-12T13:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:11 crc kubenswrapper[4580]: I0112 13:08:11.083881 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:11 crc kubenswrapper[4580]: I0112 13:08:11.083914 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:11 crc kubenswrapper[4580]: I0112 13:08:11.083924 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:11 crc kubenswrapper[4580]: I0112 13:08:11.083935 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:11 crc kubenswrapper[4580]: I0112 13:08:11.083943 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:11Z","lastTransitionTime":"2026-01-12T13:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:11 crc kubenswrapper[4580]: I0112 13:08:11.185791 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:11 crc kubenswrapper[4580]: I0112 13:08:11.185823 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:11 crc kubenswrapper[4580]: I0112 13:08:11.185832 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:11 crc kubenswrapper[4580]: I0112 13:08:11.185842 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:11 crc kubenswrapper[4580]: I0112 13:08:11.185851 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:11Z","lastTransitionTime":"2026-01-12T13:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:08:11 crc kubenswrapper[4580]: I0112 13:08:11.280888 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 12 13:08:11 crc kubenswrapper[4580]: I0112 13:08:11.281301 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:08:11 crc kubenswrapper[4580]: E0112 13:08:11.281476 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 12 13:08:11 crc kubenswrapper[4580]: E0112 13:08:11.281604 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 12 13:08:11 crc kubenswrapper[4580]: I0112 13:08:11.286729 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:11 crc kubenswrapper[4580]: I0112 13:08:11.286756 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:11 crc kubenswrapper[4580]: I0112 13:08:11.286765 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:11 crc kubenswrapper[4580]: I0112 13:08:11.286776 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:11 crc kubenswrapper[4580]: I0112 13:08:11.286783 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:11Z","lastTransitionTime":"2026-01-12T13:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:11 crc kubenswrapper[4580]: I0112 13:08:11.292796 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2p6r8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2223aac-784e-4653-8939-fcbd18c70ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81fbec7b59dcc9c80a97b122e2b0e738fbbfb3eafca1bf9989fe743f28573191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1dc0fffc41810cdb9a5eeb53b19f6a23d70a8133c6e12b19df575f86a55d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1dc0fffc41810cdb9a5eeb53b19f6a23d70a8133c6e12b19df575f86a55d18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab60600011f08831d514dad04b97fb6b587736b18b55b1bff9a33143b9a92997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://ab60600011f08831d514dad04b97fb6b587736b18b55b1bff9a33143b9a92997\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2709a93c305db448fb509fbbdf606c297b26f1ae08e6b9b05933c155f59416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff2709a93c305db448fb509fbbdf606c297b26f1ae08e6b9b05933c155f59416\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88f8708217fbcbf532b977d30ab903955722d04a00ba29ded44ce09610140e27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88f8708217fbcbf532b977d30ab903955722d04a00ba29ded44ce09610140e27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5844c48078cc7d6868f4ff81ac1a2bb878892529b11823ecabd49fad4aed60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e5844c48078cc7d6868f4ff81ac1a2bb878892529b11823ecabd49fad4aed60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d2e02e66890bca8171c7112c74521a43c3458f07890228426f04c2bdfad4599\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d2e02e66890bca8171c7112c74521a43c3458f07890228426f04c2bdfad4599\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:07:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcrjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2p6r8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:08:11Z is after 2025-08-24T17:21:41Z" Jan 12 13:08:11 crc kubenswrapper[4580]: I0112 13:08:11.300690 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vmmdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61051313-b754-4528-ade6-ffacbebafb8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a321f1ea1e9a558494aa66641fd251a100e0bdceddf5b2034bfa067c23555138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsss4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14a151ee487ef6c2e5141ec5a25b8b7e468c224b262fd09538db0e939b8cf95a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsss4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:07:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vmmdr\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:08:11Z is after 2025-08-24T17:21:41Z" Jan 12 13:08:11 crc kubenswrapper[4580]: I0112 13:08:11.308042 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef8cac12-1e6a-4a4a-ae3f-5889f4afa7dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc9529b959d5f791fccd83f001f142471328c468307cab794dfa65420bd9c2a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f66cd35f34bfe89fb3152f0fbd65fc1dac84795ef724fb2b38ea49da1455c5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f66cd35f34bfe89fb3152f0fbd65fc1dac84795ef724fb2b38ea49da1455c5d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:08:11Z is after 2025-08-24T17:21:41Z" Jan 12 13:08:11 crc kubenswrapper[4580]: I0112 13:08:11.315588 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a82c47afb3ec7afc7fa35ff0e1e85e288f9e1a908459024005a16c0c8f3b0050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-12T13:08:11Z is after 2025-08-24T17:21:41Z" Jan 12 13:08:11 crc kubenswrapper[4580]: I0112 13:08:11.322385 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3accce5d840e81a67e212ff934059ad73525c6ff3c73ed6ab4c6e2289a4d7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60b7e67369583f18d56633483204d326449c0f7456afe4b4fd1e7134eff438cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whmh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hdz6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:08:11Z is after 2025-08-24T17:21:41Z" Jan 12 13:08:11 crc kubenswrapper[4580]: I0112 13:08:11.330045 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:08:11Z is after 2025-08-24T17:21:41Z" Jan 12 13:08:11 crc kubenswrapper[4580]: I0112 13:08:11.337173 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:08:11Z is after 2025-08-24T17:21:41Z" Jan 12 13:08:11 crc kubenswrapper[4580]: I0112 13:08:11.349054 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35b1ac8c-9d11-4c54-98ab-fa848030f05e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-12T13:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1088ffa1a5bf02ca8606518a6f8c9cbeba544651dfafbb34e8860c2a12ffc1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c98177e2b081aadb6fd03620e308bb5d9ff403f1498eb875f7cf6d836dd23aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cea93cd026e7a60c22105833d2c3ada192fc16d45f46e5c9ce2652e94f92fab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c811167080fb15b5c19b8b57f76f4b8c5b2ed87d43d1b320ad024683ab58b65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14411e27d1e7de0627ca0d6f0ecbca70787ef8e9311ff3ffbb923da942e47955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://200ede5d7f69bb74d8e7d1b5081850d73057f7aef07049cab7a4dd1382de0cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://200ede5d7f69bb74d8e7d1b5081850d73057f7aef07049cab7a4dd1382de0cfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04470dc724661e24dc43e182f9c5dc106623e8dfb269280e6dc0fc0710f6a4a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04470dc724661e24dc43e182f9c5dc106623e8dfb269280e6dc0fc0710f6a4a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da31efcbced890b1046b1f058c1c00e4d2788162749c1da32d87c8b59360aa58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da31efcbced890b1046b1f058c1c00e4d2788162749c1da32d87c8b59360aa58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-12T13:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-12T13:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-12T13:06:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:08:11Z is after 2025-08-24T17:21:41Z" Jan 12 13:08:11 crc kubenswrapper[4580]: I0112 13:08:11.358463 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fb543f1489aa79642944188788308013ed9b6bacb720a3ee689b376cbc6a33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:08:11Z is after 2025-08-24T17:21:41Z" Jan 12 13:08:11 crc kubenswrapper[4580]: I0112 13:08:11.366459 4580 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-12T13:07:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e120eaa6bd8e36a0bc509f7877252fbf4b0cebb89222dd193f75502e472fa7af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f05ca3c8a1887284f1162c44d1b917ad955eb8d77b816e830caddffdf0430383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-12T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-12T13:08:11Z is after 2025-08-24T17:21:41Z" Jan 12 13:08:11 crc kubenswrapper[4580]: I0112 13:08:11.380677 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=41.380668961 podStartE2EDuration="41.380668961s" podCreationTimestamp="2026-01-12 13:07:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:08:11.38060503 +0000 UTC m=+90.424823721" watchObservedRunningTime="2026-01-12 13:08:11.380668961 +0000 UTC m=+90.424887651" Jan 12 13:08:11 crc kubenswrapper[4580]: I0112 13:08:11.388647 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:11 crc kubenswrapper[4580]: I0112 13:08:11.388669 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 12 13:08:11 crc kubenswrapper[4580]: I0112 13:08:11.388677 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:11 crc kubenswrapper[4580]: I0112 13:08:11.388691 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:11 crc kubenswrapper[4580]: I0112 13:08:11.388701 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:11Z","lastTransitionTime":"2026-01-12T13:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:08:11 crc kubenswrapper[4580]: I0112 13:08:11.395801 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-8ch98" podStartSLOduration=72.395791435 podStartE2EDuration="1m12.395791435s" podCreationTimestamp="2026-01-12 13:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:08:11.389122292 +0000 UTC m=+90.433340982" watchObservedRunningTime="2026-01-12 13:08:11.395791435 +0000 UTC m=+90.440010125" Jan 12 13:08:11 crc kubenswrapper[4580]: I0112 13:08:11.404931 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-nnz5s" podStartSLOduration=72.404916437 podStartE2EDuration="1m12.404916437s" podCreationTimestamp="2026-01-12 13:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:08:11.404722269 +0000 UTC m=+90.448940959" watchObservedRunningTime="2026-01-12 13:08:11.404916437 +0000 UTC m=+90.449135127" Jan 
12 13:08:11 crc kubenswrapper[4580]: I0112 13:08:11.426900 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-thp2h" podStartSLOduration=71.426890468 podStartE2EDuration="1m11.426890468s" podCreationTimestamp="2026-01-12 13:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:08:11.426053634 +0000 UTC m=+90.470272325" watchObservedRunningTime="2026-01-12 13:08:11.426890468 +0000 UTC m=+90.471109158" Jan 12 13:08:11 crc kubenswrapper[4580]: I0112 13:08:11.437538 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=72.437527081 podStartE2EDuration="1m12.437527081s" podCreationTimestamp="2026-01-12 13:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:08:11.437264053 +0000 UTC m=+90.481482743" watchObservedRunningTime="2026-01-12 13:08:11.437527081 +0000 UTC m=+90.481745771" Jan 12 13:08:11 crc kubenswrapper[4580]: I0112 13:08:11.455904 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=70.455894607 podStartE2EDuration="1m10.455894607s" podCreationTimestamp="2026-01-12 13:07:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:08:11.455464101 +0000 UTC m=+90.499682791" watchObservedRunningTime="2026-01-12 13:08:11.455894607 +0000 UTC m=+90.500113296" Jan 12 13:08:11 crc kubenswrapper[4580]: I0112 13:08:11.490699 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:11 crc kubenswrapper[4580]: I0112 13:08:11.490730 4580 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:11 crc kubenswrapper[4580]: I0112 13:08:11.490740 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:11 crc kubenswrapper[4580]: I0112 13:08:11.490753 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:11 crc kubenswrapper[4580]: I0112 13:08:11.490762 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:11Z","lastTransitionTime":"2026-01-12T13:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:08:11 crc kubenswrapper[4580]: I0112 13:08:11.593265 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:11 crc kubenswrapper[4580]: I0112 13:08:11.593306 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:11 crc kubenswrapper[4580]: I0112 13:08:11.593316 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:11 crc kubenswrapper[4580]: I0112 13:08:11.593329 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:11 crc kubenswrapper[4580]: I0112 13:08:11.593338 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:11Z","lastTransitionTime":"2026-01-12T13:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:08:11 crc kubenswrapper[4580]: I0112 13:08:11.694877 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:11 crc kubenswrapper[4580]: I0112 13:08:11.694901 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:11 crc kubenswrapper[4580]: I0112 13:08:11.694909 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:11 crc kubenswrapper[4580]: I0112 13:08:11.694920 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:11 crc kubenswrapper[4580]: I0112 13:08:11.694930 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:11Z","lastTransitionTime":"2026-01-12T13:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:11 crc kubenswrapper[4580]: I0112 13:08:11.796529 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:11 crc kubenswrapper[4580]: I0112 13:08:11.796568 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:11 crc kubenswrapper[4580]: I0112 13:08:11.796576 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:11 crc kubenswrapper[4580]: I0112 13:08:11.796590 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:11 crc kubenswrapper[4580]: I0112 13:08:11.796600 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:11Z","lastTransitionTime":"2026-01-12T13:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:11 crc kubenswrapper[4580]: I0112 13:08:11.898608 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:11 crc kubenswrapper[4580]: I0112 13:08:11.898720 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:11 crc kubenswrapper[4580]: I0112 13:08:11.898783 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:11 crc kubenswrapper[4580]: I0112 13:08:11.898858 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:11 crc kubenswrapper[4580]: I0112 13:08:11.898924 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:11Z","lastTransitionTime":"2026-01-12T13:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:12 crc kubenswrapper[4580]: I0112 13:08:12.000423 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:12 crc kubenswrapper[4580]: I0112 13:08:12.000447 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:12 crc kubenswrapper[4580]: I0112 13:08:12.000456 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:12 crc kubenswrapper[4580]: I0112 13:08:12.000467 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:12 crc kubenswrapper[4580]: I0112 13:08:12.000475 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:12Z","lastTransitionTime":"2026-01-12T13:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:12 crc kubenswrapper[4580]: I0112 13:08:12.101697 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:12 crc kubenswrapper[4580]: I0112 13:08:12.101829 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:12 crc kubenswrapper[4580]: I0112 13:08:12.101899 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:12 crc kubenswrapper[4580]: I0112 13:08:12.101964 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:12 crc kubenswrapper[4580]: I0112 13:08:12.102034 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:12Z","lastTransitionTime":"2026-01-12T13:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:12 crc kubenswrapper[4580]: I0112 13:08:12.203441 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:12 crc kubenswrapper[4580]: I0112 13:08:12.203477 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:12 crc kubenswrapper[4580]: I0112 13:08:12.203487 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:12 crc kubenswrapper[4580]: I0112 13:08:12.203501 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:12 crc kubenswrapper[4580]: I0112 13:08:12.203510 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:12Z","lastTransitionTime":"2026-01-12T13:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:08:12 crc kubenswrapper[4580]: I0112 13:08:12.280769 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 12 13:08:12 crc kubenswrapper[4580]: E0112 13:08:12.280872 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 12 13:08:12 crc kubenswrapper[4580]: I0112 13:08:12.281026 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jw27h" Jan 12 13:08:12 crc kubenswrapper[4580]: E0112 13:08:12.281191 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jw27h" podUID="5066d8fa-2cee-4764-a817-b819d3876638" Jan 12 13:08:12 crc kubenswrapper[4580]: I0112 13:08:12.305248 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:12 crc kubenswrapper[4580]: I0112 13:08:12.305280 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:12 crc kubenswrapper[4580]: I0112 13:08:12.305291 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:12 crc kubenswrapper[4580]: I0112 13:08:12.305304 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:12 crc kubenswrapper[4580]: I0112 13:08:12.305313 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:12Z","lastTransitionTime":"2026-01-12T13:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:12 crc kubenswrapper[4580]: I0112 13:08:12.406867 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:12 crc kubenswrapper[4580]: I0112 13:08:12.406900 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:12 crc kubenswrapper[4580]: I0112 13:08:12.406911 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:12 crc kubenswrapper[4580]: I0112 13:08:12.406923 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:12 crc kubenswrapper[4580]: I0112 13:08:12.406931 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:12Z","lastTransitionTime":"2026-01-12T13:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:12 crc kubenswrapper[4580]: I0112 13:08:12.509087 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:12 crc kubenswrapper[4580]: I0112 13:08:12.509139 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:12 crc kubenswrapper[4580]: I0112 13:08:12.509149 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:12 crc kubenswrapper[4580]: I0112 13:08:12.509160 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:12 crc kubenswrapper[4580]: I0112 13:08:12.509168 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:12Z","lastTransitionTime":"2026-01-12T13:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:12 crc kubenswrapper[4580]: I0112 13:08:12.610863 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:12 crc kubenswrapper[4580]: I0112 13:08:12.610890 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:12 crc kubenswrapper[4580]: I0112 13:08:12.610900 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:12 crc kubenswrapper[4580]: I0112 13:08:12.610910 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:12 crc kubenswrapper[4580]: I0112 13:08:12.610918 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:12Z","lastTransitionTime":"2026-01-12T13:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:12 crc kubenswrapper[4580]: I0112 13:08:12.712879 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:12 crc kubenswrapper[4580]: I0112 13:08:12.712909 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:12 crc kubenswrapper[4580]: I0112 13:08:12.712919 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:12 crc kubenswrapper[4580]: I0112 13:08:12.712930 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:12 crc kubenswrapper[4580]: I0112 13:08:12.712940 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:12Z","lastTransitionTime":"2026-01-12T13:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:12 crc kubenswrapper[4580]: I0112 13:08:12.814397 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:12 crc kubenswrapper[4580]: I0112 13:08:12.814512 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:12 crc kubenswrapper[4580]: I0112 13:08:12.814580 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:12 crc kubenswrapper[4580]: I0112 13:08:12.814653 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:12 crc kubenswrapper[4580]: I0112 13:08:12.814707 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:12Z","lastTransitionTime":"2026-01-12T13:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:12 crc kubenswrapper[4580]: I0112 13:08:12.916835 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:12 crc kubenswrapper[4580]: I0112 13:08:12.916871 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:12 crc kubenswrapper[4580]: I0112 13:08:12.916882 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:12 crc kubenswrapper[4580]: I0112 13:08:12.916896 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:12 crc kubenswrapper[4580]: I0112 13:08:12.916904 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:12Z","lastTransitionTime":"2026-01-12T13:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:13 crc kubenswrapper[4580]: I0112 13:08:13.018033 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:13 crc kubenswrapper[4580]: I0112 13:08:13.018063 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:13 crc kubenswrapper[4580]: I0112 13:08:13.018071 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:13 crc kubenswrapper[4580]: I0112 13:08:13.018081 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:13 crc kubenswrapper[4580]: I0112 13:08:13.018089 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:13Z","lastTransitionTime":"2026-01-12T13:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:13 crc kubenswrapper[4580]: I0112 13:08:13.119841 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:13 crc kubenswrapper[4580]: I0112 13:08:13.119870 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:13 crc kubenswrapper[4580]: I0112 13:08:13.119880 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:13 crc kubenswrapper[4580]: I0112 13:08:13.119892 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:13 crc kubenswrapper[4580]: I0112 13:08:13.119900 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:13Z","lastTransitionTime":"2026-01-12T13:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:13 crc kubenswrapper[4580]: I0112 13:08:13.222034 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:13 crc kubenswrapper[4580]: I0112 13:08:13.222055 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:13 crc kubenswrapper[4580]: I0112 13:08:13.222063 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:13 crc kubenswrapper[4580]: I0112 13:08:13.222073 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:13 crc kubenswrapper[4580]: I0112 13:08:13.222081 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:13Z","lastTransitionTime":"2026-01-12T13:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:08:13 crc kubenswrapper[4580]: I0112 13:08:13.281704 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:08:13 crc kubenswrapper[4580]: E0112 13:08:13.281848 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 12 13:08:13 crc kubenswrapper[4580]: I0112 13:08:13.281896 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 12 13:08:13 crc kubenswrapper[4580]: E0112 13:08:13.282241 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 12 13:08:13 crc kubenswrapper[4580]: I0112 13:08:13.324066 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:13 crc kubenswrapper[4580]: I0112 13:08:13.324088 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:13 crc kubenswrapper[4580]: I0112 13:08:13.324096 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:13 crc kubenswrapper[4580]: I0112 13:08:13.324125 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:13 crc kubenswrapper[4580]: I0112 13:08:13.324134 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:13Z","lastTransitionTime":"2026-01-12T13:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:13 crc kubenswrapper[4580]: I0112 13:08:13.425929 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:13 crc kubenswrapper[4580]: I0112 13:08:13.425962 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:13 crc kubenswrapper[4580]: I0112 13:08:13.425972 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:13 crc kubenswrapper[4580]: I0112 13:08:13.425983 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:13 crc kubenswrapper[4580]: I0112 13:08:13.425999 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:13Z","lastTransitionTime":"2026-01-12T13:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 12 13:08:13 crc kubenswrapper[4580]: I0112 13:08:13.509788 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 12 13:08:13 crc kubenswrapper[4580]: I0112 13:08:13.509818 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 12 13:08:13 crc kubenswrapper[4580]: I0112 13:08:13.509827 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 12 13:08:13 crc kubenswrapper[4580]: I0112 13:08:13.509837 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 12 13:08:13 crc kubenswrapper[4580]: I0112 13:08:13.509847 4580 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-12T13:08:13Z","lastTransitionTime":"2026-01-12T13:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 12 13:08:13 crc kubenswrapper[4580]: I0112 13:08:13.541685 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-bpghc"] Jan 12 13:08:13 crc kubenswrapper[4580]: I0112 13:08:13.542013 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bpghc" Jan 12 13:08:13 crc kubenswrapper[4580]: I0112 13:08:13.543418 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 12 13:08:13 crc kubenswrapper[4580]: I0112 13:08:13.543717 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 12 13:08:13 crc kubenswrapper[4580]: I0112 13:08:13.543759 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 12 13:08:13 crc kubenswrapper[4580]: I0112 13:08:13.544547 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 12 13:08:13 crc kubenswrapper[4580]: I0112 13:08:13.554573 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=10.554561332 podStartE2EDuration="10.554561332s" podCreationTimestamp="2026-01-12 13:08:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:08:13.554139705 +0000 UTC m=+92.598358395" watchObservedRunningTime="2026-01-12 13:08:13.554561332 +0000 UTC m=+92.598780023" Jan 12 13:08:13 crc kubenswrapper[4580]: I0112 13:08:13.569492 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podStartSLOduration=74.569482645 podStartE2EDuration="1m14.569482645s" podCreationTimestamp="2026-01-12 13:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:08:13.569411961 +0000 UTC m=+92.613630652" watchObservedRunningTime="2026-01-12 
13:08:13.569482645 +0000 UTC m=+92.613701335" Jan 12 13:08:13 crc kubenswrapper[4580]: I0112 13:08:13.580662 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-2p6r8" podStartSLOduration=74.580649522 podStartE2EDuration="1m14.580649522s" podCreationTimestamp="2026-01-12 13:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:08:13.580087157 +0000 UTC m=+92.624305847" watchObservedRunningTime="2026-01-12 13:08:13.580649522 +0000 UTC m=+92.624868212" Jan 12 13:08:13 crc kubenswrapper[4580]: I0112 13:08:13.584164 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/5466f611-6e35-47a2-97ae-eb2d1c5afd2b-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-bpghc\" (UID: \"5466f611-6e35-47a2-97ae-eb2d1c5afd2b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bpghc" Jan 12 13:08:13 crc kubenswrapper[4580]: I0112 13:08:13.584212 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5466f611-6e35-47a2-97ae-eb2d1c5afd2b-service-ca\") pod \"cluster-version-operator-5c965bbfc6-bpghc\" (UID: \"5466f611-6e35-47a2-97ae-eb2d1c5afd2b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bpghc" Jan 12 13:08:13 crc kubenswrapper[4580]: I0112 13:08:13.584228 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5466f611-6e35-47a2-97ae-eb2d1c5afd2b-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-bpghc\" (UID: \"5466f611-6e35-47a2-97ae-eb2d1c5afd2b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bpghc" Jan 12 13:08:13 crc 
kubenswrapper[4580]: I0112 13:08:13.584244 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5466f611-6e35-47a2-97ae-eb2d1c5afd2b-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-bpghc\" (UID: \"5466f611-6e35-47a2-97ae-eb2d1c5afd2b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bpghc" Jan 12 13:08:13 crc kubenswrapper[4580]: I0112 13:08:13.584261 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/5466f611-6e35-47a2-97ae-eb2d1c5afd2b-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-bpghc\" (UID: \"5466f611-6e35-47a2-97ae-eb2d1c5afd2b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bpghc" Jan 12 13:08:13 crc kubenswrapper[4580]: I0112 13:08:13.589433 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vmmdr" podStartSLOduration=73.589421074 podStartE2EDuration="1m13.589421074s" podCreationTimestamp="2026-01-12 13:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:08:13.589034112 +0000 UTC m=+92.633252801" watchObservedRunningTime="2026-01-12 13:08:13.589421074 +0000 UTC m=+92.633639764" Jan 12 13:08:13 crc kubenswrapper[4580]: I0112 13:08:13.616874 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=70.616861212 podStartE2EDuration="1m10.616861212s" podCreationTimestamp="2026-01-12 13:07:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:08:13.614553594 +0000 UTC m=+92.658772284" watchObservedRunningTime="2026-01-12 
13:08:13.616861212 +0000 UTC m=+92.661079902" Jan 12 13:08:13 crc kubenswrapper[4580]: I0112 13:08:13.684666 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/5466f611-6e35-47a2-97ae-eb2d1c5afd2b-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-bpghc\" (UID: \"5466f611-6e35-47a2-97ae-eb2d1c5afd2b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bpghc" Jan 12 13:08:13 crc kubenswrapper[4580]: I0112 13:08:13.684717 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5466f611-6e35-47a2-97ae-eb2d1c5afd2b-service-ca\") pod \"cluster-version-operator-5c965bbfc6-bpghc\" (UID: \"5466f611-6e35-47a2-97ae-eb2d1c5afd2b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bpghc" Jan 12 13:08:13 crc kubenswrapper[4580]: I0112 13:08:13.684734 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5466f611-6e35-47a2-97ae-eb2d1c5afd2b-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-bpghc\" (UID: \"5466f611-6e35-47a2-97ae-eb2d1c5afd2b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bpghc" Jan 12 13:08:13 crc kubenswrapper[4580]: I0112 13:08:13.684752 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5466f611-6e35-47a2-97ae-eb2d1c5afd2b-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-bpghc\" (UID: \"5466f611-6e35-47a2-97ae-eb2d1c5afd2b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bpghc" Jan 12 13:08:13 crc kubenswrapper[4580]: I0112 13:08:13.684767 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/5466f611-6e35-47a2-97ae-eb2d1c5afd2b-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-bpghc\" (UID: \"5466f611-6e35-47a2-97ae-eb2d1c5afd2b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bpghc" Jan 12 13:08:13 crc kubenswrapper[4580]: I0112 13:08:13.684799 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/5466f611-6e35-47a2-97ae-eb2d1c5afd2b-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-bpghc\" (UID: \"5466f611-6e35-47a2-97ae-eb2d1c5afd2b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bpghc" Jan 12 13:08:13 crc kubenswrapper[4580]: I0112 13:08:13.684822 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/5466f611-6e35-47a2-97ae-eb2d1c5afd2b-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-bpghc\" (UID: \"5466f611-6e35-47a2-97ae-eb2d1c5afd2b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bpghc" Jan 12 13:08:13 crc kubenswrapper[4580]: I0112 13:08:13.685576 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5466f611-6e35-47a2-97ae-eb2d1c5afd2b-service-ca\") pod \"cluster-version-operator-5c965bbfc6-bpghc\" (UID: \"5466f611-6e35-47a2-97ae-eb2d1c5afd2b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bpghc" Jan 12 13:08:13 crc kubenswrapper[4580]: I0112 13:08:13.689643 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5466f611-6e35-47a2-97ae-eb2d1c5afd2b-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-bpghc\" (UID: \"5466f611-6e35-47a2-97ae-eb2d1c5afd2b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bpghc" Jan 12 13:08:13 crc kubenswrapper[4580]: 
I0112 13:08:13.697575 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5466f611-6e35-47a2-97ae-eb2d1c5afd2b-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-bpghc\" (UID: \"5466f611-6e35-47a2-97ae-eb2d1c5afd2b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bpghc" Jan 12 13:08:13 crc kubenswrapper[4580]: I0112 13:08:13.852914 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bpghc" Jan 12 13:08:14 crc kubenswrapper[4580]: I0112 13:08:14.281400 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 12 13:08:14 crc kubenswrapper[4580]: I0112 13:08:14.281423 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jw27h" Jan 12 13:08:14 crc kubenswrapper[4580]: E0112 13:08:14.281993 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jw27h" podUID="5066d8fa-2cee-4764-a817-b819d3876638" Jan 12 13:08:14 crc kubenswrapper[4580]: E0112 13:08:14.281909 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 12 13:08:14 crc kubenswrapper[4580]: I0112 13:08:14.729575 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bpghc" event={"ID":"5466f611-6e35-47a2-97ae-eb2d1c5afd2b","Type":"ContainerStarted","Data":"76c4187aec917fb84b565ff4990804ac050bd558612a5fe16735d1473aa8c6b5"} Jan 12 13:08:14 crc kubenswrapper[4580]: I0112 13:08:14.729622 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bpghc" event={"ID":"5466f611-6e35-47a2-97ae-eb2d1c5afd2b","Type":"ContainerStarted","Data":"63e02389388612f2f7dd4ed73a7f761330d1acd7aeebd177cbba07337fb15b24"} Jan 12 13:08:14 crc kubenswrapper[4580]: I0112 13:08:14.740141 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bpghc" podStartSLOduration=75.740132508 podStartE2EDuration="1m15.740132508s" podCreationTimestamp="2026-01-12 13:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:08:14.738932417 +0000 UTC m=+93.783151117" watchObservedRunningTime="2026-01-12 13:08:14.740132508 +0000 UTC m=+93.784351198" Jan 12 13:08:15 crc kubenswrapper[4580]: I0112 13:08:15.280979 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 12 13:08:15 crc kubenswrapper[4580]: I0112 13:08:15.281025 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:08:15 crc kubenswrapper[4580]: E0112 13:08:15.281091 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 12 13:08:15 crc kubenswrapper[4580]: E0112 13:08:15.281252 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 12 13:08:16 crc kubenswrapper[4580]: I0112 13:08:16.280904 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jw27h" Jan 12 13:08:16 crc kubenswrapper[4580]: I0112 13:08:16.280911 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 12 13:08:16 crc kubenswrapper[4580]: E0112 13:08:16.281002 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jw27h" podUID="5066d8fa-2cee-4764-a817-b819d3876638" Jan 12 13:08:16 crc kubenswrapper[4580]: E0112 13:08:16.281131 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 12 13:08:16 crc kubenswrapper[4580]: I0112 13:08:16.906631 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5066d8fa-2cee-4764-a817-b819d3876638-metrics-certs\") pod \"network-metrics-daemon-jw27h\" (UID: \"5066d8fa-2cee-4764-a817-b819d3876638\") " pod="openshift-multus/network-metrics-daemon-jw27h" Jan 12 13:08:16 crc kubenswrapper[4580]: E0112 13:08:16.906749 4580 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 12 13:08:16 crc kubenswrapper[4580]: E0112 13:08:16.906800 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5066d8fa-2cee-4764-a817-b819d3876638-metrics-certs podName:5066d8fa-2cee-4764-a817-b819d3876638 nodeName:}" failed. No retries permitted until 2026-01-12 13:09:20.906785247 +0000 UTC m=+159.951003938 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5066d8fa-2cee-4764-a817-b819d3876638-metrics-certs") pod "network-metrics-daemon-jw27h" (UID: "5066d8fa-2cee-4764-a817-b819d3876638") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 12 13:08:17 crc kubenswrapper[4580]: I0112 13:08:17.281623 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 12 13:08:17 crc kubenswrapper[4580]: I0112 13:08:17.281650 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:08:17 crc kubenswrapper[4580]: E0112 13:08:17.281818 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 12 13:08:17 crc kubenswrapper[4580]: E0112 13:08:17.281864 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 12 13:08:18 crc kubenswrapper[4580]: I0112 13:08:18.281216 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 12 13:08:18 crc kubenswrapper[4580]: E0112 13:08:18.281309 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 12 13:08:18 crc kubenswrapper[4580]: I0112 13:08:18.281233 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jw27h" Jan 12 13:08:18 crc kubenswrapper[4580]: E0112 13:08:18.281379 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jw27h" podUID="5066d8fa-2cee-4764-a817-b819d3876638" Jan 12 13:08:19 crc kubenswrapper[4580]: I0112 13:08:19.281176 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:08:19 crc kubenswrapper[4580]: E0112 13:08:19.281282 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 12 13:08:19 crc kubenswrapper[4580]: I0112 13:08:19.281665 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 12 13:08:19 crc kubenswrapper[4580]: E0112 13:08:19.281873 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 12 13:08:19 crc kubenswrapper[4580]: I0112 13:08:19.281902 4580 scope.go:117] "RemoveContainer" containerID="20f47854f29c7f82bcbae567770052204b7fa2c092168c57ef54e14218812b98" Jan 12 13:08:19 crc kubenswrapper[4580]: E0112 13:08:19.282039 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-hn77p_openshift-ovn-kubernetes(fd4e0810-eddb-47f5-a7dc-beed7b545112)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" podUID="fd4e0810-eddb-47f5-a7dc-beed7b545112" Jan 12 13:08:20 crc kubenswrapper[4580]: I0112 13:08:20.281399 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 12 13:08:20 crc kubenswrapper[4580]: I0112 13:08:20.281507 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jw27h" Jan 12 13:08:20 crc kubenswrapper[4580]: E0112 13:08:20.281626 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 12 13:08:20 crc kubenswrapper[4580]: E0112 13:08:20.281686 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jw27h" podUID="5066d8fa-2cee-4764-a817-b819d3876638" Jan 12 13:08:21 crc kubenswrapper[4580]: I0112 13:08:21.281158 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 12 13:08:21 crc kubenswrapper[4580]: I0112 13:08:21.281166 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:08:21 crc kubenswrapper[4580]: E0112 13:08:21.281897 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 12 13:08:21 crc kubenswrapper[4580]: E0112 13:08:21.281946 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 12 13:08:22 crc kubenswrapper[4580]: I0112 13:08:22.280856 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 12 13:08:22 crc kubenswrapper[4580]: E0112 13:08:22.280930 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 12 13:08:22 crc kubenswrapper[4580]: I0112 13:08:22.280869 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jw27h" Jan 12 13:08:22 crc kubenswrapper[4580]: E0112 13:08:22.281010 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jw27h" podUID="5066d8fa-2cee-4764-a817-b819d3876638" Jan 12 13:08:23 crc kubenswrapper[4580]: I0112 13:08:23.281062 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 12 13:08:23 crc kubenswrapper[4580]: E0112 13:08:23.281187 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 12 13:08:23 crc kubenswrapper[4580]: I0112 13:08:23.281250 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:08:23 crc kubenswrapper[4580]: E0112 13:08:23.281570 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 12 13:08:24 crc kubenswrapper[4580]: I0112 13:08:24.281078 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 12 13:08:24 crc kubenswrapper[4580]: I0112 13:08:24.281151 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jw27h" Jan 12 13:08:24 crc kubenswrapper[4580]: E0112 13:08:24.281198 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 12 13:08:24 crc kubenswrapper[4580]: E0112 13:08:24.281286 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jw27h" podUID="5066d8fa-2cee-4764-a817-b819d3876638" Jan 12 13:08:25 crc kubenswrapper[4580]: I0112 13:08:25.280894 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:08:25 crc kubenswrapper[4580]: I0112 13:08:25.280944 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 12 13:08:25 crc kubenswrapper[4580]: E0112 13:08:25.280989 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 12 13:08:25 crc kubenswrapper[4580]: E0112 13:08:25.281238 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 12 13:08:26 crc kubenswrapper[4580]: I0112 13:08:26.281635 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 12 13:08:26 crc kubenswrapper[4580]: I0112 13:08:26.281681 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jw27h" Jan 12 13:08:26 crc kubenswrapper[4580]: E0112 13:08:26.281728 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 12 13:08:26 crc kubenswrapper[4580]: E0112 13:08:26.281768 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jw27h" podUID="5066d8fa-2cee-4764-a817-b819d3876638" Jan 12 13:08:27 crc kubenswrapper[4580]: I0112 13:08:27.280738 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 12 13:08:27 crc kubenswrapper[4580]: E0112 13:08:27.280836 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 12 13:08:27 crc kubenswrapper[4580]: I0112 13:08:27.281021 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:08:27 crc kubenswrapper[4580]: E0112 13:08:27.281286 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 12 13:08:28 crc kubenswrapper[4580]: I0112 13:08:28.281209 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jw27h" Jan 12 13:08:28 crc kubenswrapper[4580]: I0112 13:08:28.281237 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 12 13:08:28 crc kubenswrapper[4580]: E0112 13:08:28.281316 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jw27h" podUID="5066d8fa-2cee-4764-a817-b819d3876638" Jan 12 13:08:28 crc kubenswrapper[4580]: E0112 13:08:28.281394 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 12 13:08:29 crc kubenswrapper[4580]: I0112 13:08:29.280873 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 12 13:08:29 crc kubenswrapper[4580]: I0112 13:08:29.280905 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:08:29 crc kubenswrapper[4580]: E0112 13:08:29.281016 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 12 13:08:29 crc kubenswrapper[4580]: E0112 13:08:29.281139 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 12 13:08:30 crc kubenswrapper[4580]: I0112 13:08:30.280726 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 12 13:08:30 crc kubenswrapper[4580]: I0112 13:08:30.280754 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jw27h" Jan 12 13:08:30 crc kubenswrapper[4580]: E0112 13:08:30.280820 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 12 13:08:30 crc kubenswrapper[4580]: E0112 13:08:30.280895 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jw27h" podUID="5066d8fa-2cee-4764-a817-b819d3876638" Jan 12 13:08:31 crc kubenswrapper[4580]: I0112 13:08:31.281968 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:08:31 crc kubenswrapper[4580]: E0112 13:08:31.282068 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 12 13:08:31 crc kubenswrapper[4580]: I0112 13:08:31.282262 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 12 13:08:31 crc kubenswrapper[4580]: E0112 13:08:31.282312 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 12 13:08:32 crc kubenswrapper[4580]: I0112 13:08:32.280935 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 12 13:08:32 crc kubenswrapper[4580]: E0112 13:08:32.281185 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 12 13:08:32 crc kubenswrapper[4580]: I0112 13:08:32.280948 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jw27h" Jan 12 13:08:32 crc kubenswrapper[4580]: E0112 13:08:32.281357 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jw27h" podUID="5066d8fa-2cee-4764-a817-b819d3876638" Jan 12 13:08:32 crc kubenswrapper[4580]: I0112 13:08:32.777977 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nnz5s_c8f39bcc-5a25-4746-988b-2251fd1be8c9/kube-multus/1.log" Jan 12 13:08:32 crc kubenswrapper[4580]: I0112 13:08:32.778720 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nnz5s_c8f39bcc-5a25-4746-988b-2251fd1be8c9/kube-multus/0.log" Jan 12 13:08:32 crc kubenswrapper[4580]: I0112 13:08:32.778779 4580 generic.go:334] "Generic (PLEG): container finished" podID="c8f39bcc-5a25-4746-988b-2251fd1be8c9" containerID="2fd8b2f8f716304f83430fe4b505d29fbb68a1a5387205e72c68b65c260c7fc9" exitCode=1 Jan 12 13:08:32 crc kubenswrapper[4580]: I0112 13:08:32.778799 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-nnz5s" event={"ID":"c8f39bcc-5a25-4746-988b-2251fd1be8c9","Type":"ContainerDied","Data":"2fd8b2f8f716304f83430fe4b505d29fbb68a1a5387205e72c68b65c260c7fc9"} Jan 12 13:08:32 crc kubenswrapper[4580]: I0112 13:08:32.778831 4580 scope.go:117] "RemoveContainer" containerID="56aa8b2b49ab1c35203cc85f8e7cd333d538b5739be0e36db8a3fa8263c079ce" Jan 12 13:08:32 crc kubenswrapper[4580]: I0112 13:08:32.779194 
4580 scope.go:117] "RemoveContainer" containerID="2fd8b2f8f716304f83430fe4b505d29fbb68a1a5387205e72c68b65c260c7fc9" Jan 12 13:08:32 crc kubenswrapper[4580]: E0112 13:08:32.779525 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-nnz5s_openshift-multus(c8f39bcc-5a25-4746-988b-2251fd1be8c9)\"" pod="openshift-multus/multus-nnz5s" podUID="c8f39bcc-5a25-4746-988b-2251fd1be8c9" Jan 12 13:08:33 crc kubenswrapper[4580]: I0112 13:08:33.280614 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:08:33 crc kubenswrapper[4580]: I0112 13:08:33.280680 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 12 13:08:33 crc kubenswrapper[4580]: E0112 13:08:33.281028 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 12 13:08:33 crc kubenswrapper[4580]: E0112 13:08:33.281137 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 12 13:08:33 crc kubenswrapper[4580]: I0112 13:08:33.782000 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nnz5s_c8f39bcc-5a25-4746-988b-2251fd1be8c9/kube-multus/1.log" Jan 12 13:08:34 crc kubenswrapper[4580]: I0112 13:08:34.281497 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 12 13:08:34 crc kubenswrapper[4580]: I0112 13:08:34.281514 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jw27h" Jan 12 13:08:34 crc kubenswrapper[4580]: E0112 13:08:34.281763 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 12 13:08:34 crc kubenswrapper[4580]: E0112 13:08:34.281858 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jw27h" podUID="5066d8fa-2cee-4764-a817-b819d3876638" Jan 12 13:08:34 crc kubenswrapper[4580]: I0112 13:08:34.281903 4580 scope.go:117] "RemoveContainer" containerID="20f47854f29c7f82bcbae567770052204b7fa2c092168c57ef54e14218812b98" Jan 12 13:08:34 crc kubenswrapper[4580]: I0112 13:08:34.786167 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hn77p_fd4e0810-eddb-47f5-a7dc-beed7b545112/ovnkube-controller/3.log" Jan 12 13:08:34 crc kubenswrapper[4580]: I0112 13:08:34.788828 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" event={"ID":"fd4e0810-eddb-47f5-a7dc-beed7b545112","Type":"ContainerStarted","Data":"06674059f95b3e6280ce8ca74d479316a4655ccb75db826a600f5cf78794eb06"} Jan 12 13:08:34 crc kubenswrapper[4580]: I0112 13:08:34.789148 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" Jan 12 13:08:34 crc kubenswrapper[4580]: I0112 13:08:34.809628 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" podStartSLOduration=95.809618725 podStartE2EDuration="1m35.809618725s" podCreationTimestamp="2026-01-12 13:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:08:34.808684848 +0000 UTC m=+113.852903538" watchObservedRunningTime="2026-01-12 13:08:34.809618725 +0000 UTC m=+113.853837415" Jan 12 13:08:34 crc kubenswrapper[4580]: I0112 13:08:34.917244 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-jw27h"] Jan 12 13:08:34 crc kubenswrapper[4580]: I0112 13:08:34.917344 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jw27h" Jan 12 13:08:34 crc kubenswrapper[4580]: E0112 13:08:34.917437 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jw27h" podUID="5066d8fa-2cee-4764-a817-b819d3876638" Jan 12 13:08:35 crc kubenswrapper[4580]: I0112 13:08:35.281318 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:08:35 crc kubenswrapper[4580]: E0112 13:08:35.281615 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 12 13:08:35 crc kubenswrapper[4580]: I0112 13:08:35.281318 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 12 13:08:35 crc kubenswrapper[4580]: E0112 13:08:35.281743 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 12 13:08:36 crc kubenswrapper[4580]: I0112 13:08:36.280770 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jw27h" Jan 12 13:08:36 crc kubenswrapper[4580]: E0112 13:08:36.280852 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jw27h" podUID="5066d8fa-2cee-4764-a817-b819d3876638" Jan 12 13:08:36 crc kubenswrapper[4580]: I0112 13:08:36.280770 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 12 13:08:36 crc kubenswrapper[4580]: E0112 13:08:36.281013 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 12 13:08:37 crc kubenswrapper[4580]: I0112 13:08:37.281331 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:08:37 crc kubenswrapper[4580]: E0112 13:08:37.281469 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 12 13:08:37 crc kubenswrapper[4580]: I0112 13:08:37.281500 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 12 13:08:37 crc kubenswrapper[4580]: E0112 13:08:37.281619 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 12 13:08:38 crc kubenswrapper[4580]: I0112 13:08:38.281527 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 12 13:08:38 crc kubenswrapper[4580]: I0112 13:08:38.281585 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jw27h" Jan 12 13:08:38 crc kubenswrapper[4580]: E0112 13:08:38.281622 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 12 13:08:38 crc kubenswrapper[4580]: E0112 13:08:38.281658 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jw27h" podUID="5066d8fa-2cee-4764-a817-b819d3876638" Jan 12 13:08:39 crc kubenswrapper[4580]: I0112 13:08:39.280917 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:08:39 crc kubenswrapper[4580]: I0112 13:08:39.280950 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 12 13:08:39 crc kubenswrapper[4580]: E0112 13:08:39.281021 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 12 13:08:39 crc kubenswrapper[4580]: E0112 13:08:39.281068 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 12 13:08:40 crc kubenswrapper[4580]: I0112 13:08:40.281570 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jw27h" Jan 12 13:08:40 crc kubenswrapper[4580]: E0112 13:08:40.281662 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jw27h" podUID="5066d8fa-2cee-4764-a817-b819d3876638" Jan 12 13:08:40 crc kubenswrapper[4580]: I0112 13:08:40.281574 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 12 13:08:40 crc kubenswrapper[4580]: E0112 13:08:40.281749 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 12 13:08:41 crc kubenswrapper[4580]: I0112 13:08:41.281076 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 12 13:08:41 crc kubenswrapper[4580]: I0112 13:08:41.281132 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:08:41 crc kubenswrapper[4580]: E0112 13:08:41.282138 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 12 13:08:41 crc kubenswrapper[4580]: E0112 13:08:41.282847 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 12 13:08:41 crc kubenswrapper[4580]: E0112 13:08:41.312294 4580 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 12 13:08:41 crc kubenswrapper[4580]: E0112 13:08:41.357048 4580 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 12 13:08:42 crc kubenswrapper[4580]: I0112 13:08:42.281401 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jw27h" Jan 12 13:08:42 crc kubenswrapper[4580]: I0112 13:08:42.281419 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 12 13:08:42 crc kubenswrapper[4580]: E0112 13:08:42.281513 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jw27h" podUID="5066d8fa-2cee-4764-a817-b819d3876638" Jan 12 13:08:42 crc kubenswrapper[4580]: E0112 13:08:42.281604 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 12 13:08:43 crc kubenswrapper[4580]: I0112 13:08:43.281192 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 12 13:08:43 crc kubenswrapper[4580]: E0112 13:08:43.281291 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 12 13:08:43 crc kubenswrapper[4580]: I0112 13:08:43.281397 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:08:43 crc kubenswrapper[4580]: E0112 13:08:43.281524 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 12 13:08:44 crc kubenswrapper[4580]: I0112 13:08:44.280919 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 12 13:08:44 crc kubenswrapper[4580]: I0112 13:08:44.280947 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jw27h" Jan 12 13:08:44 crc kubenswrapper[4580]: E0112 13:08:44.281028 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 12 13:08:44 crc kubenswrapper[4580]: E0112 13:08:44.281146 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jw27h" podUID="5066d8fa-2cee-4764-a817-b819d3876638" Jan 12 13:08:45 crc kubenswrapper[4580]: I0112 13:08:45.281480 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 12 13:08:45 crc kubenswrapper[4580]: I0112 13:08:45.281774 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:08:45 crc kubenswrapper[4580]: E0112 13:08:45.281764 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 12 13:08:45 crc kubenswrapper[4580]: I0112 13:08:45.281802 4580 scope.go:117] "RemoveContainer" containerID="2fd8b2f8f716304f83430fe4b505d29fbb68a1a5387205e72c68b65c260c7fc9" Jan 12 13:08:45 crc kubenswrapper[4580]: E0112 13:08:45.281858 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 12 13:08:45 crc kubenswrapper[4580]: I0112 13:08:45.817142 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nnz5s_c8f39bcc-5a25-4746-988b-2251fd1be8c9/kube-multus/1.log" Jan 12 13:08:45 crc kubenswrapper[4580]: I0112 13:08:45.817393 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-nnz5s" event={"ID":"c8f39bcc-5a25-4746-988b-2251fd1be8c9","Type":"ContainerStarted","Data":"7e42cabcc8a0320fd9f67cb6f070b5827db98797bcde87f1d01d047fc0ed0086"} Jan 12 13:08:46 crc kubenswrapper[4580]: I0112 13:08:46.281377 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 12 13:08:46 crc kubenswrapper[4580]: E0112 13:08:46.281687 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 12 13:08:46 crc kubenswrapper[4580]: I0112 13:08:46.281408 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jw27h" Jan 12 13:08:46 crc kubenswrapper[4580]: E0112 13:08:46.282287 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jw27h" podUID="5066d8fa-2cee-4764-a817-b819d3876638" Jan 12 13:08:46 crc kubenswrapper[4580]: E0112 13:08:46.359239 4580 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 12 13:08:47 crc kubenswrapper[4580]: I0112 13:08:47.281604 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:08:47 crc kubenswrapper[4580]: I0112 13:08:47.281933 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 12 13:08:47 crc kubenswrapper[4580]: E0112 13:08:47.282081 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 12 13:08:47 crc kubenswrapper[4580]: E0112 13:08:47.282493 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 12 13:08:48 crc kubenswrapper[4580]: I0112 13:08:48.281339 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jw27h" Jan 12 13:08:48 crc kubenswrapper[4580]: E0112 13:08:48.281442 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jw27h" podUID="5066d8fa-2cee-4764-a817-b819d3876638" Jan 12 13:08:48 crc kubenswrapper[4580]: I0112 13:08:48.281634 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 12 13:08:48 crc kubenswrapper[4580]: E0112 13:08:48.281714 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 12 13:08:49 crc kubenswrapper[4580]: I0112 13:08:49.280791 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 12 13:08:49 crc kubenswrapper[4580]: I0112 13:08:49.280791 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:08:49 crc kubenswrapper[4580]: E0112 13:08:49.280920 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 12 13:08:49 crc kubenswrapper[4580]: E0112 13:08:49.280969 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 12 13:08:50 crc kubenswrapper[4580]: I0112 13:08:50.281212 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 12 13:08:50 crc kubenswrapper[4580]: I0112 13:08:50.281267 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jw27h" Jan 12 13:08:50 crc kubenswrapper[4580]: E0112 13:08:50.281331 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 12 13:08:50 crc kubenswrapper[4580]: E0112 13:08:50.281412 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jw27h" podUID="5066d8fa-2cee-4764-a817-b819d3876638" Jan 12 13:08:51 crc kubenswrapper[4580]: I0112 13:08:51.281317 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 12 13:08:51 crc kubenswrapper[4580]: I0112 13:08:51.281469 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:08:51 crc kubenswrapper[4580]: E0112 13:08:51.282396 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 12 13:08:51 crc kubenswrapper[4580]: E0112 13:08:51.282600 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 12 13:08:52 crc kubenswrapper[4580]: I0112 13:08:52.281350 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 12 13:08:52 crc kubenswrapper[4580]: I0112 13:08:52.281407 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jw27h" Jan 12 13:08:52 crc kubenswrapper[4580]: I0112 13:08:52.282954 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 12 13:08:52 crc kubenswrapper[4580]: I0112 13:08:52.283425 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 12 13:08:52 crc kubenswrapper[4580]: I0112 13:08:52.284406 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 12 13:08:52 crc kubenswrapper[4580]: I0112 13:08:52.284727 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 12 13:08:53 crc kubenswrapper[4580]: I0112 13:08:53.281381 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 12 13:08:53 crc kubenswrapper[4580]: I0112 13:08:53.281381 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:08:53 crc kubenswrapper[4580]: I0112 13:08:53.283235 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 12 13:08:53 crc kubenswrapper[4580]: I0112 13:08:53.283371 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.391185 4580 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.415491 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-mw8xc"] Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.416380 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-mw8xc" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.416375 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-gz9sn"] Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.418957 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gz9sn" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.428156 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-89mg9"] Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.428689 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-pk8kj"] Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.428758 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.429018 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pk8kj" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.429114 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-89mg9" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.429891 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.429911 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.430036 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.430078 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.430141 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 12 13:08:54 crc 
kubenswrapper[4580]: I0112 13:08:54.430202 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.430257 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.430304 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.430343 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.430228 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.430526 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.430470 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.430473 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.430757 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.430904 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.432242 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.432272 
4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.432489 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.432497 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.436213 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-vpzdt"] Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.436366 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.436530 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.436661 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vpzdt" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.438413 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.438492 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.438645 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.438807 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.440014 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 12 13:08:54 crc kubenswrapper[4580]: W0112 13:08:54.440294 4580 reflector.go:561] object-"openshift-config-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-config-operator": no relationship found between node 'crc' and this object Jan 12 13:08:54 crc kubenswrapper[4580]: E0112 13:08:54.440334 4580 reflector.go:158] "Unhandled Error" err="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.440394 
4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.440428 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.441644 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-twpq4"] Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.441999 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-twpq4" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.443254 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.443683 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-nzcxb"] Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.443938 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-nzcxb" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.445361 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.447809 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hxkcl"] Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.448244 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.448719 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8sbrm"] Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.449047 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.450563 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-xntjp"] Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.451011 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-262n7"] Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.451274 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xntjp" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.451280 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-262n7" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.452033 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hlckg"] Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.452488 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hlckg" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.454342 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-z6r47"] Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.454640 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-lfcct"] Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.455017 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lfcct" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.455267 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z6r47" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.456435 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mn56v"] Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.457197 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mn56v" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.457300 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jbnkd"] Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.457592 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jbnkd" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.458525 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lr76c"] Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.458805 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lr76c" Jan 12 13:08:54 crc kubenswrapper[4580]: W0112 13:08:54.459528 4580 reflector.go:561] object-"openshift-image-registry"/"installation-pull-secrets": failed to list *v1.Secret: secrets "installation-pull-secrets" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-image-registry": no relationship found between node 'crc' and this object Jan 12 13:08:54 crc kubenswrapper[4580]: E0112 13:08:54.459643 4580 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"installation-pull-secrets\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"installation-pull-secrets\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-image-registry\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 12 13:08:54 crc kubenswrapper[4580]: W0112 13:08:54.459732 4580 reflector.go:561] object-"openshift-image-registry"/"trusted-ca": failed to list *v1.ConfigMap: configmaps "trusted-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-image-registry": no relationship found between node 'crc' and this object Jan 12 13:08:54 crc kubenswrapper[4580]: W0112 13:08:54.459860 4580 reflector.go:561] object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr": failed to list 
*v1.Secret: secrets "console-operator-dockercfg-4xjcr" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-console-operator": no relationship found between node 'crc' and this object Jan 12 13:08:54 crc kubenswrapper[4580]: E0112 13:08:54.459895 4580 reflector.go:158] "Unhandled Error" err="object-\"openshift-console-operator\"/\"console-operator-dockercfg-4xjcr\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"console-operator-dockercfg-4xjcr\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-console-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 12 13:08:54 crc kubenswrapper[4580]: E0112 13:08:54.459863 4580 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"trusted-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"trusted-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-image-registry\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.459906 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-9bm7b"] Jan 12 13:08:54 crc kubenswrapper[4580]: W0112 13:08:54.459972 4580 reflector.go:561] object-"openshift-authentication"/"v4-0-config-user-template-login": failed to list *v1.Secret: secrets "v4-0-config-user-template-login" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object Jan 12 13:08:54 crc kubenswrapper[4580]: E0112 13:08:54.460006 4580 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\": 
Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"v4-0-config-user-template-login\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 12 13:08:54 crc kubenswrapper[4580]: W0112 13:08:54.460035 4580 reflector.go:561] object-"openshift-etcd-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-etcd-operator": no relationship found between node 'crc' and this object Jan 12 13:08:54 crc kubenswrapper[4580]: E0112 13:08:54.460069 4580 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-etcd-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 12 13:08:54 crc kubenswrapper[4580]: W0112 13:08:54.460085 4580 reflector.go:561] object-"openshift-authentication"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object Jan 12 13:08:54 crc kubenswrapper[4580]: E0112 13:08:54.460118 4580 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace 
\"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 12 13:08:54 crc kubenswrapper[4580]: W0112 13:08:54.460132 4580 reflector.go:561] object-"openshift-etcd-operator"/"etcd-operator-serving-cert": failed to list *v1.Secret: secrets "etcd-operator-serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-etcd-operator": no relationship found between node 'crc' and this object Jan 12 13:08:54 crc kubenswrapper[4580]: E0112 13:08:54.460148 4580 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"etcd-operator-serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-etcd-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 12 13:08:54 crc kubenswrapper[4580]: W0112 13:08:54.460199 4580 reflector.go:561] object-"openshift-etcd-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-etcd-operator": no relationship found between node 'crc' and this object Jan 12 13:08:54 crc kubenswrapper[4580]: E0112 13:08:54.460215 4580 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-etcd-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 12 13:08:54 crc kubenswrapper[4580]: W0112 13:08:54.460259 4580 reflector.go:561] 
object-"openshift-etcd-operator"/"etcd-operator-config": failed to list *v1.ConfigMap: configmaps "etcd-operator-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-etcd-operator": no relationship found between node 'crc' and this object Jan 12 13:08:54 crc kubenswrapper[4580]: E0112 13:08:54.460273 4580 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"etcd-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"etcd-operator-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-etcd-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 12 13:08:54 crc kubenswrapper[4580]: W0112 13:08:54.460331 4580 reflector.go:561] object-"openshift-console-operator"/"trusted-ca": failed to list *v1.ConfigMap: configmaps "trusted-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-console-operator": no relationship found between node 'crc' and this object Jan 12 13:08:54 crc kubenswrapper[4580]: W0112 13:08:54.460342 4580 reflector.go:561] object-"openshift-console-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-console-operator": no relationship found between node 'crc' and this object Jan 12 13:08:54 crc kubenswrapper[4580]: E0112 13:08:54.460352 4580 reflector.go:158] "Unhandled Error" err="object-\"openshift-console-operator\"/\"trusted-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"trusted-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-console-operator\": no relationship found between node 
'crc' and this object" logger="UnhandledError" Jan 12 13:08:54 crc kubenswrapper[4580]: E0112 13:08:54.460362 4580 reflector.go:158] "Unhandled Error" err="object-\"openshift-console-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-console-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 12 13:08:54 crc kubenswrapper[4580]: W0112 13:08:54.460416 4580 reflector.go:561] object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn": failed to list *v1.Secret: secrets "etcd-operator-dockercfg-r9srn" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-etcd-operator": no relationship found between node 'crc' and this object Jan 12 13:08:54 crc kubenswrapper[4580]: E0112 13:08:54.460431 4580 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-r9srn\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"etcd-operator-dockercfg-r9srn\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-etcd-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 12 13:08:54 crc kubenswrapper[4580]: W0112 13:08:54.460496 4580 reflector.go:561] object-"openshift-marketplace"/"marketplace-operator-metrics": failed to list *v1.Secret: secrets "marketplace-operator-metrics" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-marketplace": no relationship found between node 'crc' and this object Jan 12 13:08:54 crc kubenswrapper[4580]: W0112 13:08:54.459790 4580 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-serving-cert": failed to list 
*v1.Secret: secrets "v4-0-config-system-serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object Jan 12 13:08:54 crc kubenswrapper[4580]: W0112 13:08:54.459813 4580 reflector.go:561] object-"openshift-etcd-operator"/"etcd-ca-bundle": failed to list *v1.ConfigMap: configmaps "etcd-ca-bundle" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-etcd-operator": no relationship found between node 'crc' and this object Jan 12 13:08:54 crc kubenswrapper[4580]: W0112 13:08:54.464833 4580 reflector.go:561] object-"openshift-authentication"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.464925 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.464952 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 12 13:08:54 crc kubenswrapper[4580]: E0112 13:08:54.466706 4580 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"marketplace-operator-metrics\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-marketplace\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 12 13:08:54 crc kubenswrapper[4580]: E0112 13:08:54.466769 4580 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"v4-0-config-system-serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 12 13:08:54 crc kubenswrapper[4580]: E0112 13:08:54.466867 4580 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"etcd-ca-bundle\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-etcd-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 12 13:08:54 crc kubenswrapper[4580]: E0112 13:08:54.466888 4580 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 12 13:08:54 crc kubenswrapper[4580]: W0112 13:08:54.467565 4580 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-cliconfig": failed to list *v1.ConfigMap: configmaps "v4-0-config-system-cliconfig" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object Jan 12 13:08:54 crc kubenswrapper[4580]: E0112 13:08:54.467592 4580 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\": Failed to watch *v1.ConfigMap: failed 
to list *v1.ConfigMap: configmaps \"v4-0-config-system-cliconfig\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 12 13:08:54 crc kubenswrapper[4580]: W0112 13:08:54.467711 4580 reflector.go:561] object-"openshift-console-operator"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-console-operator": no relationship found between node 'crc' and this object Jan 12 13:08:54 crc kubenswrapper[4580]: E0112 13:08:54.467768 4580 reflector.go:158] "Unhandled Error" err="object-\"openshift-console-operator\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-console-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 12 13:08:54 crc kubenswrapper[4580]: W0112 13:08:54.472252 4580 reflector.go:561] object-"openshift-console-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-console-operator": no relationship found between node 'crc' and this object Jan 12 13:08:54 crc kubenswrapper[4580]: E0112 13:08:54.472282 4580 reflector.go:158] "Unhandled Error" err="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-console-operator\": no relationship found between node 
'crc' and this object" logger="UnhandledError" Jan 12 13:08:54 crc kubenswrapper[4580]: W0112 13:08:54.472357 4580 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template": failed to list *v1.Secret: secrets "v4-0-config-system-ocp-branding-template" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object Jan 12 13:08:54 crc kubenswrapper[4580]: E0112 13:08:54.472383 4580 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"v4-0-config-system-ocp-branding-template\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 12 13:08:54 crc kubenswrapper[4580]: W0112 13:08:54.472700 4580 reflector.go:561] object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data": failed to list *v1.Secret: secrets "v4-0-config-user-idp-0-file-data" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object Jan 12 13:08:54 crc kubenswrapper[4580]: E0112 13:08:54.472718 4580 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"v4-0-config-user-idp-0-file-data\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.472793 4580 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.472821 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-snhpg"] Jan 12 13:08:54 crc kubenswrapper[4580]: W0112 13:08:54.475254 4580 reflector.go:561] object-"openshift-etcd-operator"/"etcd-service-ca-bundle": failed to list *v1.ConfigMap: configmaps "etcd-service-ca-bundle" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-etcd-operator": no relationship found between node 'crc' and this object Jan 12 13:08:54 crc kubenswrapper[4580]: E0112 13:08:54.475287 4580 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"etcd-service-ca-bundle\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-etcd-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 12 13:08:54 crc kubenswrapper[4580]: W0112 13:08:54.486341 4580 reflector.go:561] object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc": failed to list *v1.Secret: secrets "oauth-openshift-dockercfg-znhcc" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object Jan 12 13:08:54 crc kubenswrapper[4580]: E0112 13:08:54.486371 4580 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-znhcc\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"oauth-openshift-dockercfg-znhcc\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication\": no 
relationship found between node 'crc' and this object" logger="UnhandledError" Jan 12 13:08:54 crc kubenswrapper[4580]: W0112 13:08:54.486425 4580 reflector.go:561] object-"openshift-marketplace"/"marketplace-trusted-ca": failed to list *v1.ConfigMap: configmaps "marketplace-trusted-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-marketplace": no relationship found between node 'crc' and this object Jan 12 13:08:54 crc kubenswrapper[4580]: E0112 13:08:54.486437 4580 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"marketplace-trusted-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-marketplace\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 12 13:08:54 crc kubenswrapper[4580]: W0112 13:08:54.486439 4580 reflector.go:561] object-"openshift-machine-config-operator"/"machine-config-operator-images": failed to list *v1.ConfigMap: configmaps "machine-config-operator-images" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Jan 12 13:08:54 crc kubenswrapper[4580]: W0112 13:08:54.486471 4580 reflector.go:561] object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config": failed to list *v1.ConfigMap: configmaps "kube-apiserver-operator-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-kube-apiserver-operator": no relationship found between node 'crc' and this object Jan 12 13:08:54 crc kubenswrapper[4580]: E0112 13:08:54.486489 4580 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-apiserver-operator-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-kube-apiserver-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 12 13:08:54 crc kubenswrapper[4580]: E0112 13:08:54.486468 4580 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"machine-config-operator-images\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 12 13:08:54 crc kubenswrapper[4580]: W0112 13:08:54.486542 4580 reflector.go:561] object-"openshift-console-operator"/"console-operator-config": failed to list *v1.ConfigMap: configmaps "console-operator-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-console-operator": no relationship found between node 'crc' and this object Jan 12 13:08:54 crc kubenswrapper[4580]: E0112 13:08:54.486554 4580 reflector.go:158] "Unhandled Error" err="object-\"openshift-console-operator\"/\"console-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"console-operator-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-console-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.486542 4580 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 12 13:08:54 crc kubenswrapper[4580]: W0112 13:08:54.486573 4580 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-router-certs": failed to list *v1.Secret: secrets "v4-0-config-system-router-certs" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object Jan 12 13:08:54 crc kubenswrapper[4580]: E0112 13:08:54.486586 4580 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"v4-0-config-system-router-certs\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.486688 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 12 13:08:54 crc kubenswrapper[4580]: W0112 13:08:54.486719 4580 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-session": failed to list *v1.Secret: secrets "v4-0-config-system-session" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object Jan 12 13:08:54 crc kubenswrapper[4580]: E0112 13:08:54.486754 4580 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-session\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"v4-0-config-system-session\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication\": no 
relationship found between node 'crc' and this object" logger="UnhandledError" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.486810 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 12 13:08:54 crc kubenswrapper[4580]: W0112 13:08:54.486853 4580 reflector.go:561] object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert": failed to list *v1.Secret: secrets "kube-apiserver-operator-serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-kube-apiserver-operator": no relationship found between node 'crc' and this object Jan 12 13:08:54 crc kubenswrapper[4580]: E0112 13:08:54.486868 4580 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"kube-apiserver-operator-serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-kube-apiserver-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.486914 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-9bm7b" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.486945 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.487058 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 12 13:08:54 crc kubenswrapper[4580]: W0112 13:08:54.487156 4580 reflector.go:561] object-"openshift-etcd-operator"/"etcd-client": failed to list *v1.Secret: secrets "etcd-client" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-etcd-operator": no relationship found between node 'crc' and this object Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.487167 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 12 13:08:54 crc kubenswrapper[4580]: E0112 13:08:54.487178 4580 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"etcd-client\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"etcd-client\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-etcd-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 12 13:08:54 crc kubenswrapper[4580]: W0112 13:08:54.487213 4580 reflector.go:561] object-"openshift-marketplace"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-marketplace": no relationship found between node 'crc' and this object Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.487216 4580 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 12 13:08:54 crc kubenswrapper[4580]: W0112 13:08:54.486815 4580 reflector.go:561] object-"openshift-authentication"/"audit": failed to list *v1.ConfigMap: configmaps "audit" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.487250 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qbtk4"] Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.487256 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 12 13:08:54 crc kubenswrapper[4580]: E0112 13:08:54.487289 4580 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"audit\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"audit\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 12 13:08:54 crc kubenswrapper[4580]: E0112 13:08:54.487225 4580 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-marketplace\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.487316 4580 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 12 13:08:54 crc kubenswrapper[4580]: W0112 13:08:54.487360 4580 reflector.go:561] object-"openshift-image-registry"/"image-registry-tls": failed to list *v1.Secret: secrets "image-registry-tls" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-image-registry": no relationship found between node 'crc' and this object Jan 12 13:08:54 crc kubenswrapper[4580]: E0112 13:08:54.487377 4580 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"image-registry-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"image-registry-tls\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-image-registry\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.487382 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 12 13:08:54 crc kubenswrapper[4580]: W0112 13:08:54.487506 4580 reflector.go:561] object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg": failed to list *v1.Secret: secrets "marketplace-operator-dockercfg-5nsgg" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-marketplace": no relationship found between node 'crc' and this object Jan 12 13:08:54 crc kubenswrapper[4580]: E0112 13:08:54.487521 4580 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-5nsgg\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"marketplace-operator-dockercfg-5nsgg\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-marketplace\": no relationship found 
between node 'crc' and this object" logger="UnhandledError" Jan 12 13:08:54 crc kubenswrapper[4580]: W0112 13:08:54.487572 4580 reflector.go:561] object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87": failed to list *v1.Secret: secrets "machine-config-operator-dockercfg-98p87" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.487583 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 12 13:08:54 crc kubenswrapper[4580]: E0112 13:08:54.487585 4580 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-98p87\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-config-operator-dockercfg-98p87\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.487624 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qbtk4" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.487793 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-snhpg" Jan 12 13:08:54 crc kubenswrapper[4580]: W0112 13:08:54.487624 4580 reflector.go:561] object-"openshift-image-registry"/"registry-dockercfg-kzzsd": failed to list *v1.Secret: secrets "registry-dockercfg-kzzsd" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-image-registry": no relationship found between node 'crc' and this object Jan 12 13:08:54 crc kubenswrapper[4580]: E0112 13:08:54.487900 4580 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"registry-dockercfg-kzzsd\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"registry-dockercfg-kzzsd\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-image-registry\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 12 13:08:54 crc kubenswrapper[4580]: W0112 13:08:54.487656 4580 reflector.go:561] object-"openshift-machine-config-operator"/"mco-proxy-tls": failed to list *v1.Secret: secrets "mco-proxy-tls" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Jan 12 13:08:54 crc kubenswrapper[4580]: E0112 13:08:54.487924 4580 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"mco-proxy-tls\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 12 13:08:54 crc kubenswrapper[4580]: W0112 13:08:54.487681 4580 reflector.go:561] 
object-"openshift-authentication"/"v4-0-config-user-template-provider-selection": failed to list *v1.Secret: secrets "v4-0-config-user-template-provider-selection" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object Jan 12 13:08:54 crc kubenswrapper[4580]: E0112 13:08:54.487943 4580 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"v4-0-config-user-template-provider-selection\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 12 13:08:54 crc kubenswrapper[4580]: W0112 13:08:54.487686 4580 reflector.go:561] object-"openshift-authentication"/"v4-0-config-user-template-error": failed to list *v1.Secret: secrets "v4-0-config-user-template-error" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object Jan 12 13:08:54 crc kubenswrapper[4580]: E0112 13:08:54.487962 4580 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"v4-0-config-user-template-error\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 12 13:08:54 crc kubenswrapper[4580]: W0112 13:08:54.487717 4580 reflector.go:561] object-"openshift-marketplace"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" 
is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-marketplace": no relationship found between node 'crc' and this object Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.487973 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 12 13:08:54 crc kubenswrapper[4580]: E0112 13:08:54.487979 4580 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-marketplace\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.487722 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.488012 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.487728 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 12 13:08:54 crc kubenswrapper[4580]: W0112 13:08:54.487750 4580 reflector.go:561] object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-kube-apiserver-operator": no relationship found between node 'crc' and this object Jan 12 13:08:54 crc kubenswrapper[4580]: E0112 13:08:54.488090 4580 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-kube-apiserver-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 12 13:08:54 crc kubenswrapper[4580]: W0112 13:08:54.487768 4580 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-service-ca": failed to list *v1.ConfigMap: configmaps "v4-0-config-system-service-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object Jan 12 13:08:54 crc kubenswrapper[4580]: E0112 13:08:54.488126 4580 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"v4-0-config-system-service-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.487768 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 12 13:08:54 crc kubenswrapper[4580]: W0112 13:08:54.487808 4580 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle": failed to list *v1.ConfigMap: configmaps "v4-0-config-system-trusted-ca-bundle" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object Jan 12 13:08:54 crc kubenswrapper[4580]: 
E0112 13:08:54.488182 4580 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"v4-0-config-system-trusted-ca-bundle\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.487815 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 12 13:08:54 crc kubenswrapper[4580]: W0112 13:08:54.487829 4580 reflector.go:561] object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr": failed to list *v1.Secret: secrets "kube-apiserver-operator-dockercfg-x57mr" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-kube-apiserver-operator": no relationship found between node 'crc' and this object
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.488278 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 12 13:08:54 crc kubenswrapper[4580]: E0112 13:08:54.488276 4580 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-x57mr\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"kube-apiserver-operator-dockercfg-x57mr\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-kube-apiserver-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.488645 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cbltx"]
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.489271 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-cbltx"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.489920 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-5tdwv"]
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.490311 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-5tdwv"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.491293 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.491906 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-2hzdj"]
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.492016 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.492510 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-2hzdj"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.493204 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.493329 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jzjkt"]
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.493811 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jzjkt"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.494309 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-72pwr"]
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.494861 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-72pwr"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.496147 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-pv5tk"]
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.496737 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pv5tk"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.496842 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5b57w"]
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.497954 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-pq2bq"]
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.498282 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-pq2bq"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.498466 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5b57w"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.498752 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-klg87"]
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.499058 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-klg87"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.499558 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-phs5z"]
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.500389 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-rs6cr"]
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.500967 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-phs5z"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.501190 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-rs6cr"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.501376 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s8vg5"]
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.501935 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s8vg5"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.502248 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qjnxc"]
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.502626 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qjnxc"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.503210 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0e19129d-499f-4d25-ad32-fd3dddb533f2-proxy-tls\") pod \"machine-config-controller-84d6567774-lfcct\" (UID: \"0e19129d-499f-4d25-ad32-fd3dddb533f2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lfcct"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.503239 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/12d94033-10bf-43ea-a4de-297df750ad35-srv-cert\") pod \"catalog-operator-68c6474976-mn56v\" (UID: \"12d94033-10bf-43ea-a4de-297df750ad35\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mn56v"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.503263 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb-etcd-client\") pod \"etcd-operator-b45778765-nzcxb\" (UID: \"fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nzcxb"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.503282 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhb62\" (UniqueName: \"kubernetes.io/projected/5f3179c7-0610-4e19-91cd-9a84d32ac850-kube-api-access-xhb62\") pod \"machine-config-operator-74547568cd-xntjp\" (UID: \"5f3179c7-0610-4e19-91cd-9a84d32ac850\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xntjp"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.503306 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3cd49599-ac6f-4d9f-9d86-2f6ff90ddbf9-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-jbnkd\" (UID: \"3cd49599-ac6f-4d9f-9d86-2f6ff90ddbf9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jbnkd"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.503327 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/277886cf-d2d4-42e5-b2dc-253fd32648f8-serving-cert\") pod \"apiserver-7bbb656c7d-pk8kj\" (UID: \"277886cf-d2d4-42e5-b2dc-253fd32648f8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pk8kj"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.503352 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mplg8\" (UniqueName: \"kubernetes.io/projected/92864954-4c11-4fae-a089-c8fc35ae755e-kube-api-access-mplg8\") pod \"machine-approver-56656f9798-gz9sn\" (UID: \"92864954-4c11-4fae-a089-c8fc35ae755e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gz9sn"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.503372 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-8sbrm\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.503394 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-8sbrm\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.503413 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msqlv\" (UniqueName: \"kubernetes.io/projected/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-kube-api-access-msqlv\") pod \"oauth-openshift-558db77b4-8sbrm\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.503429 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/277886cf-d2d4-42e5-b2dc-253fd32648f8-audit-dir\") pod \"apiserver-7bbb656c7d-pk8kj\" (UID: \"277886cf-d2d4-42e5-b2dc-253fd32648f8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pk8kj"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.503449 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-8sbrm\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.503466 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-8sbrm\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.503483 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3a22133-fac4-42ba-9967-974e82a855aa-config\") pod \"route-controller-manager-6576b87f9c-z6r47\" (UID: \"e3a22133-fac4-42ba-9967-974e82a855aa\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z6r47"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.503502 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h42q\" (UniqueName: \"kubernetes.io/projected/3cd49599-ac6f-4d9f-9d86-2f6ff90ddbf9-kube-api-access-6h42q\") pod \"control-plane-machine-set-operator-78cbb6b69f-jbnkd\" (UID: \"3cd49599-ac6f-4d9f-9d86-2f6ff90ddbf9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jbnkd"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.503520 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/257c071c-ccf5-4229-b3b0-65e5b59f5edb-serving-cert\") pod \"openshift-config-operator-7777fb866f-vpzdt\" (UID: \"257c071c-ccf5-4229-b3b0-65e5b59f5edb\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vpzdt"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.503567 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15e6f097-ed23-4797-9506-8c95af1dd7f9-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-262n7\" (UID: \"15e6f097-ed23-4797-9506-8c95af1dd7f9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-262n7"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.503604 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8pvn\" (UniqueName: \"kubernetes.io/projected/0e19129d-499f-4d25-ad32-fd3dddb533f2-kube-api-access-r8pvn\") pod \"machine-config-controller-84d6567774-lfcct\" (UID: \"0e19129d-499f-4d25-ad32-fd3dddb533f2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lfcct"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.503627 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b75fc88-ca92-4fb9-826b-61322c929d1b-serving-cert\") pod \"console-operator-58897d9998-twpq4\" (UID: \"0b75fc88-ca92-4fb9-826b-61322c929d1b\") " pod="openshift-console-operator/console-operator-58897d9998-twpq4"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.503669 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-8sbrm\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.503688 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdbff407-68ae-456c-b67e-40d0e47fba7b-config\") pod \"machine-api-operator-5694c8668f-89mg9\" (UID: \"bdbff407-68ae-456c-b67e-40d0e47fba7b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-89mg9"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.503705 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5f3179c7-0610-4e19-91cd-9a84d32ac850-auth-proxy-config\") pod \"machine-config-operator-74547568cd-xntjp\" (UID: \"5f3179c7-0610-4e19-91cd-9a84d32ac850\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xntjp"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.503718 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/993fd772-2adc-4e57-8ccd-7bcc86928a21-audit\") pod \"apiserver-76f77b778f-mw8xc\" (UID: \"993fd772-2adc-4e57-8ccd-7bcc86928a21\") " pod="openshift-apiserver/apiserver-76f77b778f-mw8xc"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.503734 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/993fd772-2adc-4e57-8ccd-7bcc86928a21-trusted-ca-bundle\") pod \"apiserver-76f77b778f-mw8xc\" (UID: \"993fd772-2adc-4e57-8ccd-7bcc86928a21\") " pod="openshift-apiserver/apiserver-76f77b778f-mw8xc"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.503753 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mvq8\" (UniqueName: \"kubernetes.io/projected/257c071c-ccf5-4229-b3b0-65e5b59f5edb-kube-api-access-7mvq8\") pod \"openshift-config-operator-7777fb866f-vpzdt\" (UID: \"257c071c-ccf5-4229-b3b0-65e5b59f5edb\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vpzdt"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.503770 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3a22133-fac4-42ba-9967-974e82a855aa-serving-cert\") pod \"route-controller-manager-6576b87f9c-z6r47\" (UID: \"e3a22133-fac4-42ba-9967-974e82a855aa\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z6r47"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.503786 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/bdbff407-68ae-456c-b67e-40d0e47fba7b-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-89mg9\" (UID: \"bdbff407-68ae-456c-b67e-40d0e47fba7b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-89mg9"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.503802 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/277886cf-d2d4-42e5-b2dc-253fd32648f8-etcd-client\") pod \"apiserver-7bbb656c7d-pk8kj\" (UID: \"277886cf-d2d4-42e5-b2dc-253fd32648f8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pk8kj"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.503855 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0b75fc88-ca92-4fb9-826b-61322c929d1b-trusted-ca\") pod \"console-operator-58897d9998-twpq4\" (UID: \"0b75fc88-ca92-4fb9-826b-61322c929d1b\") " pod="openshift-console-operator/console-operator-58897d9998-twpq4"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.503892 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/993fd772-2adc-4e57-8ccd-7bcc86928a21-etcd-serving-ca\") pod \"apiserver-76f77b778f-mw8xc\" (UID: \"993fd772-2adc-4e57-8ccd-7bcc86928a21\") " pod="openshift-apiserver/apiserver-76f77b778f-mw8xc"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.503908 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/993fd772-2adc-4e57-8ccd-7bcc86928a21-serving-cert\") pod \"apiserver-76f77b778f-mw8xc\" (UID: \"993fd772-2adc-4e57-8ccd-7bcc86928a21\") " pod="openshift-apiserver/apiserver-76f77b778f-mw8xc"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.504001 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/993fd772-2adc-4e57-8ccd-7bcc86928a21-image-import-ca\") pod \"apiserver-76f77b778f-mw8xc\" (UID: \"993fd772-2adc-4e57-8ccd-7bcc86928a21\") " pod="openshift-apiserver/apiserver-76f77b778f-mw8xc"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.504041 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0e19129d-499f-4d25-ad32-fd3dddb533f2-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-lfcct\" (UID: \"0e19129d-499f-4d25-ad32-fd3dddb533f2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lfcct"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.504173 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb-etcd-service-ca\") pod \"etcd-operator-b45778765-nzcxb\" (UID: \"fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nzcxb"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.504246 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e3a22133-fac4-42ba-9967-974e82a855aa-client-ca\") pod \"route-controller-manager-6576b87f9c-z6r47\" (UID: \"e3a22133-fac4-42ba-9967-974e82a855aa\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z6r47"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.504282 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb-config\") pod \"etcd-operator-b45778765-nzcxb\" (UID: \"fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nzcxb"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.504344 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ws24m\" (UniqueName: \"kubernetes.io/projected/12d94033-10bf-43ea-a4de-297df750ad35-kube-api-access-ws24m\") pod \"catalog-operator-68c6474976-mn56v\" (UID: \"12d94033-10bf-43ea-a4de-297df750ad35\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mn56v"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.504415 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92864954-4c11-4fae-a089-c8fc35ae755e-config\") pod \"machine-approver-56656f9798-gz9sn\" (UID: \"92864954-4c11-4fae-a089-c8fc35ae755e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gz9sn"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.504479 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2zrk\" (UniqueName: \"kubernetes.io/projected/993fd772-2adc-4e57-8ccd-7bcc86928a21-kube-api-access-g2zrk\") pod \"apiserver-76f77b778f-mw8xc\" (UID: \"993fd772-2adc-4e57-8ccd-7bcc86928a21\") " pod="openshift-apiserver/apiserver-76f77b778f-mw8xc"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.504499 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/277886cf-d2d4-42e5-b2dc-253fd32648f8-audit-policies\") pod \"apiserver-7bbb656c7d-pk8kj\" (UID: \"277886cf-d2d4-42e5-b2dc-253fd32648f8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pk8kj"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.504544 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/277886cf-d2d4-42e5-b2dc-253fd32648f8-encryption-config\") pod \"apiserver-7bbb656c7d-pk8kj\" (UID: \"277886cf-d2d4-42e5-b2dc-253fd32648f8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pk8kj"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.504565 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb-etcd-ca\") pod \"etcd-operator-b45778765-nzcxb\" (UID: \"fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nzcxb"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.504592 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-8sbrm\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.504612 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b75fc88-ca92-4fb9-826b-61322c929d1b-config\") pod \"console-operator-58897d9998-twpq4\" (UID: \"0b75fc88-ca92-4fb9-826b-61322c929d1b\") " pod="openshift-console-operator/console-operator-58897d9998-twpq4"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.504631 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/12d94033-10bf-43ea-a4de-297df750ad35-profile-collector-cert\") pod \"catalog-operator-68c6474976-mn56v\" (UID: \"12d94033-10bf-43ea-a4de-297df750ad35\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mn56v"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.504649 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bdbff407-68ae-456c-b67e-40d0e47fba7b-images\") pod \"machine-api-operator-5694c8668f-89mg9\" (UID: \"bdbff407-68ae-456c-b67e-40d0e47fba7b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-89mg9"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.504668 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/92864954-4c11-4fae-a089-c8fc35ae755e-auth-proxy-config\") pod \"machine-approver-56656f9798-gz9sn\" (UID: \"92864954-4c11-4fae-a089-c8fc35ae755e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gz9sn"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.506071 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-2zrh8"]
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.506620 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-ch5j5"]
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.507094 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-2zrh8"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.504683 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/993fd772-2adc-4e57-8ccd-7bcc86928a21-config\") pod \"apiserver-76f77b778f-mw8xc\" (UID: \"993fd772-2adc-4e57-8ccd-7bcc86928a21\") " pod="openshift-apiserver/apiserver-76f77b778f-mw8xc"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.507727 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-89mg9"]
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.507795 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ch5j5"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.507866 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/277886cf-d2d4-42e5-b2dc-253fd32648f8-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-pk8kj\" (UID: \"277886cf-d2d4-42e5-b2dc-253fd32648f8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pk8kj"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.507942 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fef35e25-bd51-4dd9-8d15-7ce38326982b-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-lr76c\" (UID: \"fef35e25-bd51-4dd9-8d15-7ce38326982b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lr76c"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.508132 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-8sbrm\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.508230 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/993fd772-2adc-4e57-8ccd-7bcc86928a21-node-pullsecrets\") pod \"apiserver-76f77b778f-mw8xc\" (UID: \"993fd772-2adc-4e57-8ccd-7bcc86928a21\") " pod="openshift-apiserver/apiserver-76f77b778f-mw8xc"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.508304 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/993fd772-2adc-4e57-8ccd-7bcc86928a21-audit-dir\") pod \"apiserver-76f77b778f-mw8xc\" (UID: \"993fd772-2adc-4e57-8ccd-7bcc86928a21\") " pod="openshift-apiserver/apiserver-76f77b778f-mw8xc"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.508373 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/15e6f097-ed23-4797-9506-8c95af1dd7f9-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-262n7\" (UID: \"15e6f097-ed23-4797-9506-8c95af1dd7f9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-262n7"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.508395 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29470380-nk5n7"]
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.509024 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29470380-nk5n7"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.509324 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-zkcs6"]
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.509757 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-8sbrm\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.509857 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cp8nw\" (UniqueName: \"kubernetes.io/projected/e3a22133-fac4-42ba-9967-974e82a855aa-kube-api-access-cp8nw\") pod \"route-controller-manager-6576b87f9c-z6r47\" (UID: \"e3a22133-fac4-42ba-9967-974e82a855aa\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z6r47"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.510014 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-zkcs6"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.509974 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45f8p\" (UniqueName: \"kubernetes.io/projected/bdbff407-68ae-456c-b67e-40d0e47fba7b-kube-api-access-45f8p\") pod \"machine-api-operator-5694c8668f-89mg9\" (UID: \"bdbff407-68ae-456c-b67e-40d0e47fba7b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-89mg9"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.510359 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5f3179c7-0610-4e19-91cd-9a84d32ac850-proxy-tls\") pod \"machine-config-operator-74547568cd-xntjp\" (UID: \"5f3179c7-0610-4e19-91cd-9a84d32ac850\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xntjp"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.510454 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2h9d4\" (UniqueName: \"kubernetes.io/projected/277886cf-d2d4-42e5-b2dc-253fd32648f8-kube-api-access-2h9d4\") pod \"apiserver-7bbb656c7d-pk8kj\" (UID: \"277886cf-d2d4-42e5-b2dc-253fd32648f8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pk8kj"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.510531 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4krnp\" (UniqueName: \"kubernetes.io/projected/0b75fc88-ca92-4fb9-826b-61322c929d1b-kube-api-access-4krnp\") pod \"console-operator-58897d9998-twpq4\" (UID: \"0b75fc88-ca92-4fb9-826b-61322c929d1b\") " pod="openshift-console-operator/console-operator-58897d9998-twpq4"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.510620 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-audit-policies\") pod \"oauth-openshift-558db77b4-8sbrm\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.510690 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-8sbrm\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.510756 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-8sbrm\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.510819 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15e6f097-ed23-4797-9506-8c95af1dd7f9-config\") pod \"kube-apiserver-operator-766d6c64bb-262n7\" (UID: \"15e6f097-ed23-4797-9506-8c95af1dd7f9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-262n7"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.510887 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-8sbrm\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.511437 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/257c071c-ccf5-4229-b3b0-65e5b59f5edb-available-featuregates\") pod \"openshift-config-operator-7777fb866f-vpzdt\" (UID: \"257c071c-ccf5-4229-b3b0-65e5b59f5edb\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vpzdt"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.511513 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb-serving-cert\") pod \"etcd-operator-b45778765-nzcxb\" (UID: \"fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nzcxb"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.511596 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdj9k\" (UniqueName: \"kubernetes.io/projected/fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb-kube-api-access-mdj9k\") pod \"etcd-operator-b45778765-nzcxb\" (UID: \"fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nzcxb"
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.511664 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fef35e25-bd51-4dd9-8d15-7ce38326982b-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-lr76c\" (UID: \"fef35e25-bd51-4dd9-8d15-7ce38326982b\") "
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lr76c" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.512260 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/277886cf-d2d4-42e5-b2dc-253fd32648f8-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-pk8kj\" (UID: \"277886cf-d2d4-42e5-b2dc-253fd32648f8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pk8kj" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.512375 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66fss\" (UniqueName: \"kubernetes.io/projected/fef35e25-bd51-4dd9-8d15-7ce38326982b-kube-api-access-66fss\") pod \"openshift-controller-manager-operator-756b6f6bc6-lr76c\" (UID: \"fef35e25-bd51-4dd9-8d15-7ce38326982b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lr76c" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.512445 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-audit-dir\") pod \"oauth-openshift-558db77b4-8sbrm\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.511271 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jxm6c"] Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.512642 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/92864954-4c11-4fae-a089-c8fc35ae755e-machine-approver-tls\") pod \"machine-approver-56656f9798-gz9sn\" (UID: 
\"92864954-4c11-4fae-a089-c8fc35ae755e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gz9sn" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.512686 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/993fd772-2adc-4e57-8ccd-7bcc86928a21-encryption-config\") pod \"apiserver-76f77b778f-mw8xc\" (UID: \"993fd772-2adc-4e57-8ccd-7bcc86928a21\") " pod="openshift-apiserver/apiserver-76f77b778f-mw8xc" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.512727 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5f3179c7-0610-4e19-91cd-9a84d32ac850-images\") pod \"machine-config-operator-74547568cd-xntjp\" (UID: \"5f3179c7-0610-4e19-91cd-9a84d32ac850\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xntjp" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.512744 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/993fd772-2adc-4e57-8ccd-7bcc86928a21-etcd-client\") pod \"apiserver-76f77b778f-mw8xc\" (UID: \"993fd772-2adc-4e57-8ccd-7bcc86928a21\") " pod="openshift-apiserver/apiserver-76f77b778f-mw8xc" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.513234 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jxm6c" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.514209 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-pk8kj"] Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.515129 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-mw8xc"] Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.515576 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-nzcxb"] Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.516855 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-twpq4"] Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.518288 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-262n7"] Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.519141 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-xntjp"] Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.519337 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.519925 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-zdvz7"] Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.521173 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-zdvz7" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.522483 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5b57w"] Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.525502 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8sbrm"] Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.527373 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-2hzdj"] Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.530342 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-9bm7b"] Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.530390 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cbltx"] Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.533284 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-vpzdt"] Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.535449 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-5tdwv"] Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.540149 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.543760 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qbtk4"] Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.545834 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hxkcl"] Jan 12 13:08:54 crc 
kubenswrapper[4580]: I0112 13:08:54.545867 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-klg87"] Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.546691 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-lfcct"] Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.548470 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jbnkd"] Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.548502 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hlckg"] Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.549377 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-snhpg"] Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.550821 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mn56v"] Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.551870 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qjnxc"] Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.552960 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jzjkt"] Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.553420 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-z6r47"] Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.554383 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-2zrh8"] Jan 12 13:08:54 crc 
kubenswrapper[4580]: I0112 13:08:54.555250 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-ch5j5"] Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.556404 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lr76c"] Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.557060 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-pq2bq"] Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.557869 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s8vg5"] Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.558739 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-pv5tk"] Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.559729 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-rs6cr"] Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.560394 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jxm6c"] Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.560677 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.561393 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-72pwr"] Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.562062 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-zkcs6"] Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.562935 4580 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29470380-nk5n7"] Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.563873 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-zdvz7"] Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.564991 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-z866m"] Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.565520 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-c7ntw"] Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.565797 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-z866m" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.566079 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-c7ntw" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.566344 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-z866m"] Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.567200 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-c7ntw"] Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.578774 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.598450 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.613265 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3a22133-fac4-42ba-9967-974e82a855aa-serving-cert\") pod \"route-controller-manager-6576b87f9c-z6r47\" (UID: 
\"e3a22133-fac4-42ba-9967-974e82a855aa\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z6r47" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.613298 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/bdbff407-68ae-456c-b67e-40d0e47fba7b-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-89mg9\" (UID: \"bdbff407-68ae-456c-b67e-40d0e47fba7b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-89mg9" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.613321 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/277886cf-d2d4-42e5-b2dc-253fd32648f8-etcd-client\") pod \"apiserver-7bbb656c7d-pk8kj\" (UID: \"277886cf-d2d4-42e5-b2dc-253fd32648f8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pk8kj" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.613341 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0b75fc88-ca92-4fb9-826b-61322c929d1b-trusted-ca\") pod \"console-operator-58897d9998-twpq4\" (UID: \"0b75fc88-ca92-4fb9-826b-61322c929d1b\") " pod="openshift-console-operator/console-operator-58897d9998-twpq4" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.613366 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/993fd772-2adc-4e57-8ccd-7bcc86928a21-etcd-serving-ca\") pod \"apiserver-76f77b778f-mw8xc\" (UID: \"993fd772-2adc-4e57-8ccd-7bcc86928a21\") " pod="openshift-apiserver/apiserver-76f77b778f-mw8xc" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.613397 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/993fd772-2adc-4e57-8ccd-7bcc86928a21-serving-cert\") pod \"apiserver-76f77b778f-mw8xc\" (UID: \"993fd772-2adc-4e57-8ccd-7bcc86928a21\") " pod="openshift-apiserver/apiserver-76f77b778f-mw8xc" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.613412 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/993fd772-2adc-4e57-8ccd-7bcc86928a21-image-import-ca\") pod \"apiserver-76f77b778f-mw8xc\" (UID: \"993fd772-2adc-4e57-8ccd-7bcc86928a21\") " pod="openshift-apiserver/apiserver-76f77b778f-mw8xc" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.613428 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0e19129d-499f-4d25-ad32-fd3dddb533f2-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-lfcct\" (UID: \"0e19129d-499f-4d25-ad32-fd3dddb533f2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lfcct" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.613443 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb-etcd-service-ca\") pod \"etcd-operator-b45778765-nzcxb\" (UID: \"fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nzcxb" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.613460 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e3a22133-fac4-42ba-9967-974e82a855aa-client-ca\") pod \"route-controller-manager-6576b87f9c-z6r47\" (UID: \"e3a22133-fac4-42ba-9967-974e82a855aa\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z6r47" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.613491 4580 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb-config\") pod \"etcd-operator-b45778765-nzcxb\" (UID: \"fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nzcxb" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.613506 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/277886cf-d2d4-42e5-b2dc-253fd32648f8-audit-policies\") pod \"apiserver-7bbb656c7d-pk8kj\" (UID: \"277886cf-d2d4-42e5-b2dc-253fd32648f8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pk8kj" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.613532 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ws24m\" (UniqueName: \"kubernetes.io/projected/12d94033-10bf-43ea-a4de-297df750ad35-kube-api-access-ws24m\") pod \"catalog-operator-68c6474976-mn56v\" (UID: \"12d94033-10bf-43ea-a4de-297df750ad35\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mn56v" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.613561 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92864954-4c11-4fae-a089-c8fc35ae755e-config\") pod \"machine-approver-56656f9798-gz9sn\" (UID: \"92864954-4c11-4fae-a089-c8fc35ae755e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gz9sn" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.613575 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2zrk\" (UniqueName: \"kubernetes.io/projected/993fd772-2adc-4e57-8ccd-7bcc86928a21-kube-api-access-g2zrk\") pod \"apiserver-76f77b778f-mw8xc\" (UID: \"993fd772-2adc-4e57-8ccd-7bcc86928a21\") " pod="openshift-apiserver/apiserver-76f77b778f-mw8xc" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 
13:08:54.613594 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/277886cf-d2d4-42e5-b2dc-253fd32648f8-encryption-config\") pod \"apiserver-7bbb656c7d-pk8kj\" (UID: \"277886cf-d2d4-42e5-b2dc-253fd32648f8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pk8kj" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.613609 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb-etcd-ca\") pod \"etcd-operator-b45778765-nzcxb\" (UID: \"fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nzcxb" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.613627 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-8sbrm\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.613648 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b75fc88-ca92-4fb9-826b-61322c929d1b-config\") pod \"console-operator-58897d9998-twpq4\" (UID: \"0b75fc88-ca92-4fb9-826b-61322c929d1b\") " pod="openshift-console-operator/console-operator-58897d9998-twpq4" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.613667 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/12d94033-10bf-43ea-a4de-297df750ad35-profile-collector-cert\") pod \"catalog-operator-68c6474976-mn56v\" (UID: \"12d94033-10bf-43ea-a4de-297df750ad35\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mn56v" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.613686 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bdbff407-68ae-456c-b67e-40d0e47fba7b-images\") pod \"machine-api-operator-5694c8668f-89mg9\" (UID: \"bdbff407-68ae-456c-b67e-40d0e47fba7b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-89mg9" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.613703 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/92864954-4c11-4fae-a089-c8fc35ae755e-auth-proxy-config\") pod \"machine-approver-56656f9798-gz9sn\" (UID: \"92864954-4c11-4fae-a089-c8fc35ae755e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gz9sn" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.613718 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/993fd772-2adc-4e57-8ccd-7bcc86928a21-config\") pod \"apiserver-76f77b778f-mw8xc\" (UID: \"993fd772-2adc-4e57-8ccd-7bcc86928a21\") " pod="openshift-apiserver/apiserver-76f77b778f-mw8xc" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.613733 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/277886cf-d2d4-42e5-b2dc-253fd32648f8-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-pk8kj\" (UID: \"277886cf-d2d4-42e5-b2dc-253fd32648f8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pk8kj" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.613751 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/993fd772-2adc-4e57-8ccd-7bcc86928a21-audit-dir\") pod \"apiserver-76f77b778f-mw8xc\" (UID: 
\"993fd772-2adc-4e57-8ccd-7bcc86928a21\") " pod="openshift-apiserver/apiserver-76f77b778f-mw8xc" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.613766 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/15e6f097-ed23-4797-9506-8c95af1dd7f9-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-262n7\" (UID: \"15e6f097-ed23-4797-9506-8c95af1dd7f9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-262n7" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.613786 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fef35e25-bd51-4dd9-8d15-7ce38326982b-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-lr76c\" (UID: \"fef35e25-bd51-4dd9-8d15-7ce38326982b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lr76c" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.613805 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-8sbrm\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.613821 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/993fd772-2adc-4e57-8ccd-7bcc86928a21-node-pullsecrets\") pod \"apiserver-76f77b778f-mw8xc\" (UID: \"993fd772-2adc-4e57-8ccd-7bcc86928a21\") " pod="openshift-apiserver/apiserver-76f77b778f-mw8xc" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.613839 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" 
(UniqueName: \"kubernetes.io/secret/5f3179c7-0610-4e19-91cd-9a84d32ac850-proxy-tls\") pod \"machine-config-operator-74547568cd-xntjp\" (UID: \"5f3179c7-0610-4e19-91cd-9a84d32ac850\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xntjp" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.613867 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2h9d4\" (UniqueName: \"kubernetes.io/projected/277886cf-d2d4-42e5-b2dc-253fd32648f8-kube-api-access-2h9d4\") pod \"apiserver-7bbb656c7d-pk8kj\" (UID: \"277886cf-d2d4-42e5-b2dc-253fd32648f8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pk8kj" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.613884 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-8sbrm\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.613902 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cp8nw\" (UniqueName: \"kubernetes.io/projected/e3a22133-fac4-42ba-9967-974e82a855aa-kube-api-access-cp8nw\") pod \"route-controller-manager-6576b87f9c-z6r47\" (UID: \"e3a22133-fac4-42ba-9967-974e82a855aa\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z6r47" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.613919 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45f8p\" (UniqueName: \"kubernetes.io/projected/bdbff407-68ae-456c-b67e-40d0e47fba7b-kube-api-access-45f8p\") pod \"machine-api-operator-5694c8668f-89mg9\" (UID: \"bdbff407-68ae-456c-b67e-40d0e47fba7b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-89mg9" 
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.613952 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15e6f097-ed23-4797-9506-8c95af1dd7f9-config\") pod \"kube-apiserver-operator-766d6c64bb-262n7\" (UID: \"15e6f097-ed23-4797-9506-8c95af1dd7f9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-262n7" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.613969 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4krnp\" (UniqueName: \"kubernetes.io/projected/0b75fc88-ca92-4fb9-826b-61322c929d1b-kube-api-access-4krnp\") pod \"console-operator-58897d9998-twpq4\" (UID: \"0b75fc88-ca92-4fb9-826b-61322c929d1b\") " pod="openshift-console-operator/console-operator-58897d9998-twpq4" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.613985 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-audit-policies\") pod \"oauth-openshift-558db77b4-8sbrm\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.614001 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-8sbrm\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.614017 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-8sbrm\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.614033 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fef35e25-bd51-4dd9-8d15-7ce38326982b-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-lr76c\" (UID: \"fef35e25-bd51-4dd9-8d15-7ce38326982b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lr76c" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.614054 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-8sbrm\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.614071 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/257c071c-ccf5-4229-b3b0-65e5b59f5edb-available-featuregates\") pod \"openshift-config-operator-7777fb866f-vpzdt\" (UID: \"257c071c-ccf5-4229-b3b0-65e5b59f5edb\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vpzdt" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.614087 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb-serving-cert\") pod \"etcd-operator-b45778765-nzcxb\" (UID: \"fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-nzcxb" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.614115 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdj9k\" (UniqueName: \"kubernetes.io/projected/fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb-kube-api-access-mdj9k\") pod \"etcd-operator-b45778765-nzcxb\" (UID: \"fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nzcxb" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.614132 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/277886cf-d2d4-42e5-b2dc-253fd32648f8-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-pk8kj\" (UID: \"277886cf-d2d4-42e5-b2dc-253fd32648f8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pk8kj" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.614147 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-audit-dir\") pod \"oauth-openshift-558db77b4-8sbrm\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.614162 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/92864954-4c11-4fae-a089-c8fc35ae755e-machine-approver-tls\") pod \"machine-approver-56656f9798-gz9sn\" (UID: \"92864954-4c11-4fae-a089-c8fc35ae755e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gz9sn" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.614179 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66fss\" (UniqueName: \"kubernetes.io/projected/fef35e25-bd51-4dd9-8d15-7ce38326982b-kube-api-access-66fss\") pod 
\"openshift-controller-manager-operator-756b6f6bc6-lr76c\" (UID: \"fef35e25-bd51-4dd9-8d15-7ce38326982b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lr76c" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.614198 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5f3179c7-0610-4e19-91cd-9a84d32ac850-images\") pod \"machine-config-operator-74547568cd-xntjp\" (UID: \"5f3179c7-0610-4e19-91cd-9a84d32ac850\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xntjp" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.614212 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/993fd772-2adc-4e57-8ccd-7bcc86928a21-etcd-client\") pod \"apiserver-76f77b778f-mw8xc\" (UID: \"993fd772-2adc-4e57-8ccd-7bcc86928a21\") " pod="openshift-apiserver/apiserver-76f77b778f-mw8xc" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.614229 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/993fd772-2adc-4e57-8ccd-7bcc86928a21-encryption-config\") pod \"apiserver-76f77b778f-mw8xc\" (UID: \"993fd772-2adc-4e57-8ccd-7bcc86928a21\") " pod="openshift-apiserver/apiserver-76f77b778f-mw8xc" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.614255 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/12d94033-10bf-43ea-a4de-297df750ad35-srv-cert\") pod \"catalog-operator-68c6474976-mn56v\" (UID: \"12d94033-10bf-43ea-a4de-297df750ad35\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mn56v" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.614272 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb-etcd-client\") pod \"etcd-operator-b45778765-nzcxb\" (UID: \"fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nzcxb" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.614290 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0e19129d-499f-4d25-ad32-fd3dddb533f2-proxy-tls\") pod \"machine-config-controller-84d6567774-lfcct\" (UID: \"0e19129d-499f-4d25-ad32-fd3dddb533f2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lfcct" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.614309 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhb62\" (UniqueName: \"kubernetes.io/projected/5f3179c7-0610-4e19-91cd-9a84d32ac850-kube-api-access-xhb62\") pod \"machine-config-operator-74547568cd-xntjp\" (UID: \"5f3179c7-0610-4e19-91cd-9a84d32ac850\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xntjp" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.614329 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3cd49599-ac6f-4d9f-9d86-2f6ff90ddbf9-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-jbnkd\" (UID: \"3cd49599-ac6f-4d9f-9d86-2f6ff90ddbf9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jbnkd" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.614347 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/277886cf-d2d4-42e5-b2dc-253fd32648f8-serving-cert\") pod \"apiserver-7bbb656c7d-pk8kj\" (UID: \"277886cf-d2d4-42e5-b2dc-253fd32648f8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pk8kj" Jan 12 
13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.614364 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-8sbrm\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.614382 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mplg8\" (UniqueName: \"kubernetes.io/projected/92864954-4c11-4fae-a089-c8fc35ae755e-kube-api-access-mplg8\") pod \"machine-approver-56656f9798-gz9sn\" (UID: \"92864954-4c11-4fae-a089-c8fc35ae755e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gz9sn" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.614398 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-8sbrm\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.614416 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-8sbrm\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.614433 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msqlv\" (UniqueName: 
\"kubernetes.io/projected/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-kube-api-access-msqlv\") pod \"oauth-openshift-558db77b4-8sbrm\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.614450 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/277886cf-d2d4-42e5-b2dc-253fd32648f8-audit-dir\") pod \"apiserver-7bbb656c7d-pk8kj\" (UID: \"277886cf-d2d4-42e5-b2dc-253fd32648f8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pk8kj" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.614468 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15e6f097-ed23-4797-9506-8c95af1dd7f9-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-262n7\" (UID: \"15e6f097-ed23-4797-9506-8c95af1dd7f9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-262n7" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.614486 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8pvn\" (UniqueName: \"kubernetes.io/projected/0e19129d-499f-4d25-ad32-fd3dddb533f2-kube-api-access-r8pvn\") pod \"machine-config-controller-84d6567774-lfcct\" (UID: \"0e19129d-499f-4d25-ad32-fd3dddb533f2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lfcct" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.614504 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-8sbrm\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm" Jan 12 13:08:54 crc kubenswrapper[4580]: 
I0112 13:08:54.614522 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3a22133-fac4-42ba-9967-974e82a855aa-config\") pod \"route-controller-manager-6576b87f9c-z6r47\" (UID: \"e3a22133-fac4-42ba-9967-974e82a855aa\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z6r47" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.614547 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6h42q\" (UniqueName: \"kubernetes.io/projected/3cd49599-ac6f-4d9f-9d86-2f6ff90ddbf9-kube-api-access-6h42q\") pod \"control-plane-machine-set-operator-78cbb6b69f-jbnkd\" (UID: \"3cd49599-ac6f-4d9f-9d86-2f6ff90ddbf9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jbnkd" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.614563 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/257c071c-ccf5-4229-b3b0-65e5b59f5edb-serving-cert\") pod \"openshift-config-operator-7777fb866f-vpzdt\" (UID: \"257c071c-ccf5-4229-b3b0-65e5b59f5edb\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vpzdt" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.614583 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b75fc88-ca92-4fb9-826b-61322c929d1b-serving-cert\") pod \"console-operator-58897d9998-twpq4\" (UID: \"0b75fc88-ca92-4fb9-826b-61322c929d1b\") " pod="openshift-console-operator/console-operator-58897d9998-twpq4" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.614600 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/993fd772-2adc-4e57-8ccd-7bcc86928a21-audit\") pod \"apiserver-76f77b778f-mw8xc\" (UID: 
\"993fd772-2adc-4e57-8ccd-7bcc86928a21\") " pod="openshift-apiserver/apiserver-76f77b778f-mw8xc" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.614616 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/993fd772-2adc-4e57-8ccd-7bcc86928a21-trusted-ca-bundle\") pod \"apiserver-76f77b778f-mw8xc\" (UID: \"993fd772-2adc-4e57-8ccd-7bcc86928a21\") " pod="openshift-apiserver/apiserver-76f77b778f-mw8xc" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.614638 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-8sbrm\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.614654 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdbff407-68ae-456c-b67e-40d0e47fba7b-config\") pod \"machine-api-operator-5694c8668f-89mg9\" (UID: \"bdbff407-68ae-456c-b67e-40d0e47fba7b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-89mg9" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.614671 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5f3179c7-0610-4e19-91cd-9a84d32ac850-auth-proxy-config\") pod \"machine-config-operator-74547568cd-xntjp\" (UID: \"5f3179c7-0610-4e19-91cd-9a84d32ac850\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xntjp" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.614690 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mvq8\" (UniqueName: 
\"kubernetes.io/projected/257c071c-ccf5-4229-b3b0-65e5b59f5edb-kube-api-access-7mvq8\") pod \"openshift-config-operator-7777fb866f-vpzdt\" (UID: \"257c071c-ccf5-4229-b3b0-65e5b59f5edb\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vpzdt" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.616456 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bdbff407-68ae-456c-b67e-40d0e47fba7b-images\") pod \"machine-api-operator-5694c8668f-89mg9\" (UID: \"bdbff407-68ae-456c-b67e-40d0e47fba7b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-89mg9" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.617142 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/92864954-4c11-4fae-a089-c8fc35ae755e-auth-proxy-config\") pod \"machine-approver-56656f9798-gz9sn\" (UID: \"92864954-4c11-4fae-a089-c8fc35ae755e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gz9sn" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.618033 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/993fd772-2adc-4e57-8ccd-7bcc86928a21-config\") pod \"apiserver-76f77b778f-mw8xc\" (UID: \"993fd772-2adc-4e57-8ccd-7bcc86928a21\") " pod="openshift-apiserver/apiserver-76f77b778f-mw8xc" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.618163 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92864954-4c11-4fae-a089-c8fc35ae755e-config\") pod \"machine-approver-56656f9798-gz9sn\" (UID: \"92864954-4c11-4fae-a089-c8fc35ae755e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gz9sn" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.618501 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/277886cf-d2d4-42e5-b2dc-253fd32648f8-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-pk8kj\" (UID: \"277886cf-d2d4-42e5-b2dc-253fd32648f8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pk8kj" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.618692 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/993fd772-2adc-4e57-8ccd-7bcc86928a21-etcd-client\") pod \"apiserver-76f77b778f-mw8xc\" (UID: \"993fd772-2adc-4e57-8ccd-7bcc86928a21\") " pod="openshift-apiserver/apiserver-76f77b778f-mw8xc" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.619187 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/12d94033-10bf-43ea-a4de-297df750ad35-profile-collector-cert\") pod \"catalog-operator-68c6474976-mn56v\" (UID: \"12d94033-10bf-43ea-a4de-297df750ad35\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mn56v" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.619329 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/277886cf-d2d4-42e5-b2dc-253fd32648f8-audit-policies\") pod \"apiserver-7bbb656c7d-pk8kj\" (UID: \"277886cf-d2d4-42e5-b2dc-253fd32648f8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pk8kj" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.619438 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e3a22133-fac4-42ba-9967-974e82a855aa-client-ca\") pod \"route-controller-manager-6576b87f9c-z6r47\" (UID: \"e3a22133-fac4-42ba-9967-974e82a855aa\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z6r47" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.619660 4580 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fef35e25-bd51-4dd9-8d15-7ce38326982b-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-lr76c\" (UID: \"fef35e25-bd51-4dd9-8d15-7ce38326982b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lr76c" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.619737 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/993fd772-2adc-4e57-8ccd-7bcc86928a21-audit\") pod \"apiserver-76f77b778f-mw8xc\" (UID: \"993fd772-2adc-4e57-8ccd-7bcc86928a21\") " pod="openshift-apiserver/apiserver-76f77b778f-mw8xc" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.619663 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3a22133-fac4-42ba-9967-974e82a855aa-serving-cert\") pod \"route-controller-manager-6576b87f9c-z6r47\" (UID: \"e3a22133-fac4-42ba-9967-974e82a855aa\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z6r47" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.617164 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0e19129d-499f-4d25-ad32-fd3dddb533f2-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-lfcct\" (UID: \"0e19129d-499f-4d25-ad32-fd3dddb533f2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lfcct" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.620190 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/277886cf-d2d4-42e5-b2dc-253fd32648f8-encryption-config\") pod \"apiserver-7bbb656c7d-pk8kj\" (UID: \"277886cf-d2d4-42e5-b2dc-253fd32648f8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pk8kj" Jan 12 13:08:54 crc 
kubenswrapper[4580]: I0112 13:08:54.620200 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-audit-dir\") pod \"oauth-openshift-558db77b4-8sbrm\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.620283 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/277886cf-d2d4-42e5-b2dc-253fd32648f8-audit-dir\") pod \"apiserver-7bbb656c7d-pk8kj\" (UID: \"277886cf-d2d4-42e5-b2dc-253fd32648f8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pk8kj" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.620347 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/993fd772-2adc-4e57-8ccd-7bcc86928a21-trusted-ca-bundle\") pod \"apiserver-76f77b778f-mw8xc\" (UID: \"993fd772-2adc-4e57-8ccd-7bcc86928a21\") " pod="openshift-apiserver/apiserver-76f77b778f-mw8xc" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.620463 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/993fd772-2adc-4e57-8ccd-7bcc86928a21-etcd-serving-ca\") pod \"apiserver-76f77b778f-mw8xc\" (UID: \"993fd772-2adc-4e57-8ccd-7bcc86928a21\") " pod="openshift-apiserver/apiserver-76f77b778f-mw8xc" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.618532 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/993fd772-2adc-4e57-8ccd-7bcc86928a21-audit-dir\") pod \"apiserver-76f77b778f-mw8xc\" (UID: \"993fd772-2adc-4e57-8ccd-7bcc86928a21\") " pod="openshift-apiserver/apiserver-76f77b778f-mw8xc" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.620809 4580 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/993fd772-2adc-4e57-8ccd-7bcc86928a21-node-pullsecrets\") pod \"apiserver-76f77b778f-mw8xc\" (UID: \"993fd772-2adc-4e57-8ccd-7bcc86928a21\") " pod="openshift-apiserver/apiserver-76f77b778f-mw8xc" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.621091 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdbff407-68ae-456c-b67e-40d0e47fba7b-config\") pod \"machine-api-operator-5694c8668f-89mg9\" (UID: \"bdbff407-68ae-456c-b67e-40d0e47fba7b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-89mg9" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.621426 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/277886cf-d2d4-42e5-b2dc-253fd32648f8-serving-cert\") pod \"apiserver-7bbb656c7d-pk8kj\" (UID: \"277886cf-d2d4-42e5-b2dc-253fd32648f8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pk8kj" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.621429 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3a22133-fac4-42ba-9967-974e82a855aa-config\") pod \"route-controller-manager-6576b87f9c-z6r47\" (UID: \"e3a22133-fac4-42ba-9967-974e82a855aa\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z6r47" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.621558 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/993fd772-2adc-4e57-8ccd-7bcc86928a21-encryption-config\") pod \"apiserver-76f77b778f-mw8xc\" (UID: \"993fd772-2adc-4e57-8ccd-7bcc86928a21\") " pod="openshift-apiserver/apiserver-76f77b778f-mw8xc" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.621597 4580 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5f3179c7-0610-4e19-91cd-9a84d32ac850-auth-proxy-config\") pod \"machine-config-operator-74547568cd-xntjp\" (UID: \"5f3179c7-0610-4e19-91cd-9a84d32ac850\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xntjp" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.621898 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/257c071c-ccf5-4229-b3b0-65e5b59f5edb-available-featuregates\") pod \"openshift-config-operator-7777fb866f-vpzdt\" (UID: \"257c071c-ccf5-4229-b3b0-65e5b59f5edb\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vpzdt" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.622214 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/277886cf-d2d4-42e5-b2dc-253fd32648f8-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-pk8kj\" (UID: \"277886cf-d2d4-42e5-b2dc-253fd32648f8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pk8kj" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.622364 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/993fd772-2adc-4e57-8ccd-7bcc86928a21-image-import-ca\") pod \"apiserver-76f77b778f-mw8xc\" (UID: \"993fd772-2adc-4e57-8ccd-7bcc86928a21\") " pod="openshift-apiserver/apiserver-76f77b778f-mw8xc" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.623194 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/12d94033-10bf-43ea-a4de-297df750ad35-srv-cert\") pod \"catalog-operator-68c6474976-mn56v\" (UID: \"12d94033-10bf-43ea-a4de-297df750ad35\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mn56v" 
Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.623326 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/277886cf-d2d4-42e5-b2dc-253fd32648f8-etcd-client\") pod \"apiserver-7bbb656c7d-pk8kj\" (UID: \"277886cf-d2d4-42e5-b2dc-253fd32648f8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pk8kj" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.623579 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fef35e25-bd51-4dd9-8d15-7ce38326982b-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-lr76c\" (UID: \"fef35e25-bd51-4dd9-8d15-7ce38326982b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lr76c" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.623707 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/bdbff407-68ae-456c-b67e-40d0e47fba7b-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-89mg9\" (UID: \"bdbff407-68ae-456c-b67e-40d0e47fba7b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-89mg9" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.623973 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3cd49599-ac6f-4d9f-9d86-2f6ff90ddbf9-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-jbnkd\" (UID: \"3cd49599-ac6f-4d9f-9d86-2f6ff90ddbf9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jbnkd" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.624477 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/92864954-4c11-4fae-a089-c8fc35ae755e-machine-approver-tls\") pod \"machine-approver-56656f9798-gz9sn\" (UID: \"92864954-4c11-4fae-a089-c8fc35ae755e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gz9sn" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.624530 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0e19129d-499f-4d25-ad32-fd3dddb533f2-proxy-tls\") pod \"machine-config-controller-84d6567774-lfcct\" (UID: \"0e19129d-499f-4d25-ad32-fd3dddb533f2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lfcct" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.625015 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/993fd772-2adc-4e57-8ccd-7bcc86928a21-serving-cert\") pod \"apiserver-76f77b778f-mw8xc\" (UID: \"993fd772-2adc-4e57-8ccd-7bcc86928a21\") " pod="openshift-apiserver/apiserver-76f77b778f-mw8xc" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.625979 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/257c071c-ccf5-4229-b3b0-65e5b59f5edb-serving-cert\") pod \"openshift-config-operator-7777fb866f-vpzdt\" (UID: \"257c071c-ccf5-4229-b3b0-65e5b59f5edb\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vpzdt" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.639509 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.642979 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-rpc5j"] Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.643718 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-rpc5j" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.658760 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.678313 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.698689 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.719558 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.739429 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.758451 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.778415 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.798380 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.819046 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.845457 4580 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-controller-manager"/"openshift-global-ca" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.858787 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.878849 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.898613 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.918639 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.939074 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.958726 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 12 13:08:54 crc kubenswrapper[4580]: I0112 13:08:54.978864 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 12 13:08:55 crc kubenswrapper[4580]: I0112 13:08:55.002782 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 12 13:08:55 crc kubenswrapper[4580]: I0112 13:08:55.018571 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 12 13:08:55 crc kubenswrapper[4580]: I0112 13:08:55.038767 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 12 13:08:55 crc kubenswrapper[4580]: I0112 13:08:55.058353 4580 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 12 13:08:55 crc kubenswrapper[4580]: I0112 13:08:55.078760 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 12 13:08:55 crc kubenswrapper[4580]: I0112 13:08:55.098220 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 12 13:08:55 crc kubenswrapper[4580]: I0112 13:08:55.118663 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 12 13:08:55 crc kubenswrapper[4580]: I0112 13:08:55.138778 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 12 13:08:55 crc kubenswrapper[4580]: I0112 13:08:55.159226 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 12 13:08:55 crc kubenswrapper[4580]: I0112 13:08:55.178726 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 12 13:08:55 crc kubenswrapper[4580]: I0112 13:08:55.198634 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 12 13:08:55 crc kubenswrapper[4580]: I0112 13:08:55.218692 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 12 13:08:55 crc kubenswrapper[4580]: I0112 13:08:55.238662 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 12 13:08:55 crc kubenswrapper[4580]: I0112 13:08:55.258951 4580 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress-operator"/"metrics-tls" Jan 12 13:08:55 crc kubenswrapper[4580]: I0112 13:08:55.283478 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 12 13:08:55 crc kubenswrapper[4580]: I0112 13:08:55.298343 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 12 13:08:55 crc kubenswrapper[4580]: I0112 13:08:55.318621 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 12 13:08:55 crc kubenswrapper[4580]: I0112 13:08:55.338581 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 12 13:08:55 crc kubenswrapper[4580]: I0112 13:08:55.359030 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 12 13:08:55 crc kubenswrapper[4580]: I0112 13:08:55.378500 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 12 13:08:55 crc kubenswrapper[4580]: I0112 13:08:55.403415 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 12 13:08:55 crc kubenswrapper[4580]: I0112 13:08:55.418640 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 12 13:08:55 crc kubenswrapper[4580]: I0112 13:08:55.438760 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 12 13:08:55 crc kubenswrapper[4580]: I0112 13:08:55.458928 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 12 13:08:55 crc kubenswrapper[4580]: 
I0112 13:08:55.479669 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 12 13:08:55 crc kubenswrapper[4580]: I0112 13:08:55.498818 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 12 13:08:55 crc kubenswrapper[4580]: I0112 13:08:55.517186 4580 request.go:700] Waited for 1.018420038s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Jan 12 13:08:55 crc kubenswrapper[4580]: I0112 13:08:55.518010 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 12 13:08:55 crc kubenswrapper[4580]: I0112 13:08:55.538192 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 12 13:08:55 crc kubenswrapper[4580]: I0112 13:08:55.558503 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 12 13:08:55 crc kubenswrapper[4580]: I0112 13:08:55.578400 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 12 13:08:55 crc kubenswrapper[4580]: I0112 13:08:55.598925 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 12 13:08:55 crc kubenswrapper[4580]: E0112 13:08:55.615333 4580 configmap.go:193] Couldn't get configMap openshift-console-operator/console-operator-config: failed to sync configmap cache: timed out waiting for the condition Jan 12 13:08:55 crc 
kubenswrapper[4580]: E0112 13:08:55.615479 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b75fc88-ca92-4fb9-826b-61322c929d1b-config podName:0b75fc88-ca92-4fb9-826b-61322c929d1b nodeName:}" failed. No retries permitted until 2026-01-12 13:08:56.115460621 +0000 UTC m=+135.159679311 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0b75fc88-ca92-4fb9-826b-61322c929d1b-config") pod "console-operator-58897d9998-twpq4" (UID: "0b75fc88-ca92-4fb9-826b-61322c929d1b") : failed to sync configmap cache: timed out waiting for the condition Jan 12 13:08:55 crc kubenswrapper[4580]: E0112 13:08:55.615351 4580 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jan 12 13:08:55 crc kubenswrapper[4580]: E0112 13:08:55.615649 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb-etcd-ca podName:fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb nodeName:}" failed. No retries permitted until 2026-01-12 13:08:56.115640018 +0000 UTC m=+135.159858707 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb-etcd-ca") pod "etcd-operator-b45778765-nzcxb" (UID: "fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb") : failed to sync configmap cache: timed out waiting for the condition Jan 12 13:08:55 crc kubenswrapper[4580]: E0112 13:08:55.615371 4580 secret.go:188] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: failed to sync secret cache: timed out waiting for the condition Jan 12 13:08:55 crc kubenswrapper[4580]: E0112 13:08:55.615789 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-system-ocp-branding-template podName:7db5f72b-6a3e-4a3d-96bd-3e10756b605c nodeName:}" failed. No retries permitted until 2026-01-12 13:08:56.115778517 +0000 UTC m=+135.159997206 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-558db77b4-8sbrm" (UID: "7db5f72b-6a3e-4a3d-96bd-3e10756b605c") : failed to sync secret cache: timed out waiting for the condition Jan 12 13:08:55 crc kubenswrapper[4580]: E0112 13:08:55.615681 4580 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: failed to sync configmap cache: timed out waiting for the condition Jan 12 13:08:55 crc kubenswrapper[4580]: E0112 13:08:55.615950 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5f3179c7-0610-4e19-91cd-9a84d32ac850-images podName:5f3179c7-0610-4e19-91cd-9a84d32ac850 nodeName:}" failed. No retries permitted until 2026-01-12 13:08:56.115942003 +0000 UTC m=+135.160160693 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/5f3179c7-0610-4e19-91cd-9a84d32ac850-images") pod "machine-config-operator-74547568cd-xntjp" (UID: "5f3179c7-0610-4e19-91cd-9a84d32ac850") : failed to sync configmap cache: timed out waiting for the condition Jan 12 13:08:55 crc kubenswrapper[4580]: I0112 13:08:55.618616 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 12 13:08:55 crc kubenswrapper[4580]: E0112 13:08:55.619403 4580 secret.go:188] Couldn't get secret openshift-console-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 12 13:08:55 crc kubenswrapper[4580]: E0112 13:08:55.619529 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b75fc88-ca92-4fb9-826b-61322c929d1b-serving-cert podName:0b75fc88-ca92-4fb9-826b-61322c929d1b nodeName:}" failed. No retries permitted until 2026-01-12 13:08:56.11951843 +0000 UTC m=+135.163737120 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b75fc88-ca92-4fb9-826b-61322c929d1b-serving-cert") pod "console-operator-58897d9998-twpq4" (UID: "0b75fc88-ca92-4fb9-826b-61322c929d1b") : failed to sync secret cache: timed out waiting for the condition Jan 12 13:08:55 crc kubenswrapper[4580]: E0112 13:08:55.619586 4580 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jan 12 13:08:55 crc kubenswrapper[4580]: E0112 13:08:55.619605 4580 secret.go:188] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: failed to sync secret cache: timed out waiting for the condition Jan 12 13:08:55 crc kubenswrapper[4580]: E0112 13:08:55.619629 4580 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: failed to sync configmap cache: timed out waiting for the condition Jan 12 13:08:55 crc kubenswrapper[4580]: E0112 13:08:55.619693 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb-etcd-service-ca podName:fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb nodeName:}" failed. No retries permitted until 2026-01-12 13:08:56.119682757 +0000 UTC m=+135.163901447 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb-etcd-service-ca") pod "etcd-operator-b45778765-nzcxb" (UID: "fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb") : failed to sync configmap cache: timed out waiting for the condition Jan 12 13:08:55 crc kubenswrapper[4580]: E0112 13:08:55.619774 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-user-template-error podName:7db5f72b-6a3e-4a3d-96bd-3e10756b605c nodeName:}" failed. 
No retries permitted until 2026-01-12 13:08:56.119755954 +0000 UTC m=+135.163974644 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-user-template-error") pod "oauth-openshift-558db77b4-8sbrm" (UID: "7db5f72b-6a3e-4a3d-96bd-3e10756b605c") : failed to sync secret cache: timed out waiting for the condition Jan 12 13:08:55 crc kubenswrapper[4580]: E0112 13:08:55.619793 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb-config podName:fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb nodeName:}" failed. No retries permitted until 2026-01-12 13:08:56.11978592 +0000 UTC m=+135.164004611 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb-config") pod "etcd-operator-b45778765-nzcxb" (UID: "fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb") : failed to sync configmap cache: timed out waiting for the condition Jan 12 13:08:55 crc kubenswrapper[4580]: E0112 13:08:55.619818 4580 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jan 12 13:08:55 crc kubenswrapper[4580]: E0112 13:08:55.619830 4580 configmap.go:193] Couldn't get configMap openshift-authentication/audit: failed to sync configmap cache: timed out waiting for the condition Jan 12 13:08:55 crc kubenswrapper[4580]: E0112 13:08:55.619843 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-system-trusted-ca-bundle podName:7db5f72b-6a3e-4a3d-96bd-3e10756b605c nodeName:}" failed. No retries permitted until 2026-01-12 13:08:56.119836886 +0000 UTC m=+135.164055577 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-558db77b4-8sbrm" (UID: "7db5f72b-6a3e-4a3d-96bd-3e10756b605c") : failed to sync configmap cache: timed out waiting for the condition Jan 12 13:08:55 crc kubenswrapper[4580]: E0112 13:08:55.619873 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-audit-policies podName:7db5f72b-6a3e-4a3d-96bd-3e10756b605c nodeName:}" failed. No retries permitted until 2026-01-12 13:08:56.119861773 +0000 UTC m=+135.164080463 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-audit-policies") pod "oauth-openshift-558db77b4-8sbrm" (UID: "7db5f72b-6a3e-4a3d-96bd-3e10756b605c") : failed to sync configmap cache: timed out waiting for the condition Jan 12 13:08:55 crc kubenswrapper[4580]: E0112 13:08:55.620037 4580 secret.go:188] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 12 13:08:55 crc kubenswrapper[4580]: E0112 13:08:55.620078 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb-serving-cert podName:fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb nodeName:}" failed. No retries permitted until 2026-01-12 13:08:56.120070443 +0000 UTC m=+135.164289133 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb-serving-cert") pod "etcd-operator-b45778765-nzcxb" (UID: "fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb") : failed to sync secret cache: timed out waiting for the condition Jan 12 13:08:55 crc kubenswrapper[4580]: E0112 13:08:55.620518 4580 secret.go:188] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: failed to sync secret cache: timed out waiting for the condition Jan 12 13:08:55 crc kubenswrapper[4580]: E0112 13:08:55.620550 4580 secret.go:188] Couldn't get secret openshift-etcd-operator/etcd-client: failed to sync secret cache: timed out waiting for the condition Jan 12 13:08:55 crc kubenswrapper[4580]: E0112 13:08:55.620554 4580 secret.go:188] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: failed to sync secret cache: timed out waiting for the condition Jan 12 13:08:55 crc kubenswrapper[4580]: E0112 13:08:55.620582 4580 secret.go:188] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 12 13:08:55 crc kubenswrapper[4580]: E0112 13:08:55.620562 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-user-template-provider-selection podName:7db5f72b-6a3e-4a3d-96bd-3e10756b605c nodeName:}" failed. No retries permitted until 2026-01-12 13:08:56.120554269 +0000 UTC m=+135.164772960 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-user-template-provider-selection") pod "oauth-openshift-558db77b4-8sbrm" (UID: "7db5f72b-6a3e-4a3d-96bd-3e10756b605c") : failed to sync secret cache: timed out waiting for the condition Jan 12 13:08:55 crc kubenswrapper[4580]: E0112 13:08:55.620511 4580 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: failed to sync configmap cache: timed out waiting for the condition Jan 12 13:08:55 crc kubenswrapper[4580]: E0112 13:08:55.620608 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/15e6f097-ed23-4797-9506-8c95af1dd7f9-serving-cert podName:15e6f097-ed23-4797-9506-8c95af1dd7f9 nodeName:}" failed. No retries permitted until 2026-01-12 13:08:56.120600466 +0000 UTC m=+135.164819156 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/15e6f097-ed23-4797-9506-8c95af1dd7f9-serving-cert") pod "kube-apiserver-operator-766d6c64bb-262n7" (UID: "15e6f097-ed23-4797-9506-8c95af1dd7f9") : failed to sync secret cache: timed out waiting for the condition Jan 12 13:08:55 crc kubenswrapper[4580]: E0112 13:08:55.620623 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb-etcd-client podName:fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb nodeName:}" failed. No retries permitted until 2026-01-12 13:08:56.120617308 +0000 UTC m=+135.164835998 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb-etcd-client") pod "etcd-operator-b45778765-nzcxb" (UID: "fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb") : failed to sync secret cache: timed out waiting for the condition Jan 12 13:08:55 crc kubenswrapper[4580]: E0112 13:08:55.620636 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-user-template-login podName:7db5f72b-6a3e-4a3d-96bd-3e10756b605c nodeName:}" failed. No retries permitted until 2026-01-12 13:08:56.120630632 +0000 UTC m=+135.164849322 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-user-template-login") pod "oauth-openshift-558db77b4-8sbrm" (UID: "7db5f72b-6a3e-4a3d-96bd-3e10756b605c") : failed to sync secret cache: timed out waiting for the condition Jan 12 13:08:55 crc kubenswrapper[4580]: E0112 13:08:55.620646 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-system-service-ca podName:7db5f72b-6a3e-4a3d-96bd-3e10756b605c nodeName:}" failed. No retries permitted until 2026-01-12 13:08:56.120641704 +0000 UTC m=+135.164860393 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-system-service-ca") pod "oauth-openshift-558db77b4-8sbrm" (UID: "7db5f72b-6a3e-4a3d-96bd-3e10756b605c") : failed to sync configmap cache: timed out waiting for the condition Jan 12 13:08:55 crc kubenswrapper[4580]: E0112 13:08:55.620826 4580 secret.go:188] Couldn't get secret openshift-authentication/v4-0-config-user-idp-0-file-data: failed to sync secret cache: timed out waiting for the condition Jan 12 13:08:55 crc kubenswrapper[4580]: E0112 13:08:55.620858 4580 configmap.go:193] Couldn't get configMap openshift-console-operator/trusted-ca: failed to sync configmap cache: timed out waiting for the condition Jan 12 13:08:55 crc kubenswrapper[4580]: E0112 13:08:55.620877 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-user-idp-0-file-data podName:7db5f72b-6a3e-4a3d-96bd-3e10756b605c nodeName:}" failed. No retries permitted until 2026-01-12 13:08:56.120862176 +0000 UTC m=+135.165080866 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-user-idp-0-file-data" (UniqueName: "kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-user-idp-0-file-data") pod "oauth-openshift-558db77b4-8sbrm" (UID: "7db5f72b-6a3e-4a3d-96bd-3e10756b605c") : failed to sync secret cache: timed out waiting for the condition Jan 12 13:08:55 crc kubenswrapper[4580]: E0112 13:08:55.620894 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b75fc88-ca92-4fb9-826b-61322c929d1b-trusted-ca podName:0b75fc88-ca92-4fb9-826b-61322c929d1b nodeName:}" failed. No retries permitted until 2026-01-12 13:08:56.120886882 +0000 UTC m=+135.165105572 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/0b75fc88-ca92-4fb9-826b-61322c929d1b-trusted-ca") pod "console-operator-58897d9998-twpq4" (UID: "0b75fc88-ca92-4fb9-826b-61322c929d1b") : failed to sync configmap cache: timed out waiting for the condition Jan 12 13:08:55 crc kubenswrapper[4580]: E0112 13:08:55.620913 4580 secret.go:188] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 12 13:08:55 crc kubenswrapper[4580]: E0112 13:08:55.620924 4580 secret.go:188] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: failed to sync secret cache: timed out waiting for the condition Jan 12 13:08:55 crc kubenswrapper[4580]: E0112 13:08:55.620943 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-system-serving-cert podName:7db5f72b-6a3e-4a3d-96bd-3e10756b605c nodeName:}" failed. No retries permitted until 2026-01-12 13:08:56.120935964 +0000 UTC m=+135.165154654 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-system-serving-cert") pod "oauth-openshift-558db77b4-8sbrm" (UID: "7db5f72b-6a3e-4a3d-96bd-3e10756b605c") : failed to sync secret cache: timed out waiting for the condition Jan 12 13:08:55 crc kubenswrapper[4580]: E0112 13:08:55.620961 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-system-router-certs podName:7db5f72b-6a3e-4a3d-96bd-3e10756b605c nodeName:}" failed. No retries permitted until 2026-01-12 13:08:56.120953116 +0000 UTC m=+135.165171806 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-system-router-certs") pod "oauth-openshift-558db77b4-8sbrm" (UID: "7db5f72b-6a3e-4a3d-96bd-3e10756b605c") : failed to sync secret cache: timed out waiting for the condition Jan 12 13:08:55 crc kubenswrapper[4580]: E0112 13:08:55.622042 4580 secret.go:188] Couldn't get secret openshift-authentication/v4-0-config-system-session: failed to sync secret cache: timed out waiting for the condition Jan 12 13:08:55 crc kubenswrapper[4580]: E0112 13:08:55.622084 4580 configmap.go:193] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: failed to sync configmap cache: timed out waiting for the condition Jan 12 13:08:55 crc kubenswrapper[4580]: E0112 13:08:55.622120 4580 secret.go:188] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: failed to sync secret cache: timed out waiting for the condition Jan 12 13:08:55 crc kubenswrapper[4580]: E0112 13:08:55.622089 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-system-session podName:7db5f72b-6a3e-4a3d-96bd-3e10756b605c nodeName:}" failed. No retries permitted until 2026-01-12 13:08:56.122080467 +0000 UTC m=+135.166299147 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-system-session") pod "oauth-openshift-558db77b4-8sbrm" (UID: "7db5f72b-6a3e-4a3d-96bd-3e10756b605c") : failed to sync secret cache: timed out waiting for the condition Jan 12 13:08:55 crc kubenswrapper[4580]: E0112 13:08:55.622155 4580 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: failed to sync configmap cache: timed out waiting for the condition Jan 12 13:08:55 crc kubenswrapper[4580]: E0112 13:08:55.622174 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5f3179c7-0610-4e19-91cd-9a84d32ac850-proxy-tls podName:5f3179c7-0610-4e19-91cd-9a84d32ac850 nodeName:}" failed. No retries permitted until 2026-01-12 13:08:56.122161509 +0000 UTC m=+135.166380199 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/5f3179c7-0610-4e19-91cd-9a84d32ac850-proxy-tls") pod "machine-config-operator-74547568cd-xntjp" (UID: "5f3179c7-0610-4e19-91cd-9a84d32ac850") : failed to sync secret cache: timed out waiting for the condition Jan 12 13:08:55 crc kubenswrapper[4580]: E0112 13:08:55.622192 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/15e6f097-ed23-4797-9506-8c95af1dd7f9-config podName:15e6f097-ed23-4797-9506-8c95af1dd7f9 nodeName:}" failed. No retries permitted until 2026-01-12 13:08:56.122184522 +0000 UTC m=+135.166403213 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/15e6f097-ed23-4797-9506-8c95af1dd7f9-config") pod "kube-apiserver-operator-766d6c64bb-262n7" (UID: "15e6f097-ed23-4797-9506-8c95af1dd7f9") : failed to sync configmap cache: timed out waiting for the condition Jan 12 13:08:55 crc kubenswrapper[4580]: E0112 13:08:55.622210 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-system-cliconfig podName:7db5f72b-6a3e-4a3d-96bd-3e10756b605c nodeName:}" failed. No retries permitted until 2026-01-12 13:08:56.122202506 +0000 UTC m=+135.166421196 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-system-cliconfig") pod "oauth-openshift-558db77b4-8sbrm" (UID: "7db5f72b-6a3e-4a3d-96bd-3e10756b605c") : failed to sync configmap cache: timed out waiting for the condition Jan 12 13:08:55 crc kubenswrapper[4580]: I0112 13:08:55.638368 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 12 13:08:55 crc kubenswrapper[4580]: I0112 13:08:55.658726 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 12 13:08:55 crc kubenswrapper[4580]: I0112 13:08:55.678759 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 12 13:08:55 crc kubenswrapper[4580]: I0112 13:08:55.698349 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 12 13:08:55 crc kubenswrapper[4580]: I0112 13:08:55.718566 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 12 13:08:55 crc kubenswrapper[4580]: 
I0112 13:08:55.738676 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 12 13:08:55 crc kubenswrapper[4580]: I0112 13:08:55.758943 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 12 13:08:55 crc kubenswrapper[4580]: I0112 13:08:55.778618 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 12 13:08:55 crc kubenswrapper[4580]: I0112 13:08:55.798344 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 12 13:08:55 crc kubenswrapper[4580]: I0112 13:08:55.818407 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 12 13:08:55 crc kubenswrapper[4580]: I0112 13:08:55.838668 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 12 13:08:55 crc kubenswrapper[4580]: I0112 13:08:55.858283 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 12 13:08:55 crc kubenswrapper[4580]: I0112 13:08:55.878649 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 12 13:08:55 crc kubenswrapper[4580]: I0112 13:08:55.898872 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 12 13:08:55 crc kubenswrapper[4580]: I0112 13:08:55.919016 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 12 13:08:55 crc kubenswrapper[4580]: I0112 13:08:55.938356 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 12 13:08:55 crc 
kubenswrapper[4580]: I0112 13:08:55.978774 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 12 13:08:55 crc kubenswrapper[4580]: I0112 13:08:55.998592 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.018888 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.028437 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84lhw\" (UniqueName: \"kubernetes.io/projected/170f7f91-2fd3-49a9-a31e-5d5c8ae98cd3-kube-api-access-84lhw\") pod \"marketplace-operator-79b997595-hlckg\" (UID: \"170f7f91-2fd3-49a9-a31e-5d5c8ae98cd3\") " pod="openshift-marketplace/marketplace-operator-79b997595-hlckg" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.028482 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cd2ced26-b320-44a3-aa98-457376b3d8c8-bound-sa-token\") pod \"image-registry-697d97f7c8-hxkcl\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.028505 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hf7dd\" (UniqueName: \"kubernetes.io/projected/cd2ced26-b320-44a3-aa98-457376b3d8c8-kube-api-access-hf7dd\") pod \"image-registry-697d97f7c8-hxkcl\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.028529 4580 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cd2ced26-b320-44a3-aa98-457376b3d8c8-registry-certificates\") pod \"image-registry-697d97f7c8-hxkcl\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.028601 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxkcl\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.028652 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cd2ced26-b320-44a3-aa98-457376b3d8c8-installation-pull-secrets\") pod \"image-registry-697d97f7c8-hxkcl\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.028716 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cd2ced26-b320-44a3-aa98-457376b3d8c8-ca-trust-extracted\") pod \"image-registry-697d97f7c8-hxkcl\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.028738 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cd2ced26-b320-44a3-aa98-457376b3d8c8-trusted-ca\") pod \"image-registry-697d97f7c8-hxkcl\" (UID: 
\"cd2ced26-b320-44a3-aa98-457376b3d8c8\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.028768 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cd2ced26-b320-44a3-aa98-457376b3d8c8-registry-tls\") pod \"image-registry-697d97f7c8-hxkcl\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl" Jan 12 13:08:56 crc kubenswrapper[4580]: E0112 13:08:56.028887 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-12 13:08:56.52887688 +0000 UTC m=+135.573095570 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxkcl" (UID: "cd2ced26-b320-44a3-aa98-457376b3d8c8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.028913 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/170f7f91-2fd3-49a9-a31e-5d5c8ae98cd3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hlckg\" (UID: \"170f7f91-2fd3-49a9-a31e-5d5c8ae98cd3\") " pod="openshift-marketplace/marketplace-operator-79b997595-hlckg" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.028994 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/170f7f91-2fd3-49a9-a31e-5d5c8ae98cd3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hlckg\" (UID: \"170f7f91-2fd3-49a9-a31e-5d5c8ae98cd3\") " pod="openshift-marketplace/marketplace-operator-79b997595-hlckg" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.038339 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.058411 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.078807 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.099015 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.118185 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.130131 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.130224 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-audit-policies\") pod \"oauth-openshift-558db77b4-8sbrm\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.130248 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-8sbrm\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm" Jan 12 13:08:56 crc kubenswrapper[4580]: E0112 13:08:56.130256 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-12 13:08:56.630244244 +0000 UTC m=+135.674462935 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.130284 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15e6f097-ed23-4797-9506-8c95af1dd7f9-config\") pod \"kube-apiserver-operator-766d6c64bb-262n7\" (UID: \"15e6f097-ed23-4797-9506-8c95af1dd7f9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-262n7" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.130314 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: 
\"kubernetes.io/secret/df323b67-b600-430b-8712-a278bb06d806-node-bootstrap-token\") pod \"machine-config-server-rpc5j\" (UID: \"df323b67-b600-430b-8712-a278bb06d806\") " pod="openshift-machine-config-operator/machine-config-server-rpc5j" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.130336 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b70f0bc6-36e7-4a25-854b-4ca6364e6aa0-cert\") pod \"ingress-canary-z866m\" (UID: \"b70f0bc6-36e7-4a25-854b-4ca6364e6aa0\") " pod="openshift-ingress-canary/ingress-canary-z866m" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.130368 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c9595e5-9b32-4af8-b872-cf027b10a334-service-ca-bundle\") pod \"authentication-operator-69f744f599-pq2bq\" (UID: \"4c9595e5-9b32-4af8-b872-cf027b10a334\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pq2bq" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.130390 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84lhw\" (UniqueName: \"kubernetes.io/projected/170f7f91-2fd3-49a9-a31e-5d5c8ae98cd3-kube-api-access-84lhw\") pod \"marketplace-operator-79b997595-hlckg\" (UID: \"170f7f91-2fd3-49a9-a31e-5d5c8ae98cd3\") " pod="openshift-marketplace/marketplace-operator-79b997595-hlckg" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.130440 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a403ac95-5e4f-4234-9c0c-daf0b3831850-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-qjnxc\" (UID: \"a403ac95-5e4f-4234-9c0c-daf0b3831850\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qjnxc" Jan 12 13:08:56 crc 
kubenswrapper[4580]: I0112 13:08:56.130463 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/52498b90-7457-4a64-9993-4f58794eecc0-webhook-cert\") pod \"packageserver-d55dfcdfc-snhpg\" (UID: \"52498b90-7457-4a64-9993-4f58794eecc0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-snhpg" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.130482 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c9595e5-9b32-4af8-b872-cf027b10a334-config\") pod \"authentication-operator-69f744f599-pq2bq\" (UID: \"4c9595e5-9b32-4af8-b872-cf027b10a334\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pq2bq" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.130509 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzglc\" (UniqueName: \"kubernetes.io/projected/99294305-b2d1-431b-916d-46f9a599b523-kube-api-access-xzglc\") pod \"service-ca-operator-777779d784-9bm7b\" (UID: \"99294305-b2d1-431b-916d-46f9a599b523\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9bm7b" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.130556 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v47pj\" (UniqueName: \"kubernetes.io/projected/df323b67-b600-430b-8712-a278bb06d806-kube-api-access-v47pj\") pod \"machine-config-server-rpc5j\" (UID: \"df323b67-b600-430b-8712-a278bb06d806\") " pod="openshift-machine-config-operator/machine-config-server-rpc5j" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.130591 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/82a887ec-4d3a-4533-aa32-ee1eab68aa86-metrics-tls\") pod \"dns-operator-744455d44c-rs6cr\" (UID: \"82a887ec-4d3a-4533-aa32-ee1eab68aa86\") " pod="openshift-dns-operator/dns-operator-744455d44c-rs6cr" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.130637 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-8sbrm\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.130664 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-8sbrm\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.130758 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k29rc\" (UniqueName: \"kubernetes.io/projected/52498b90-7457-4a64-9993-4f58794eecc0-kube-api-access-k29rc\") pod \"packageserver-d55dfcdfc-snhpg\" (UID: \"52498b90-7457-4a64-9993-4f58794eecc0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-snhpg" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.130790 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f5410400-7426-4922-8f12-79e9bb359b58-profile-collector-cert\") pod \"olm-operator-6b444d44fb-jxm6c\" (UID: \"f5410400-7426-4922-8f12-79e9bb359b58\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jxm6c" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.130810 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h47ds\" (UniqueName: \"kubernetes.io/projected/7f8d60b6-387f-4e18-9332-60acade1e93c-kube-api-access-h47ds\") pod \"dns-default-c7ntw\" (UID: \"7f8d60b6-387f-4e18-9332-60acade1e93c\") " pod="openshift-dns/dns-default-c7ntw" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.130830 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/405ad898-9997-4efd-b8a8-f878c39784b5-plugins-dir\") pod \"csi-hostpathplugin-zdvz7\" (UID: \"405ad898-9997-4efd-b8a8-f878c39784b5\") " pod="hostpath-provisioner/csi-hostpathplugin-zdvz7" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.130853 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf8bd3ba-56eb-4d09-96e2-61a9308b8bde-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-qbtk4\" (UID: \"cf8bd3ba-56eb-4d09-96e2-61a9308b8bde\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qbtk4" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.130958 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxkcl\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.131022 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" 
(UniqueName: \"kubernetes.io/configmap/90150eba-9b4f-485f-97c3-89d410cb5851-service-ca\") pod \"console-f9d7485db-5tdwv\" (UID: \"90150eba-9b4f-485f-97c3-89d410cb5851\") " pod="openshift-console/console-f9d7485db-5tdwv" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.131048 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1d75f42a-a600-4c36-9da8-1f91f80336bc-trusted-ca\") pod \"ingress-operator-5b745b69d9-pv5tk\" (UID: \"1d75f42a-a600-4c36-9da8-1f91f80336bc\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pv5tk" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.131070 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-8sbrm\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.131095 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b75fc88-ca92-4fb9-826b-61322c929d1b-serving-cert\") pod \"console-operator-58897d9998-twpq4\" (UID: \"0b75fc88-ca92-4fb9-826b-61322c929d1b\") " pod="openshift-console-operator/console-operator-58897d9998-twpq4" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.131136 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cd2ced26-b320-44a3-aa98-457376b3d8c8-installation-pull-secrets\") pod \"image-registry-697d97f7c8-hxkcl\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.131154 4580 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/405ad898-9997-4efd-b8a8-f878c39784b5-csi-data-dir\") pod \"csi-hostpathplugin-zdvz7\" (UID: \"405ad898-9997-4efd-b8a8-f878c39784b5\") " pod="hostpath-provisioner/csi-hostpathplugin-zdvz7" Jan 12 13:08:56 crc kubenswrapper[4580]: E0112 13:08:56.131157 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-12 13:08:56.631150301 +0000 UTC m=+135.675368992 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxkcl" (UID: "cd2ced26-b320-44a3-aa98-457376b3d8c8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.131178 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf8bd3ba-56eb-4d09-96e2-61a9308b8bde-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-qbtk4\" (UID: \"cf8bd3ba-56eb-4d09-96e2-61a9308b8bde\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qbtk4" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.131195 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dd44m\" (UniqueName: \"kubernetes.io/projected/46cfce88-b8c3-48f9-a957-c6eb80c59166-kube-api-access-dd44m\") pod \"package-server-manager-789f6589d5-s8vg5\" (UID: 
\"46cfce88-b8c3-48f9-a957-c6eb80c59166\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s8vg5" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.131247 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0bdcb8e-d435-41c5-a140-1b17752fa7ec-service-ca-bundle\") pod \"router-default-5444994796-phs5z\" (UID: \"f0bdcb8e-d435-41c5-a140-1b17752fa7ec\") " pod="openshift-ingress/router-default-5444994796-phs5z" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.131278 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/df323b67-b600-430b-8712-a278bb06d806-certs\") pod \"machine-config-server-rpc5j\" (UID: \"df323b67-b600-430b-8712-a278bb06d806\") " pod="openshift-machine-config-operator/machine-config-server-rpc5j" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.131436 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/90150eba-9b4f-485f-97c3-89d410cb5851-oauth-serving-cert\") pod \"console-f9d7485db-5tdwv\" (UID: \"90150eba-9b4f-485f-97c3-89d410cb5851\") " pod="openshift-console/console-f9d7485db-5tdwv" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.131479 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf8bd3ba-56eb-4d09-96e2-61a9308b8bde-config\") pod \"kube-controller-manager-operator-78b949d7b-qbtk4\" (UID: \"cf8bd3ba-56eb-4d09-96e2-61a9308b8bde\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qbtk4" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.131502 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e7bdddf2-1c7b-4aa3-81f9-9df58a6e92b1-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-2zrh8\" (UID: \"e7bdddf2-1c7b-4aa3-81f9-9df58a6e92b1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-2zrh8" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.131530 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cd2ced26-b320-44a3-aa98-457376b3d8c8-ca-trust-extracted\") pod \"image-registry-697d97f7c8-hxkcl\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.131559 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a403ac95-5e4f-4234-9c0c-daf0b3831850-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-qjnxc\" (UID: \"a403ac95-5e4f-4234-9c0c-daf0b3831850\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qjnxc" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.131581 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/52498b90-7457-4a64-9993-4f58794eecc0-apiservice-cert\") pod \"packageserver-d55dfcdfc-snhpg\" (UID: \"52498b90-7457-4a64-9993-4f58794eecc0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-snhpg" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.131602 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cd2ced26-b320-44a3-aa98-457376b3d8c8-trusted-ca\") pod \"image-registry-697d97f7c8-hxkcl\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.131639 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a641a8b-c632-45bd-8606-e3fa10d531b8-config\") pod \"openshift-apiserver-operator-796bbdcf4f-5b57w\" (UID: \"9a641a8b-c632-45bd-8606-e3fa10d531b8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5b57w" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.131671 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25kzq\" (UniqueName: \"kubernetes.io/projected/9a641a8b-c632-45bd-8606-e3fa10d531b8-kube-api-access-25kzq\") pod \"openshift-apiserver-operator-796bbdcf4f-5b57w\" (UID: \"9a641a8b-c632-45bd-8606-e3fa10d531b8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5b57w" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.131698 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/73c37e67-6b89-4830-8723-f6716badcaa4-serving-cert\") pod \"controller-manager-879f6c89f-cbltx\" (UID: \"73c37e67-6b89-4830-8723-f6716badcaa4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cbltx" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.131726 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb-etcd-service-ca\") pod \"etcd-operator-b45778765-nzcxb\" (UID: \"fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nzcxb" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.131754 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb-config\") pod \"etcd-operator-b45778765-nzcxb\" (UID: \"fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nzcxb" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.131775 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/90150eba-9b4f-485f-97c3-89d410cb5851-console-oauth-config\") pod \"console-f9d7485db-5tdwv\" (UID: \"90150eba-9b4f-485f-97c3-89d410cb5851\") " pod="openshift-console/console-f9d7485db-5tdwv" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.131809 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh2tb\" (UniqueName: \"kubernetes.io/projected/2b783eb7-ca7b-41db-8342-bfdd6fdfb9b1-kube-api-access-mh2tb\") pod \"downloads-7954f5f757-2hzdj\" (UID: \"2b783eb7-ca7b-41db-8342-bfdd6fdfb9b1\") " pod="openshift-console/downloads-7954f5f757-2hzdj" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.131828 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzrqv\" (UniqueName: \"kubernetes.io/projected/a5c054e2-c14d-43cb-a432-ad8e9022b010-kube-api-access-nzrqv\") pod \"service-ca-9c57cc56f-zkcs6\" (UID: \"a5c054e2-c14d-43cb-a432-ad8e9022b010\") " pod="openshift-service-ca/service-ca-9c57cc56f-zkcs6" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.131858 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3417afd-f5ef-4c91-990f-22c8a77f2713-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-72pwr\" (UID: \"f3417afd-f5ef-4c91-990f-22c8a77f2713\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-72pwr" Jan 12 13:08:56 crc 
kubenswrapper[4580]: I0112 13:08:56.131879 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb-etcd-ca\") pod \"etcd-operator-b45778765-nzcxb\" (UID: \"fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nzcxb" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.131901 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cd2ced26-b320-44a3-aa98-457376b3d8c8-ca-trust-extracted\") pod \"image-registry-697d97f7c8-hxkcl\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.131902 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-8sbrm\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.131951 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/170f7f91-2fd3-49a9-a31e-5d5c8ae98cd3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hlckg\" (UID: \"170f7f91-2fd3-49a9-a31e-5d5c8ae98cd3\") " pod="openshift-marketplace/marketplace-operator-79b997595-hlckg" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.131969 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wcgm\" (UniqueName: \"kubernetes.io/projected/82a887ec-4d3a-4533-aa32-ee1eab68aa86-kube-api-access-4wcgm\") pod 
\"dns-operator-744455d44c-rs6cr\" (UID: \"82a887ec-4d3a-4533-aa32-ee1eab68aa86\") " pod="openshift-dns-operator/dns-operator-744455d44c-rs6cr" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.131984 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbg6s\" (UniqueName: \"kubernetes.io/projected/f5410400-7426-4922-8f12-79e9bb359b58-kube-api-access-cbg6s\") pod \"olm-operator-6b444d44fb-jxm6c\" (UID: \"f5410400-7426-4922-8f12-79e9bb359b58\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jxm6c" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.132000 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv4cl\" (UniqueName: \"kubernetes.io/projected/405ad898-9997-4efd-b8a8-f878c39784b5-kube-api-access-cv4cl\") pod \"csi-hostpathplugin-zdvz7\" (UID: \"405ad898-9997-4efd-b8a8-f878c39784b5\") " pod="hostpath-provisioner/csi-hostpathplugin-zdvz7" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.132021 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b75fc88-ca92-4fb9-826b-61322c929d1b-config\") pod \"console-operator-58897d9998-twpq4\" (UID: \"0b75fc88-ca92-4fb9-826b-61322c929d1b\") " pod="openshift-console-operator/console-operator-58897d9998-twpq4" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.132048 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-928j4\" (UniqueName: \"kubernetes.io/projected/f0bdcb8e-d435-41c5-a140-1b17752fa7ec-kube-api-access-928j4\") pod \"router-default-5444994796-phs5z\" (UID: \"f0bdcb8e-d435-41c5-a140-1b17752fa7ec\") " pod="openshift-ingress/router-default-5444994796-phs5z" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.132064 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/f0bdcb8e-d435-41c5-a140-1b17752fa7ec-stats-auth\") pod \"router-default-5444994796-phs5z\" (UID: \"f0bdcb8e-d435-41c5-a140-1b17752fa7ec\") " pod="openshift-ingress/router-default-5444994796-phs5z" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.132090 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-8sbrm\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.132141 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/a5c054e2-c14d-43cb-a432-ad8e9022b010-signing-key\") pod \"service-ca-9c57cc56f-zkcs6\" (UID: \"a5c054e2-c14d-43cb-a432-ad8e9022b010\") " pod="openshift-service-ca/service-ca-9c57cc56f-zkcs6" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.132164 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-8sbrm\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.132202 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5f3179c7-0610-4e19-91cd-9a84d32ac850-proxy-tls\") pod \"machine-config-operator-74547568cd-xntjp\" (UID: \"5f3179c7-0610-4e19-91cd-9a84d32ac850\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xntjp" Jan 12 13:08:56 crc 
kubenswrapper[4580]: I0112 13:08:56.132277 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/037a95c2-1119-4fd8-8499-682fba2f03ea-secret-volume\") pod \"collect-profiles-29470380-nk5n7\" (UID: \"037a95c2-1119-4fd8-8499-682fba2f03ea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29470380-nk5n7" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.132300 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/a5c054e2-c14d-43cb-a432-ad8e9022b010-signing-cabundle\") pod \"service-ca-9c57cc56f-zkcs6\" (UID: \"a5c054e2-c14d-43cb-a432-ad8e9022b010\") " pod="openshift-service-ca/service-ca-9c57cc56f-zkcs6" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.132326 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-8sbrm\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.132345 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a641a8b-c632-45bd-8606-e3fa10d531b8-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-5b57w\" (UID: \"9a641a8b-c632-45bd-8606-e3fa10d531b8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5b57w" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.132362 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-8sbrm\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.132380 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb-serving-cert\") pod \"etcd-operator-b45778765-nzcxb\" (UID: \"fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nzcxb" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.132424 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7f8d60b6-387f-4e18-9332-60acade1e93c-config-volume\") pod \"dns-default-c7ntw\" (UID: \"7f8d60b6-387f-4e18-9332-60acade1e93c\") " pod="openshift-dns/dns-default-c7ntw" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.132460 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvlsd\" (UniqueName: \"kubernetes.io/projected/e7bdddf2-1c7b-4aa3-81f9-9df58a6e92b1-kube-api-access-xvlsd\") pod \"multus-admission-controller-857f4d67dd-2zrh8\" (UID: \"e7bdddf2-1c7b-4aa3-81f9-9df58a6e92b1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-2zrh8" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.132498 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/52498b90-7457-4a64-9993-4f58794eecc0-tmpfs\") pod \"packageserver-d55dfcdfc-snhpg\" (UID: \"52498b90-7457-4a64-9993-4f58794eecc0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-snhpg" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.132528 4580 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5f3179c7-0610-4e19-91cd-9a84d32ac850-images\") pod \"machine-config-operator-74547568cd-xntjp\" (UID: \"5f3179c7-0610-4e19-91cd-9a84d32ac850\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xntjp" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.132642 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/dafdf187-36fd-4d32-b188-07a5cd4474a9-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-jzjkt\" (UID: \"dafdf187-36fd-4d32-b188-07a5cd4474a9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jzjkt" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.132675 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/405ad898-9997-4efd-b8a8-f878c39784b5-registration-dir\") pod \"csi-hostpathplugin-zdvz7\" (UID: \"405ad898-9997-4efd-b8a8-f878c39784b5\") " pod="hostpath-provisioner/csi-hostpathplugin-zdvz7" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.132691 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb-etcd-client\") pod \"etcd-operator-b45778765-nzcxb\" (UID: \"fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nzcxb" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.132707 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cd2ced26-b320-44a3-aa98-457376b3d8c8-bound-sa-token\") pod \"image-registry-697d97f7c8-hxkcl\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.132722 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hf7dd\" (UniqueName: \"kubernetes.io/projected/cd2ced26-b320-44a3-aa98-457376b3d8c8-kube-api-access-hf7dd\") pod \"image-registry-697d97f7c8-hxkcl\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.132742 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1d75f42a-a600-4c36-9da8-1f91f80336bc-bound-sa-token\") pod \"ingress-operator-5b745b69d9-pv5tk\" (UID: \"1d75f42a-a600-4c36-9da8-1f91f80336bc\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pv5tk" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.132845 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/405ad898-9997-4efd-b8a8-f878c39784b5-mountpoint-dir\") pod \"csi-hostpathplugin-zdvz7\" (UID: \"405ad898-9997-4efd-b8a8-f878c39784b5\") " pod="hostpath-provisioner/csi-hostpathplugin-zdvz7" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.132877 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cd2ced26-b320-44a3-aa98-457376b3d8c8-registry-certificates\") pod \"image-registry-697d97f7c8-hxkcl\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.132898 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/f3417afd-f5ef-4c91-990f-22c8a77f2713-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-72pwr\" (UID: \"f3417afd-f5ef-4c91-990f-22c8a77f2713\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-72pwr" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.132921 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-8sbrm\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.132948 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5glbm\" (UniqueName: \"kubernetes.io/projected/78d6fc59-606f-4a88-b7be-467d9c41160d-kube-api-access-5glbm\") pod \"migrator-59844c95c7-ch5j5\" (UID: \"78d6fc59-606f-4a88-b7be-467d9c41160d\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ch5j5" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.133015 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15e6f097-ed23-4797-9506-8c95af1dd7f9-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-262n7\" (UID: \"15e6f097-ed23-4797-9506-8c95af1dd7f9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-262n7" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.133059 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/f0bdcb8e-d435-41c5-a140-1b17752fa7ec-default-certificate\") pod \"router-default-5444994796-phs5z\" (UID: \"f0bdcb8e-d435-41c5-a140-1b17752fa7ec\") " 
pod="openshift-ingress/router-default-5444994796-phs5z" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.133084 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-8sbrm\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.133135 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73c37e67-6b89-4830-8723-f6716badcaa4-config\") pod \"controller-manager-879f6c89f-cbltx\" (UID: \"73c37e67-6b89-4830-8723-f6716badcaa4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cbltx" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.133203 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzmtz\" (UniqueName: \"kubernetes.io/projected/4c9595e5-9b32-4af8-b872-cf027b10a334-kube-api-access-kzmtz\") pod \"authentication-operator-69f744f599-pq2bq\" (UID: \"4c9595e5-9b32-4af8-b872-cf027b10a334\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pq2bq" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.133227 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/46cfce88-b8c3-48f9-a957-c6eb80c59166-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-s8vg5\" (UID: \"46cfce88-b8c3-48f9-a957-c6eb80c59166\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s8vg5" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.133287 4580 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24qsq\" (UniqueName: \"kubernetes.io/projected/037a95c2-1119-4fd8-8499-682fba2f03ea-kube-api-access-24qsq\") pod \"collect-profiles-29470380-nk5n7\" (UID: \"037a95c2-1119-4fd8-8499-682fba2f03ea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29470380-nk5n7" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.133315 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxxm9\" (UniqueName: \"kubernetes.io/projected/73c37e67-6b89-4830-8723-f6716badcaa4-kube-api-access-gxxm9\") pod \"controller-manager-879f6c89f-cbltx\" (UID: \"73c37e67-6b89-4830-8723-f6716badcaa4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cbltx" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.133375 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0b75fc88-ca92-4fb9-826b-61322c929d1b-trusted-ca\") pod \"console-operator-58897d9998-twpq4\" (UID: \"0b75fc88-ca92-4fb9-826b-61322c929d1b\") " pod="openshift-console-operator/console-operator-58897d9998-twpq4" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.133399 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99294305-b2d1-431b-916d-46f9a599b523-config\") pod \"service-ca-operator-777779d784-9bm7b\" (UID: \"99294305-b2d1-431b-916d-46f9a599b523\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9bm7b" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.133491 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/037a95c2-1119-4fd8-8499-682fba2f03ea-config-volume\") pod \"collect-profiles-29470380-nk5n7\" (UID: 
\"037a95c2-1119-4fd8-8499-682fba2f03ea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29470380-nk5n7" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.133519 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcvh6\" (UniqueName: \"kubernetes.io/projected/1d75f42a-a600-4c36-9da8-1f91f80336bc-kube-api-access-tcvh6\") pod \"ingress-operator-5b745b69d9-pv5tk\" (UID: \"1d75f42a-a600-4c36-9da8-1f91f80336bc\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pv5tk" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.133551 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cd2ced26-b320-44a3-aa98-457376b3d8c8-registry-tls\") pod \"image-registry-697d97f7c8-hxkcl\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.133619 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg9qs\" (UniqueName: \"kubernetes.io/projected/8fe2afba-2d6f-47d7-83c1-aa3fc9fa7c56-kube-api-access-tg9qs\") pod \"kube-storage-version-migrator-operator-b67b599dd-klg87\" (UID: \"8fe2afba-2d6f-47d7-83c1-aa3fc9fa7c56\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-klg87" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.133689 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfpnt\" (UniqueName: \"kubernetes.io/projected/b70f0bc6-36e7-4a25-854b-4ca6364e6aa0-kube-api-access-kfpnt\") pod \"ingress-canary-z866m\" (UID: \"b70f0bc6-36e7-4a25-854b-4ca6364e6aa0\") " pod="openshift-ingress-canary/ingress-canary-z866m" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.133730 4580 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cd2ced26-b320-44a3-aa98-457376b3d8c8-registry-certificates\") pod \"image-registry-697d97f7c8-hxkcl\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.133777 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fe2afba-2d6f-47d7-83c1-aa3fc9fa7c56-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-klg87\" (UID: \"8fe2afba-2d6f-47d7-83c1-aa3fc9fa7c56\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-klg87" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.133805 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-962rs\" (UniqueName: \"kubernetes.io/projected/a403ac95-5e4f-4234-9c0c-daf0b3831850-kube-api-access-962rs\") pod \"cluster-image-registry-operator-dc59b4c8b-qjnxc\" (UID: \"a403ac95-5e4f-4234-9c0c-daf0b3831850\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qjnxc" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.133825 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/73c37e67-6b89-4830-8723-f6716badcaa4-client-ca\") pod \"controller-manager-879f6c89f-cbltx\" (UID: \"73c37e67-6b89-4830-8723-f6716badcaa4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cbltx" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.133842 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbflp\" (UniqueName: 
\"kubernetes.io/projected/90150eba-9b4f-485f-97c3-89d410cb5851-kube-api-access-xbflp\") pod \"console-f9d7485db-5tdwv\" (UID: \"90150eba-9b4f-485f-97c3-89d410cb5851\") " pod="openshift-console/console-f9d7485db-5tdwv" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.133864 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/405ad898-9997-4efd-b8a8-f878c39784b5-socket-dir\") pod \"csi-hostpathplugin-zdvz7\" (UID: \"405ad898-9997-4efd-b8a8-f878c39784b5\") " pod="hostpath-provisioner/csi-hostpathplugin-zdvz7" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.133882 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3417afd-f5ef-4c91-990f-22c8a77f2713-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-72pwr\" (UID: \"f3417afd-f5ef-4c91-990f-22c8a77f2713\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-72pwr" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.133898 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/73c37e67-6b89-4830-8723-f6716badcaa4-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-cbltx\" (UID: \"73c37e67-6b89-4830-8723-f6716badcaa4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cbltx" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.133915 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/90150eba-9b4f-485f-97c3-89d410cb5851-console-config\") pod \"console-f9d7485db-5tdwv\" (UID: \"90150eba-9b4f-485f-97c3-89d410cb5851\") " pod="openshift-console/console-f9d7485db-5tdwv" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.133933 4580 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c9595e5-9b32-4af8-b872-cf027b10a334-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-pq2bq\" (UID: \"4c9595e5-9b32-4af8-b872-cf027b10a334\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pq2bq" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.133950 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dp9p\" (UniqueName: \"kubernetes.io/projected/dafdf187-36fd-4d32-b188-07a5cd4474a9-kube-api-access-7dp9p\") pod \"cluster-samples-operator-665b6dd947-jzjkt\" (UID: \"dafdf187-36fd-4d32-b188-07a5cd4474a9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jzjkt" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.133973 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f5410400-7426-4922-8f12-79e9bb359b58-srv-cert\") pod \"olm-operator-6b444d44fb-jxm6c\" (UID: \"f5410400-7426-4922-8f12-79e9bb359b58\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jxm6c" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.134024 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f0bdcb8e-d435-41c5-a140-1b17752fa7ec-metrics-certs\") pod \"router-default-5444994796-phs5z\" (UID: \"f0bdcb8e-d435-41c5-a140-1b17752fa7ec\") " pod="openshift-ingress/router-default-5444994796-phs5z" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.134049 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a403ac95-5e4f-4234-9c0c-daf0b3831850-bound-sa-token\") pod 
\"cluster-image-registry-operator-dc59b4c8b-qjnxc\" (UID: \"a403ac95-5e4f-4234-9c0c-daf0b3831850\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qjnxc" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.134080 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1d75f42a-a600-4c36-9da8-1f91f80336bc-metrics-tls\") pod \"ingress-operator-5b745b69d9-pv5tk\" (UID: \"1d75f42a-a600-4c36-9da8-1f91f80336bc\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pv5tk" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.134123 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c9595e5-9b32-4af8-b872-cf027b10a334-serving-cert\") pod \"authentication-operator-69f744f599-pq2bq\" (UID: \"4c9595e5-9b32-4af8-b872-cf027b10a334\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pq2bq" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.134171 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90150eba-9b4f-485f-97c3-89d410cb5851-trusted-ca-bundle\") pod \"console-f9d7485db-5tdwv\" (UID: \"90150eba-9b4f-485f-97c3-89d410cb5851\") " pod="openshift-console/console-f9d7485db-5tdwv" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.134267 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7f8d60b6-387f-4e18-9332-60acade1e93c-metrics-tls\") pod \"dns-default-c7ntw\" (UID: \"7f8d60b6-387f-4e18-9332-60acade1e93c\") " pod="openshift-dns/dns-default-c7ntw" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.134327 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/90150eba-9b4f-485f-97c3-89d410cb5851-console-serving-cert\") pod \"console-f9d7485db-5tdwv\" (UID: \"90150eba-9b4f-485f-97c3-89d410cb5851\") " pod="openshift-console/console-f9d7485db-5tdwv" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.134352 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99294305-b2d1-431b-916d-46f9a599b523-serving-cert\") pod \"service-ca-operator-777779d784-9bm7b\" (UID: \"99294305-b2d1-431b-916d-46f9a599b523\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9bm7b" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.134390 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/170f7f91-2fd3-49a9-a31e-5d5c8ae98cd3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hlckg\" (UID: \"170f7f91-2fd3-49a9-a31e-5d5c8ae98cd3\") " pod="openshift-marketplace/marketplace-operator-79b997595-hlckg" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.134414 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8fe2afba-2d6f-47d7-83c1-aa3fc9fa7c56-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-klg87\" (UID: \"8fe2afba-2d6f-47d7-83c1-aa3fc9fa7c56\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-klg87" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.139208 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.158458 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" 
Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.178758 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.198113 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.218669 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.235308 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 12 13:08:56 crc kubenswrapper[4580]: E0112 13:08:56.235372 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-12 13:08:56.735359577 +0000 UTC m=+135.779578267 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.235486 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90150eba-9b4f-485f-97c3-89d410cb5851-trusted-ca-bundle\") pod \"console-f9d7485db-5tdwv\" (UID: \"90150eba-9b4f-485f-97c3-89d410cb5851\") " pod="openshift-console/console-f9d7485db-5tdwv" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.235513 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7f8d60b6-387f-4e18-9332-60acade1e93c-metrics-tls\") pod \"dns-default-c7ntw\" (UID: \"7f8d60b6-387f-4e18-9332-60acade1e93c\") " pod="openshift-dns/dns-default-c7ntw" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.235556 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/90150eba-9b4f-485f-97c3-89d410cb5851-console-serving-cert\") pod \"console-f9d7485db-5tdwv\" (UID: \"90150eba-9b4f-485f-97c3-89d410cb5851\") " pod="openshift-console/console-f9d7485db-5tdwv" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.235577 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99294305-b2d1-431b-916d-46f9a599b523-serving-cert\") pod \"service-ca-operator-777779d784-9bm7b\" (UID: \"99294305-b2d1-431b-916d-46f9a599b523\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-9bm7b" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.235605 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8fe2afba-2d6f-47d7-83c1-aa3fc9fa7c56-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-klg87\" (UID: \"8fe2afba-2d6f-47d7-83c1-aa3fc9fa7c56\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-klg87" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.235653 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/df323b67-b600-430b-8712-a278bb06d806-node-bootstrap-token\") pod \"machine-config-server-rpc5j\" (UID: \"df323b67-b600-430b-8712-a278bb06d806\") " pod="openshift-machine-config-operator/machine-config-server-rpc5j" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.235669 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b70f0bc6-36e7-4a25-854b-4ca6364e6aa0-cert\") pod \"ingress-canary-z866m\" (UID: \"b70f0bc6-36e7-4a25-854b-4ca6364e6aa0\") " pod="openshift-ingress-canary/ingress-canary-z866m" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.235697 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c9595e5-9b32-4af8-b872-cf027b10a334-service-ca-bundle\") pod \"authentication-operator-69f744f599-pq2bq\" (UID: \"4c9595e5-9b32-4af8-b872-cf027b10a334\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pq2bq" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.235721 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/a403ac95-5e4f-4234-9c0c-daf0b3831850-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-qjnxc\" (UID: \"a403ac95-5e4f-4234-9c0c-daf0b3831850\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qjnxc" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.235735 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/52498b90-7457-4a64-9993-4f58794eecc0-webhook-cert\") pod \"packageserver-d55dfcdfc-snhpg\" (UID: \"52498b90-7457-4a64-9993-4f58794eecc0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-snhpg" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.235751 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c9595e5-9b32-4af8-b872-cf027b10a334-config\") pod \"authentication-operator-69f744f599-pq2bq\" (UID: \"4c9595e5-9b32-4af8-b872-cf027b10a334\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pq2bq" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.235775 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzglc\" (UniqueName: \"kubernetes.io/projected/99294305-b2d1-431b-916d-46f9a599b523-kube-api-access-xzglc\") pod \"service-ca-operator-777779d784-9bm7b\" (UID: \"99294305-b2d1-431b-916d-46f9a599b523\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9bm7b" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.235790 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v47pj\" (UniqueName: \"kubernetes.io/projected/df323b67-b600-430b-8712-a278bb06d806-kube-api-access-v47pj\") pod \"machine-config-server-rpc5j\" (UID: \"df323b67-b600-430b-8712-a278bb06d806\") " pod="openshift-machine-config-operator/machine-config-server-rpc5j" Jan 12 13:08:56 crc kubenswrapper[4580]: 
I0112 13:08:56.235808 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/82a887ec-4d3a-4533-aa32-ee1eab68aa86-metrics-tls\") pod \"dns-operator-744455d44c-rs6cr\" (UID: \"82a887ec-4d3a-4533-aa32-ee1eab68aa86\") " pod="openshift-dns-operator/dns-operator-744455d44c-rs6cr" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.235840 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k29rc\" (UniqueName: \"kubernetes.io/projected/52498b90-7457-4a64-9993-4f58794eecc0-kube-api-access-k29rc\") pod \"packageserver-d55dfcdfc-snhpg\" (UID: \"52498b90-7457-4a64-9993-4f58794eecc0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-snhpg" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.235857 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f5410400-7426-4922-8f12-79e9bb359b58-profile-collector-cert\") pod \"olm-operator-6b444d44fb-jxm6c\" (UID: \"f5410400-7426-4922-8f12-79e9bb359b58\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jxm6c" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.235873 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h47ds\" (UniqueName: \"kubernetes.io/projected/7f8d60b6-387f-4e18-9332-60acade1e93c-kube-api-access-h47ds\") pod \"dns-default-c7ntw\" (UID: \"7f8d60b6-387f-4e18-9332-60acade1e93c\") " pod="openshift-dns/dns-default-c7ntw" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.235890 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/405ad898-9997-4efd-b8a8-f878c39784b5-plugins-dir\") pod \"csi-hostpathplugin-zdvz7\" (UID: \"405ad898-9997-4efd-b8a8-f878c39784b5\") " pod="hostpath-provisioner/csi-hostpathplugin-zdvz7" Jan 12 
13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.235917 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf8bd3ba-56eb-4d09-96e2-61a9308b8bde-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-qbtk4\" (UID: \"cf8bd3ba-56eb-4d09-96e2-61a9308b8bde\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qbtk4" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.235950 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxkcl\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.235969 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/90150eba-9b4f-485f-97c3-89d410cb5851-service-ca\") pod \"console-f9d7485db-5tdwv\" (UID: \"90150eba-9b4f-485f-97c3-89d410cb5851\") " pod="openshift-console/console-f9d7485db-5tdwv" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.236007 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1d75f42a-a600-4c36-9da8-1f91f80336bc-trusted-ca\") pod \"ingress-operator-5b745b69d9-pv5tk\" (UID: \"1d75f42a-a600-4c36-9da8-1f91f80336bc\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pv5tk" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.236043 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/405ad898-9997-4efd-b8a8-f878c39784b5-csi-data-dir\") pod \"csi-hostpathplugin-zdvz7\" (UID: 
\"405ad898-9997-4efd-b8a8-f878c39784b5\") " pod="hostpath-provisioner/csi-hostpathplugin-zdvz7" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.236069 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf8bd3ba-56eb-4d09-96e2-61a9308b8bde-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-qbtk4\" (UID: \"cf8bd3ba-56eb-4d09-96e2-61a9308b8bde\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qbtk4" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.236097 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dd44m\" (UniqueName: \"kubernetes.io/projected/46cfce88-b8c3-48f9-a957-c6eb80c59166-kube-api-access-dd44m\") pod \"package-server-manager-789f6589d5-s8vg5\" (UID: \"46cfce88-b8c3-48f9-a957-c6eb80c59166\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s8vg5" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.236147 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0bdcb8e-d435-41c5-a140-1b17752fa7ec-service-ca-bundle\") pod \"router-default-5444994796-phs5z\" (UID: \"f0bdcb8e-d435-41c5-a140-1b17752fa7ec\") " pod="openshift-ingress/router-default-5444994796-phs5z" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.236164 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/df323b67-b600-430b-8712-a278bb06d806-certs\") pod \"machine-config-server-rpc5j\" (UID: \"df323b67-b600-430b-8712-a278bb06d806\") " pod="openshift-machine-config-operator/machine-config-server-rpc5j" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.236193 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/90150eba-9b4f-485f-97c3-89d410cb5851-oauth-serving-cert\") pod \"console-f9d7485db-5tdwv\" (UID: \"90150eba-9b4f-485f-97c3-89d410cb5851\") " pod="openshift-console/console-f9d7485db-5tdwv" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.236213 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e7bdddf2-1c7b-4aa3-81f9-9df58a6e92b1-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-2zrh8\" (UID: \"e7bdddf2-1c7b-4aa3-81f9-9df58a6e92b1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-2zrh8" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.236234 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf8bd3ba-56eb-4d09-96e2-61a9308b8bde-config\") pod \"kube-controller-manager-operator-78b949d7b-qbtk4\" (UID: \"cf8bd3ba-56eb-4d09-96e2-61a9308b8bde\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qbtk4" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.236255 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a403ac95-5e4f-4234-9c0c-daf0b3831850-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-qjnxc\" (UID: \"a403ac95-5e4f-4234-9c0c-daf0b3831850\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qjnxc" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.236274 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/52498b90-7457-4a64-9993-4f58794eecc0-apiservice-cert\") pod \"packageserver-d55dfcdfc-snhpg\" (UID: \"52498b90-7457-4a64-9993-4f58794eecc0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-snhpg" Jan 12 13:08:56 crc 
kubenswrapper[4580]: I0112 13:08:56.236282 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/405ad898-9997-4efd-b8a8-f878c39784b5-plugins-dir\") pod \"csi-hostpathplugin-zdvz7\" (UID: \"405ad898-9997-4efd-b8a8-f878c39784b5\") " pod="hostpath-provisioner/csi-hostpathplugin-zdvz7" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.236458 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c9595e5-9b32-4af8-b872-cf027b10a334-service-ca-bundle\") pod \"authentication-operator-69f744f599-pq2bq\" (UID: \"4c9595e5-9b32-4af8-b872-cf027b10a334\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pq2bq" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.237043 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90150eba-9b4f-485f-97c3-89d410cb5851-trusted-ca-bundle\") pod \"console-f9d7485db-5tdwv\" (UID: \"90150eba-9b4f-485f-97c3-89d410cb5851\") " pod="openshift-console/console-f9d7485db-5tdwv" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.237221 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf8bd3ba-56eb-4d09-96e2-61a9308b8bde-config\") pod \"kube-controller-manager-operator-78b949d7b-qbtk4\" (UID: \"cf8bd3ba-56eb-4d09-96e2-61a9308b8bde\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qbtk4" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.237230 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/90150eba-9b4f-485f-97c3-89d410cb5851-oauth-serving-cert\") pod \"console-f9d7485db-5tdwv\" (UID: \"90150eba-9b4f-485f-97c3-89d410cb5851\") " 
pod="openshift-console/console-f9d7485db-5tdwv" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.237223 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0bdcb8e-d435-41c5-a140-1b17752fa7ec-service-ca-bundle\") pod \"router-default-5444994796-phs5z\" (UID: \"f0bdcb8e-d435-41c5-a140-1b17752fa7ec\") " pod="openshift-ingress/router-default-5444994796-phs5z" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.237272 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c9595e5-9b32-4af8-b872-cf027b10a334-config\") pod \"authentication-operator-69f744f599-pq2bq\" (UID: \"4c9595e5-9b32-4af8-b872-cf027b10a334\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pq2bq" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.237331 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/405ad898-9997-4efd-b8a8-f878c39784b5-csi-data-dir\") pod \"csi-hostpathplugin-zdvz7\" (UID: \"405ad898-9997-4efd-b8a8-f878c39784b5\") " pod="hostpath-provisioner/csi-hostpathplugin-zdvz7" Jan 12 13:08:56 crc kubenswrapper[4580]: E0112 13:08:56.237394 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-12 13:08:56.737386032 +0000 UTC m=+135.781604722 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxkcl" (UID: "cd2ced26-b320-44a3-aa98-457376b3d8c8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.237967 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/90150eba-9b4f-485f-97c3-89d410cb5851-service-ca\") pod \"console-f9d7485db-5tdwv\" (UID: \"90150eba-9b4f-485f-97c3-89d410cb5851\") " pod="openshift-console/console-f9d7485db-5tdwv" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.236309 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25kzq\" (UniqueName: \"kubernetes.io/projected/9a641a8b-c632-45bd-8606-e3fa10d531b8-kube-api-access-25kzq\") pod \"openshift-apiserver-operator-796bbdcf4f-5b57w\" (UID: \"9a641a8b-c632-45bd-8606-e3fa10d531b8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5b57w" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.238124 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/73c37e67-6b89-4830-8723-f6716badcaa4-serving-cert\") pod \"controller-manager-879f6c89f-cbltx\" (UID: \"73c37e67-6b89-4830-8723-f6716badcaa4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cbltx" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.238150 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a641a8b-c632-45bd-8606-e3fa10d531b8-config\") pod \"openshift-apiserver-operator-796bbdcf4f-5b57w\" (UID: 
\"9a641a8b-c632-45bd-8606-e3fa10d531b8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5b57w" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.238179 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/90150eba-9b4f-485f-97c3-89d410cb5851-console-oauth-config\") pod \"console-f9d7485db-5tdwv\" (UID: \"90150eba-9b4f-485f-97c3-89d410cb5851\") " pod="openshift-console/console-f9d7485db-5tdwv" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.238219 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mh2tb\" (UniqueName: \"kubernetes.io/projected/2b783eb7-ca7b-41db-8342-bfdd6fdfb9b1-kube-api-access-mh2tb\") pod \"downloads-7954f5f757-2hzdj\" (UID: \"2b783eb7-ca7b-41db-8342-bfdd6fdfb9b1\") " pod="openshift-console/downloads-7954f5f757-2hzdj" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.238238 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzrqv\" (UniqueName: \"kubernetes.io/projected/a5c054e2-c14d-43cb-a432-ad8e9022b010-kube-api-access-nzrqv\") pod \"service-ca-9c57cc56f-zkcs6\" (UID: \"a5c054e2-c14d-43cb-a432-ad8e9022b010\") " pod="openshift-service-ca/service-ca-9c57cc56f-zkcs6" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.238274 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3417afd-f5ef-4c91-990f-22c8a77f2713-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-72pwr\" (UID: \"f3417afd-f5ef-4c91-990f-22c8a77f2713\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-72pwr" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.238306 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbg6s\" (UniqueName: 
\"kubernetes.io/projected/f5410400-7426-4922-8f12-79e9bb359b58-kube-api-access-cbg6s\") pod \"olm-operator-6b444d44fb-jxm6c\" (UID: \"f5410400-7426-4922-8f12-79e9bb359b58\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jxm6c" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.238324 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cv4cl\" (UniqueName: \"kubernetes.io/projected/405ad898-9997-4efd-b8a8-f878c39784b5-kube-api-access-cv4cl\") pod \"csi-hostpathplugin-zdvz7\" (UID: \"405ad898-9997-4efd-b8a8-f878c39784b5\") " pod="hostpath-provisioner/csi-hostpathplugin-zdvz7" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.238342 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wcgm\" (UniqueName: \"kubernetes.io/projected/82a887ec-4d3a-4533-aa32-ee1eab68aa86-kube-api-access-4wcgm\") pod \"dns-operator-744455d44c-rs6cr\" (UID: \"82a887ec-4d3a-4533-aa32-ee1eab68aa86\") " pod="openshift-dns-operator/dns-operator-744455d44c-rs6cr" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.238364 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-928j4\" (UniqueName: \"kubernetes.io/projected/f0bdcb8e-d435-41c5-a140-1b17752fa7ec-kube-api-access-928j4\") pod \"router-default-5444994796-phs5z\" (UID: \"f0bdcb8e-d435-41c5-a140-1b17752fa7ec\") " pod="openshift-ingress/router-default-5444994796-phs5z" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.238383 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/f0bdcb8e-d435-41c5-a140-1b17752fa7ec-stats-auth\") pod \"router-default-5444994796-phs5z\" (UID: \"f0bdcb8e-d435-41c5-a140-1b17752fa7ec\") " pod="openshift-ingress/router-default-5444994796-phs5z" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.238414 4580 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/a5c054e2-c14d-43cb-a432-ad8e9022b010-signing-key\") pod \"service-ca-9c57cc56f-zkcs6\" (UID: \"a5c054e2-c14d-43cb-a432-ad8e9022b010\") " pod="openshift-service-ca/service-ca-9c57cc56f-zkcs6" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.238452 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/037a95c2-1119-4fd8-8499-682fba2f03ea-secret-volume\") pod \"collect-profiles-29470380-nk5n7\" (UID: \"037a95c2-1119-4fd8-8499-682fba2f03ea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29470380-nk5n7" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.238468 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/a5c054e2-c14d-43cb-a432-ad8e9022b010-signing-cabundle\") pod \"service-ca-9c57cc56f-zkcs6\" (UID: \"a5c054e2-c14d-43cb-a432-ad8e9022b010\") " pod="openshift-service-ca/service-ca-9c57cc56f-zkcs6" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.238486 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a641a8b-c632-45bd-8606-e3fa10d531b8-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-5b57w\" (UID: \"9a641a8b-c632-45bd-8606-e3fa10d531b8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5b57w" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.238517 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7f8d60b6-387f-4e18-9332-60acade1e93c-config-volume\") pod \"dns-default-c7ntw\" (UID: \"7f8d60b6-387f-4e18-9332-60acade1e93c\") " pod="openshift-dns/dns-default-c7ntw" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.238536 4580 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvlsd\" (UniqueName: \"kubernetes.io/projected/e7bdddf2-1c7b-4aa3-81f9-9df58a6e92b1-kube-api-access-xvlsd\") pod \"multus-admission-controller-857f4d67dd-2zrh8\" (UID: \"e7bdddf2-1c7b-4aa3-81f9-9df58a6e92b1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-2zrh8" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.238580 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/52498b90-7457-4a64-9993-4f58794eecc0-tmpfs\") pod \"packageserver-d55dfcdfc-snhpg\" (UID: \"52498b90-7457-4a64-9993-4f58794eecc0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-snhpg" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.238603 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/dafdf187-36fd-4d32-b188-07a5cd4474a9-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-jzjkt\" (UID: \"dafdf187-36fd-4d32-b188-07a5cd4474a9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jzjkt" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.238620 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/405ad898-9997-4efd-b8a8-f878c39784b5-registration-dir\") pod \"csi-hostpathplugin-zdvz7\" (UID: \"405ad898-9997-4efd-b8a8-f878c39784b5\") " pod="hostpath-provisioner/csi-hostpathplugin-zdvz7" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.238662 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1d75f42a-a600-4c36-9da8-1f91f80336bc-bound-sa-token\") pod \"ingress-operator-5b745b69d9-pv5tk\" (UID: \"1d75f42a-a600-4c36-9da8-1f91f80336bc\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pv5tk" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.238680 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/405ad898-9997-4efd-b8a8-f878c39784b5-mountpoint-dir\") pod \"csi-hostpathplugin-zdvz7\" (UID: \"405ad898-9997-4efd-b8a8-f878c39784b5\") " pod="hostpath-provisioner/csi-hostpathplugin-zdvz7" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.238700 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f3417afd-f5ef-4c91-990f-22c8a77f2713-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-72pwr\" (UID: \"f3417afd-f5ef-4c91-990f-22c8a77f2713\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-72pwr" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.238724 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5glbm\" (UniqueName: \"kubernetes.io/projected/78d6fc59-606f-4a88-b7be-467d9c41160d-kube-api-access-5glbm\") pod \"migrator-59844c95c7-ch5j5\" (UID: \"78d6fc59-606f-4a88-b7be-467d9c41160d\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ch5j5" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.238753 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/f0bdcb8e-d435-41c5-a140-1b17752fa7ec-default-certificate\") pod \"router-default-5444994796-phs5z\" (UID: \"f0bdcb8e-d435-41c5-a140-1b17752fa7ec\") " pod="openshift-ingress/router-default-5444994796-phs5z" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.238780 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73c37e67-6b89-4830-8723-f6716badcaa4-config\") pod 
\"controller-manager-879f6c89f-cbltx\" (UID: \"73c37e67-6b89-4830-8723-f6716badcaa4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cbltx" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.238801 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzmtz\" (UniqueName: \"kubernetes.io/projected/4c9595e5-9b32-4af8-b872-cf027b10a334-kube-api-access-kzmtz\") pod \"authentication-operator-69f744f599-pq2bq\" (UID: \"4c9595e5-9b32-4af8-b872-cf027b10a334\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pq2bq" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.238821 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/46cfce88-b8c3-48f9-a957-c6eb80c59166-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-s8vg5\" (UID: \"46cfce88-b8c3-48f9-a957-c6eb80c59166\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s8vg5" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.238841 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24qsq\" (UniqueName: \"kubernetes.io/projected/037a95c2-1119-4fd8-8499-682fba2f03ea-kube-api-access-24qsq\") pod \"collect-profiles-29470380-nk5n7\" (UID: \"037a95c2-1119-4fd8-8499-682fba2f03ea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29470380-nk5n7" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.238862 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxxm9\" (UniqueName: \"kubernetes.io/projected/73c37e67-6b89-4830-8723-f6716badcaa4-kube-api-access-gxxm9\") pod \"controller-manager-879f6c89f-cbltx\" (UID: \"73c37e67-6b89-4830-8723-f6716badcaa4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cbltx" Jan 12 13:08:56 crc kubenswrapper[4580]: 
I0112 13:08:56.238883 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99294305-b2d1-431b-916d-46f9a599b523-config\") pod \"service-ca-operator-777779d784-9bm7b\" (UID: \"99294305-b2d1-431b-916d-46f9a599b523\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9bm7b" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.238902 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/037a95c2-1119-4fd8-8499-682fba2f03ea-config-volume\") pod \"collect-profiles-29470380-nk5n7\" (UID: \"037a95c2-1119-4fd8-8499-682fba2f03ea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29470380-nk5n7" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.238920 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcvh6\" (UniqueName: \"kubernetes.io/projected/1d75f42a-a600-4c36-9da8-1f91f80336bc-kube-api-access-tcvh6\") pod \"ingress-operator-5b745b69d9-pv5tk\" (UID: \"1d75f42a-a600-4c36-9da8-1f91f80336bc\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pv5tk" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.238945 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tg9qs\" (UniqueName: \"kubernetes.io/projected/8fe2afba-2d6f-47d7-83c1-aa3fc9fa7c56-kube-api-access-tg9qs\") pod \"kube-storage-version-migrator-operator-b67b599dd-klg87\" (UID: \"8fe2afba-2d6f-47d7-83c1-aa3fc9fa7c56\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-klg87" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.238967 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfpnt\" (UniqueName: \"kubernetes.io/projected/b70f0bc6-36e7-4a25-854b-4ca6364e6aa0-kube-api-access-kfpnt\") pod 
\"ingress-canary-z866m\" (UID: \"b70f0bc6-36e7-4a25-854b-4ca6364e6aa0\") " pod="openshift-ingress-canary/ingress-canary-z866m" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.238995 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fe2afba-2d6f-47d7-83c1-aa3fc9fa7c56-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-klg87\" (UID: \"8fe2afba-2d6f-47d7-83c1-aa3fc9fa7c56\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-klg87" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.239024 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/73c37e67-6b89-4830-8723-f6716badcaa4-client-ca\") pod \"controller-manager-879f6c89f-cbltx\" (UID: \"73c37e67-6b89-4830-8723-f6716badcaa4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cbltx" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.239043 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-962rs\" (UniqueName: \"kubernetes.io/projected/a403ac95-5e4f-4234-9c0c-daf0b3831850-kube-api-access-962rs\") pod \"cluster-image-registry-operator-dc59b4c8b-qjnxc\" (UID: \"a403ac95-5e4f-4234-9c0c-daf0b3831850\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qjnxc" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.239060 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbflp\" (UniqueName: \"kubernetes.io/projected/90150eba-9b4f-485f-97c3-89d410cb5851-kube-api-access-xbflp\") pod \"console-f9d7485db-5tdwv\" (UID: \"90150eba-9b4f-485f-97c3-89d410cb5851\") " pod="openshift-console/console-f9d7485db-5tdwv" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.239079 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/405ad898-9997-4efd-b8a8-f878c39784b5-socket-dir\") pod \"csi-hostpathplugin-zdvz7\" (UID: \"405ad898-9997-4efd-b8a8-f878c39784b5\") " pod="hostpath-provisioner/csi-hostpathplugin-zdvz7" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.239094 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3417afd-f5ef-4c91-990f-22c8a77f2713-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-72pwr\" (UID: \"f3417afd-f5ef-4c91-990f-22c8a77f2713\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-72pwr" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.239158 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/73c37e67-6b89-4830-8723-f6716badcaa4-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-cbltx\" (UID: \"73c37e67-6b89-4830-8723-f6716badcaa4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cbltx" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.239177 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/90150eba-9b4f-485f-97c3-89d410cb5851-console-config\") pod \"console-f9d7485db-5tdwv\" (UID: \"90150eba-9b4f-485f-97c3-89d410cb5851\") " pod="openshift-console/console-f9d7485db-5tdwv" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.239198 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c9595e5-9b32-4af8-b872-cf027b10a334-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-pq2bq\" (UID: \"4c9595e5-9b32-4af8-b872-cf027b10a334\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pq2bq" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 
13:08:56.239220 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dp9p\" (UniqueName: \"kubernetes.io/projected/dafdf187-36fd-4d32-b188-07a5cd4474a9-kube-api-access-7dp9p\") pod \"cluster-samples-operator-665b6dd947-jzjkt\" (UID: \"dafdf187-36fd-4d32-b188-07a5cd4474a9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jzjkt" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.239238 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f5410400-7426-4922-8f12-79e9bb359b58-srv-cert\") pod \"olm-operator-6b444d44fb-jxm6c\" (UID: \"f5410400-7426-4922-8f12-79e9bb359b58\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jxm6c" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.239241 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/90150eba-9b4f-485f-97c3-89d410cb5851-console-serving-cert\") pod \"console-f9d7485db-5tdwv\" (UID: \"90150eba-9b4f-485f-97c3-89d410cb5851\") " pod="openshift-console/console-f9d7485db-5tdwv" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.239274 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f0bdcb8e-d435-41c5-a140-1b17752fa7ec-metrics-certs\") pod \"router-default-5444994796-phs5z\" (UID: \"f0bdcb8e-d435-41c5-a140-1b17752fa7ec\") " pod="openshift-ingress/router-default-5444994796-phs5z" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.239294 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a403ac95-5e4f-4234-9c0c-daf0b3831850-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-qjnxc\" (UID: \"a403ac95-5e4f-4234-9c0c-daf0b3831850\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qjnxc" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.239313 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1d75f42a-a600-4c36-9da8-1f91f80336bc-metrics-tls\") pod \"ingress-operator-5b745b69d9-pv5tk\" (UID: \"1d75f42a-a600-4c36-9da8-1f91f80336bc\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pv5tk" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.239332 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c9595e5-9b32-4af8-b872-cf027b10a334-serving-cert\") pod \"authentication-operator-69f744f599-pq2bq\" (UID: \"4c9595e5-9b32-4af8-b872-cf027b10a334\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pq2bq" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.239477 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/405ad898-9997-4efd-b8a8-f878c39784b5-socket-dir\") pod \"csi-hostpathplugin-zdvz7\" (UID: \"405ad898-9997-4efd-b8a8-f878c39784b5\") " pod="hostpath-provisioner/csi-hostpathplugin-zdvz7" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.239987 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3417afd-f5ef-4c91-990f-22c8a77f2713-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-72pwr\" (UID: \"f3417afd-f5ef-4c91-990f-22c8a77f2713\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-72pwr" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.239094 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/52498b90-7457-4a64-9993-4f58794eecc0-webhook-cert\") pod 
\"packageserver-d55dfcdfc-snhpg\" (UID: \"52498b90-7457-4a64-9993-4f58794eecc0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-snhpg" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.240271 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73c37e67-6b89-4830-8723-f6716badcaa4-config\") pod \"controller-manager-879f6c89f-cbltx\" (UID: \"73c37e67-6b89-4830-8723-f6716badcaa4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cbltx" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.240510 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a403ac95-5e4f-4234-9c0c-daf0b3831850-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-qjnxc\" (UID: \"a403ac95-5e4f-4234-9c0c-daf0b3831850\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qjnxc" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.241372 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99294305-b2d1-431b-916d-46f9a599b523-config\") pod \"service-ca-operator-777779d784-9bm7b\" (UID: \"99294305-b2d1-431b-916d-46f9a599b523\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9bm7b" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.240580 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/82a887ec-4d3a-4533-aa32-ee1eab68aa86-metrics-tls\") pod \"dns-operator-744455d44c-rs6cr\" (UID: \"82a887ec-4d3a-4533-aa32-ee1eab68aa86\") " pod="openshift-dns-operator/dns-operator-744455d44c-rs6cr" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.240724 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 12 13:08:56 
crc kubenswrapper[4580]: I0112 13:08:56.241468 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/52498b90-7457-4a64-9993-4f58794eecc0-apiservice-cert\") pod \"packageserver-d55dfcdfc-snhpg\" (UID: \"52498b90-7457-4a64-9993-4f58794eecc0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-snhpg" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.240993 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99294305-b2d1-431b-916d-46f9a599b523-serving-cert\") pod \"service-ca-operator-777779d784-9bm7b\" (UID: \"99294305-b2d1-431b-916d-46f9a599b523\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9bm7b" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.241168 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/73c37e67-6b89-4830-8723-f6716badcaa4-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-cbltx\" (UID: \"73c37e67-6b89-4830-8723-f6716badcaa4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cbltx" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.241507 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/52498b90-7457-4a64-9993-4f58794eecc0-tmpfs\") pod \"packageserver-d55dfcdfc-snhpg\" (UID: \"52498b90-7457-4a64-9993-4f58794eecc0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-snhpg" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.240988 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/a5c054e2-c14d-43cb-a432-ad8e9022b010-signing-cabundle\") pod \"service-ca-9c57cc56f-zkcs6\" (UID: \"a5c054e2-c14d-43cb-a432-ad8e9022b010\") " 
pod="openshift-service-ca/service-ca-9c57cc56f-zkcs6" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.240512 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f5410400-7426-4922-8f12-79e9bb359b58-profile-collector-cert\") pod \"olm-operator-6b444d44fb-jxm6c\" (UID: \"f5410400-7426-4922-8f12-79e9bb359b58\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jxm6c" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.241625 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/037a95c2-1119-4fd8-8499-682fba2f03ea-config-volume\") pod \"collect-profiles-29470380-nk5n7\" (UID: \"037a95c2-1119-4fd8-8499-682fba2f03ea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29470380-nk5n7" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.241918 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/90150eba-9b4f-485f-97c3-89d410cb5851-console-config\") pod \"console-f9d7485db-5tdwv\" (UID: \"90150eba-9b4f-485f-97c3-89d410cb5851\") " pod="openshift-console/console-f9d7485db-5tdwv" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.242474 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c9595e5-9b32-4af8-b872-cf027b10a334-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-pq2bq\" (UID: \"4c9595e5-9b32-4af8-b872-cf027b10a334\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pq2bq" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.242613 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf8bd3ba-56eb-4d09-96e2-61a9308b8bde-serving-cert\") pod 
\"kube-controller-manager-operator-78b949d7b-qbtk4\" (UID: \"cf8bd3ba-56eb-4d09-96e2-61a9308b8bde\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qbtk4" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.242597 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e7bdddf2-1c7b-4aa3-81f9-9df58a6e92b1-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-2zrh8\" (UID: \"e7bdddf2-1c7b-4aa3-81f9-9df58a6e92b1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-2zrh8" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.242781 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/405ad898-9997-4efd-b8a8-f878c39784b5-registration-dir\") pod \"csi-hostpathplugin-zdvz7\" (UID: \"405ad898-9997-4efd-b8a8-f878c39784b5\") " pod="hostpath-provisioner/csi-hostpathplugin-zdvz7" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.243033 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/73c37e67-6b89-4830-8723-f6716badcaa4-client-ca\") pod \"controller-manager-879f6c89f-cbltx\" (UID: \"73c37e67-6b89-4830-8723-f6716badcaa4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cbltx" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.243091 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8fe2afba-2d6f-47d7-83c1-aa3fc9fa7c56-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-klg87\" (UID: \"8fe2afba-2d6f-47d7-83c1-aa3fc9fa7c56\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-klg87" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.238752 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1d75f42a-a600-4c36-9da8-1f91f80336bc-trusted-ca\") pod \"ingress-operator-5b745b69d9-pv5tk\" (UID: \"1d75f42a-a600-4c36-9da8-1f91f80336bc\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pv5tk" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.243414 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fe2afba-2d6f-47d7-83c1-aa3fc9fa7c56-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-klg87\" (UID: \"8fe2afba-2d6f-47d7-83c1-aa3fc9fa7c56\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-klg87" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.243436 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/037a95c2-1119-4fd8-8499-682fba2f03ea-secret-volume\") pod \"collect-profiles-29470380-nk5n7\" (UID: \"037a95c2-1119-4fd8-8499-682fba2f03ea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29470380-nk5n7" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.243929 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a641a8b-c632-45bd-8606-e3fa10d531b8-config\") pod \"openshift-apiserver-operator-796bbdcf4f-5b57w\" (UID: \"9a641a8b-c632-45bd-8606-e3fa10d531b8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5b57w" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.244554 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/dafdf187-36fd-4d32-b188-07a5cd4474a9-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-jzjkt\" (UID: \"dafdf187-36fd-4d32-b188-07a5cd4474a9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jzjkt" Jan 
12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.244605 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1d75f42a-a600-4c36-9da8-1f91f80336bc-metrics-tls\") pod \"ingress-operator-5b745b69d9-pv5tk\" (UID: \"1d75f42a-a600-4c36-9da8-1f91f80336bc\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pv5tk" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.244816 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/405ad898-9997-4efd-b8a8-f878c39784b5-mountpoint-dir\") pod \"csi-hostpathplugin-zdvz7\" (UID: \"405ad898-9997-4efd-b8a8-f878c39784b5\") " pod="hostpath-provisioner/csi-hostpathplugin-zdvz7" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.244959 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/46cfce88-b8c3-48f9-a957-c6eb80c59166-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-s8vg5\" (UID: \"46cfce88-b8c3-48f9-a957-c6eb80c59166\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s8vg5" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.245086 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c9595e5-9b32-4af8-b872-cf027b10a334-serving-cert\") pod \"authentication-operator-69f744f599-pq2bq\" (UID: \"4c9595e5-9b32-4af8-b872-cf027b10a334\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pq2bq" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.245094 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3417afd-f5ef-4c91-990f-22c8a77f2713-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-72pwr\" (UID: 
\"f3417afd-f5ef-4c91-990f-22c8a77f2713\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-72pwr" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.245418 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a641a8b-c632-45bd-8606-e3fa10d531b8-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-5b57w\" (UID: \"9a641a8b-c632-45bd-8606-e3fa10d531b8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5b57w" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.245829 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/a5c054e2-c14d-43cb-a432-ad8e9022b010-signing-key\") pod \"service-ca-9c57cc56f-zkcs6\" (UID: \"a5c054e2-c14d-43cb-a432-ad8e9022b010\") " pod="openshift-service-ca/service-ca-9c57cc56f-zkcs6" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.246189 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f5410400-7426-4922-8f12-79e9bb359b58-srv-cert\") pod \"olm-operator-6b444d44fb-jxm6c\" (UID: \"f5410400-7426-4922-8f12-79e9bb359b58\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jxm6c" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.246642 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/f0bdcb8e-d435-41c5-a140-1b17752fa7ec-stats-auth\") pod \"router-default-5444994796-phs5z\" (UID: \"f0bdcb8e-d435-41c5-a140-1b17752fa7ec\") " pod="openshift-ingress/router-default-5444994796-phs5z" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.246761 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/f0bdcb8e-d435-41c5-a140-1b17752fa7ec-default-certificate\") pod 
\"router-default-5444994796-phs5z\" (UID: \"f0bdcb8e-d435-41c5-a140-1b17752fa7ec\") " pod="openshift-ingress/router-default-5444994796-phs5z" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.246834 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f0bdcb8e-d435-41c5-a140-1b17752fa7ec-metrics-certs\") pod \"router-default-5444994796-phs5z\" (UID: \"f0bdcb8e-d435-41c5-a140-1b17752fa7ec\") " pod="openshift-ingress/router-default-5444994796-phs5z" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.247211 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/73c37e67-6b89-4830-8723-f6716badcaa4-serving-cert\") pod \"controller-manager-879f6c89f-cbltx\" (UID: \"73c37e67-6b89-4830-8723-f6716badcaa4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cbltx" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.247935 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/90150eba-9b4f-485f-97c3-89d410cb5851-console-oauth-config\") pod \"console-f9d7485db-5tdwv\" (UID: \"90150eba-9b4f-485f-97c3-89d410cb5851\") " pod="openshift-console/console-f9d7485db-5tdwv" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.258552 4580 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.278647 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.298516 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.318157 4580 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.338759 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.339658 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 12 13:08:56 crc kubenswrapper[4580]: E0112 13:08:56.339777 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-12 13:08:56.839762106 +0000 UTC m=+135.883980796 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.339962 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxkcl\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl" Jan 12 13:08:56 crc kubenswrapper[4580]: E0112 13:08:56.340175 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-12 13:08:56.840167735 +0000 UTC m=+135.884386426 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxkcl" (UID: "cd2ced26-b320-44a3-aa98-457376b3d8c8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.349433 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b70f0bc6-36e7-4a25-854b-4ca6364e6aa0-cert\") pod \"ingress-canary-z866m\" (UID: \"b70f0bc6-36e7-4a25-854b-4ca6364e6aa0\") " pod="openshift-ingress-canary/ingress-canary-z866m" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.358830 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.380904 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.391434 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7f8d60b6-387f-4e18-9332-60acade1e93c-metrics-tls\") pod \"dns-default-c7ntw\" (UID: \"7f8d60b6-387f-4e18-9332-60acade1e93c\") " pod="openshift-dns/dns-default-c7ntw" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.399046 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.418956 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.422621 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/7f8d60b6-387f-4e18-9332-60acade1e93c-config-volume\") pod \"dns-default-c7ntw\" (UID: \"7f8d60b6-387f-4e18-9332-60acade1e93c\") " pod="openshift-dns/dns-default-c7ntw" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.441721 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 12 13:08:56 crc kubenswrapper[4580]: E0112 13:08:56.441901 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-12 13:08:56.941884254 +0000 UTC m=+135.986102944 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.442510 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxkcl\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl" Jan 12 13:08:56 crc kubenswrapper[4580]: E0112 13:08:56.442917 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-12 13:08:56.942903812 +0000 UTC m=+135.987122502 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxkcl" (UID: "cd2ced26-b320-44a3-aa98-457376b3d8c8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.470761 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66fss\" (UniqueName: \"kubernetes.io/projected/fef35e25-bd51-4dd9-8d15-7ce38326982b-kube-api-access-66fss\") pod \"openshift-controller-manager-operator-756b6f6bc6-lr76c\" (UID: \"fef35e25-bd51-4dd9-8d15-7ce38326982b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lr76c" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.509606 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ws24m\" (UniqueName: \"kubernetes.io/projected/12d94033-10bf-43ea-a4de-297df750ad35-kube-api-access-ws24m\") pod \"catalog-operator-68c6474976-mn56v\" (UID: \"12d94033-10bf-43ea-a4de-297df750ad35\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mn56v" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.517899 4580 request.go:700] Waited for 1.899974454s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa/token Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.530684 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2zrk\" (UniqueName: \"kubernetes.io/projected/993fd772-2adc-4e57-8ccd-7bcc86928a21-kube-api-access-g2zrk\") pod \"apiserver-76f77b778f-mw8xc\" (UID: 
\"993fd772-2adc-4e57-8ccd-7bcc86928a21\") " pod="openshift-apiserver/apiserver-76f77b778f-mw8xc" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.536915 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-mw8xc" Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.543250 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 12 13:08:56 crc kubenswrapper[4580]: E0112 13:08:56.543325 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-12 13:08:57.043310648 +0000 UTC m=+136.087529338 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.543616 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxkcl\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl"
Jan 12 13:08:56 crc kubenswrapper[4580]: E0112 13:08:56.543898 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-12 13:08:57.043889853 +0000 UTC m=+136.088108543 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxkcl" (UID: "cd2ced26-b320-44a3-aa98-457376b3d8c8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.633885 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mplg8\" (UniqueName: \"kubernetes.io/projected/92864954-4c11-4fae-a089-c8fc35ae755e-kube-api-access-mplg8\") pod \"machine-approver-56656f9798-gz9sn\" (UID: \"92864954-4c11-4fae-a089-c8fc35ae755e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gz9sn"
Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.644598 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 12 13:08:56 crc kubenswrapper[4580]: E0112 13:08:56.644695 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-12 13:08:57.144672783 +0000 UTC m=+136.188891463 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.645064 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxkcl\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl"
Jan 12 13:08:56 crc kubenswrapper[4580]: E0112 13:08:56.645400 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-12 13:08:57.145390016 +0000 UTC m=+136.189608707 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxkcl" (UID: "cd2ced26-b320-44a3-aa98-457376b3d8c8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.652745 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8pvn\" (UniqueName: \"kubernetes.io/projected/0e19129d-499f-4d25-ad32-fd3dddb533f2-kube-api-access-r8pvn\") pod \"machine-config-controller-84d6567774-lfcct\" (UID: \"0e19129d-499f-4d25-ad32-fd3dddb533f2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lfcct"
Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.659661 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lfcct"
Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.670312 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6h42q\" (UniqueName: \"kubernetes.io/projected/3cd49599-ac6f-4d9f-9d86-2f6ff90ddbf9-kube-api-access-6h42q\") pod \"control-plane-machine-set-operator-78cbb6b69f-jbnkd\" (UID: \"3cd49599-ac6f-4d9f-9d86-2f6ff90ddbf9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jbnkd"
Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.672741 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mn56v"
Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.678428 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jbnkd"
Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.687369 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lr76c"
Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.695632 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-mw8xc"]
Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.697524 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2h9d4\" (UniqueName: \"kubernetes.io/projected/277886cf-d2d4-42e5-b2dc-253fd32648f8-kube-api-access-2h9d4\") pod \"apiserver-7bbb656c7d-pk8kj\" (UID: \"277886cf-d2d4-42e5-b2dc-253fd32648f8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pk8kj"
Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.712860 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cp8nw\" (UniqueName: \"kubernetes.io/projected/e3a22133-fac4-42ba-9967-974e82a855aa-kube-api-access-cp8nw\") pod \"route-controller-manager-6576b87f9c-z6r47\" (UID: \"e3a22133-fac4-42ba-9967-974e82a855aa\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z6r47"
Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.732753 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45f8p\" (UniqueName: \"kubernetes.io/projected/bdbff407-68ae-456c-b67e-40d0e47fba7b-kube-api-access-45f8p\") pod \"machine-api-operator-5694c8668f-89mg9\" (UID: \"bdbff407-68ae-456c-b67e-40d0e47fba7b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-89mg9"
Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.748617 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 12 13:08:56 crc kubenswrapper[4580]: E0112 13:08:56.749461 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-12 13:08:57.249445725 +0000 UTC m=+136.293664416 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.750526 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhb62\" (UniqueName: \"kubernetes.io/projected/5f3179c7-0610-4e19-91cd-9a84d32ac850-kube-api-access-xhb62\") pod \"machine-config-operator-74547568cd-xntjp\" (UID: \"5f3179c7-0610-4e19-91cd-9a84d32ac850\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xntjp"
Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.760043 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.779544 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.789746 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/df323b67-b600-430b-8712-a278bb06d806-certs\") pod \"machine-config-server-rpc5j\" (UID: \"df323b67-b600-430b-8712-a278bb06d806\") " pod="openshift-machine-config-operator/machine-config-server-rpc5j"
Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.799711 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.807284 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-lfcct"]
Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.809040 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/df323b67-b600-430b-8712-a278bb06d806-node-bootstrap-token\") pod \"machine-config-server-rpc5j\" (UID: \"df323b67-b600-430b-8712-a278bb06d806\") " pod="openshift-machine-config-operator/machine-config-server-rpc5j"
Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.823630 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.827513 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a403ac95-5e4f-4234-9c0c-daf0b3831850-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-qjnxc\" (UID: \"a403ac95-5e4f-4234-9c0c-daf0b3831850\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qjnxc"
Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.833193 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cd2ced26-b320-44a3-aa98-457376b3d8c8-trusted-ca\") pod \"image-registry-697d97f7c8-hxkcl\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl"
Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.839531 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.841569 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gz9sn"
Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.845917 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb-serving-cert\") pod \"etcd-operator-b45778765-nzcxb\" (UID: \"fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nzcxb"
Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.848424 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pk8kj"
Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.851427 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxkcl\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl"
Jan 12 13:08:56 crc kubenswrapper[4580]: E0112 13:08:56.851933 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-12 13:08:57.351911847 +0000 UTC m=+136.396130537 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxkcl" (UID: "cd2ced26-b320-44a3-aa98-457376b3d8c8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.856066 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-89mg9"
Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.860491 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-mw8xc" event={"ID":"993fd772-2adc-4e57-8ccd-7bcc86928a21","Type":"ContainerStarted","Data":"bce1551fc5debedc5f0ed613b4d804430ee95d24f9d3e8ef629c1dd5a1b67e6e"}
Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.860705 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.863171 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lfcct" event={"ID":"0e19129d-499f-4d25-ad32-fd3dddb533f2","Type":"ContainerStarted","Data":"5ff06213e0f92a606d36d2f921198c3ee7493596f7b111156a69969ed5a0c383"}
Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.863501 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5f3179c7-0610-4e19-91cd-9a84d32ac850-images\") pod \"machine-config-operator-74547568cd-xntjp\" (UID: \"5f3179c7-0610-4e19-91cd-9a84d32ac850\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xntjp"
Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.883753 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.899787 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.905612 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-8sbrm\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm"
Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.907890 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mn56v"]
Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.919240 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.926526 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-8sbrm\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm"
Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.938929 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.945839 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-8sbrm\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm"
Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.952600 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 12 13:08:56 crc kubenswrapper[4580]: E0112 13:08:56.953190 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-12 13:08:57.453168694 +0000 UTC m=+136.497387385 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.953282 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxkcl\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl"
Jan 12 13:08:56 crc kubenswrapper[4580]: E0112 13:08:56.953677 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-12 13:08:57.453660455 +0000 UTC m=+136.497879146 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxkcl" (UID: "cd2ced26-b320-44a3-aa98-457376b3d8c8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.962214 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.966848 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z6r47"
Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.979014 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 12 13:08:56 crc kubenswrapper[4580]: I0112 13:08:56.999063 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.019985 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.021761 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-pk8kj"]
Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.023436 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb-config\") pod \"etcd-operator-b45778765-nzcxb\" (UID: \"fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nzcxb"
Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.048365 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.053374 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-8sbrm\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm"
Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.054728 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 12 13:08:57 crc kubenswrapper[4580]: E0112 13:08:57.055551 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-12 13:08:57.555523998 +0000 UTC m=+136.599742689 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.059177 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jbnkd"]
Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.060069 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lr76c"]
Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.061912 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.068318 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-89mg9"]
Jan 12 13:08:57 crc kubenswrapper[4580]: W0112 13:08:57.069539 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfef35e25_bd51_4dd9_8d15_7ce38326982b.slice/crio-df29be8b0cf210f1f5aeb2ccb4d07bfdfd951bd5ab51e7a7a2376b5bb1a0216b WatchSource:0}: Error finding container df29be8b0cf210f1f5aeb2ccb4d07bfdfd951bd5ab51e7a7a2376b5bb1a0216b: Status 404 returned error can't find the container with id df29be8b0cf210f1f5aeb2ccb4d07bfdfd951bd5ab51e7a7a2376b5bb1a0216b
Jan 12 13:08:57 crc kubenswrapper[4580]: W0112 13:08:57.071178 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3cd49599_ac6f_4d9f_9d86_2f6ff90ddbf9.slice/crio-f50ea97225aa3fd210ca1e5ef4e9fcc6530fb6816b048baff4cf1dd601a3156f WatchSource:0}: Error finding container f50ea97225aa3fd210ca1e5ef4e9fcc6530fb6816b048baff4cf1dd601a3156f: Status 404 returned error can't find the container with id f50ea97225aa3fd210ca1e5ef4e9fcc6530fb6816b048baff4cf1dd601a3156f
Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.081721 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.083547 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cd2ced26-b320-44a3-aa98-457376b3d8c8-installation-pull-secrets\") pod \"image-registry-697d97f7c8-hxkcl\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl"
Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.098674 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.102470 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-8sbrm\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm"
Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.118313 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.123749 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb-etcd-ca\") pod \"etcd-operator-b45778765-nzcxb\" (UID: \"fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nzcxb"
Jan 12 13:08:57 crc kubenswrapper[4580]: E0112 13:08:57.130810 4580 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: failed to sync configmap cache: timed out waiting for the condition
Jan 12 13:08:57 crc kubenswrapper[4580]: E0112 13:08:57.130867 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-system-service-ca podName:7db5f72b-6a3e-4a3d-96bd-3e10756b605c nodeName:}" failed. No retries permitted until 2026-01-12 13:08:58.130851483 +0000 UTC m=+137.175070173 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-system-service-ca") pod "oauth-openshift-558db77b4-8sbrm" (UID: "7db5f72b-6a3e-4a3d-96bd-3e10756b605c") : failed to sync configmap cache: timed out waiting for the condition
Jan 12 13:08:57 crc kubenswrapper[4580]: E0112 13:08:57.130944 4580 secret.go:188] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: failed to sync secret cache: timed out waiting for the condition
Jan 12 13:08:57 crc kubenswrapper[4580]: E0112 13:08:57.130951 4580 configmap.go:193] Couldn't get configMap openshift-authentication/audit: failed to sync configmap cache: timed out waiting for the condition
Jan 12 13:08:57 crc kubenswrapper[4580]: E0112 13:08:57.131001 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-user-template-provider-selection podName:7db5f72b-6a3e-4a3d-96bd-3e10756b605c nodeName:}" failed. No retries permitted until 2026-01-12 13:08:58.130984011 +0000 UTC m=+137.175202702 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-user-template-provider-selection") pod "oauth-openshift-558db77b4-8sbrm" (UID: "7db5f72b-6a3e-4a3d-96bd-3e10756b605c") : failed to sync secret cache: timed out waiting for the condition
Jan 12 13:08:57 crc kubenswrapper[4580]: E0112 13:08:57.131027 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-audit-policies podName:7db5f72b-6a3e-4a3d-96bd-3e10756b605c nodeName:}" failed. No retries permitted until 2026-01-12 13:08:58.131011172 +0000 UTC m=+137.175229862 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-audit-policies") pod "oauth-openshift-558db77b4-8sbrm" (UID: "7db5f72b-6a3e-4a3d-96bd-3e10756b605c") : failed to sync configmap cache: timed out waiting for the condition
Jan 12 13:08:57 crc kubenswrapper[4580]: E0112 13:08:57.131056 4580 configmap.go:193] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: failed to sync configmap cache: timed out waiting for the condition
Jan 12 13:08:57 crc kubenswrapper[4580]: E0112 13:08:57.131094 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/15e6f097-ed23-4797-9506-8c95af1dd7f9-config podName:15e6f097-ed23-4797-9506-8c95af1dd7f9 nodeName:}" failed. No retries permitted until 2026-01-12 13:08:58.131087284 +0000 UTC m=+137.175305974 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/15e6f097-ed23-4797-9506-8c95af1dd7f9-config") pod "kube-apiserver-operator-766d6c64bb-262n7" (UID: "15e6f097-ed23-4797-9506-8c95af1dd7f9") : failed to sync configmap cache: timed out waiting for the condition
Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.131355 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-z6r47"]
Jan 12 13:08:57 crc kubenswrapper[4580]: E0112 13:08:57.131412 4580 secret.go:188] Couldn't get secret openshift-console-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition
Jan 12 13:08:57 crc kubenswrapper[4580]: E0112 13:08:57.131454 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b75fc88-ca92-4fb9-826b-61322c929d1b-serving-cert podName:0b75fc88-ca92-4fb9-826b-61322c929d1b nodeName:}" failed. No retries permitted until 2026-01-12 13:08:58.131445165 +0000 UTC m=+137.175663845 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b75fc88-ca92-4fb9-826b-61322c929d1b-serving-cert") pod "console-operator-58897d9998-twpq4" (UID: "0b75fc88-ca92-4fb9-826b-61322c929d1b") : failed to sync secret cache: timed out waiting for the condition
Jan 12 13:08:57 crc kubenswrapper[4580]: E0112 13:08:57.131868 4580 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Jan 12 13:08:57 crc kubenswrapper[4580]: E0112 13:08:57.131925 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb-etcd-service-ca podName:fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb nodeName:}" failed. No retries permitted until 2026-01-12 13:08:58.131910385 +0000 UTC m=+137.176129065 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb-etcd-service-ca") pod "etcd-operator-b45778765-nzcxb" (UID: "fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb") : failed to sync configmap cache: timed out waiting for the condition
Jan 12 13:08:57 crc kubenswrapper[4580]: E0112 13:08:57.132447 4580 secret.go:188] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: failed to sync secret cache: timed out waiting for the condition
Jan 12 13:08:57 crc kubenswrapper[4580]: E0112 13:08:57.132470 4580 configmap.go:193] Couldn't get configMap openshift-console-operator/console-operator-config: failed to sync configmap cache: timed out waiting for the condition
Jan 12 13:08:57 crc kubenswrapper[4580]: E0112 13:08:57.132482 4580 secret.go:188] Couldn't get secret openshift-authentication/v4-0-config-system-session: failed to sync secret cache: timed out waiting for the condition
Jan 12 13:08:57 crc kubenswrapper[4580]: E0112 13:08:57.132509 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b75fc88-ca92-4fb9-826b-61322c929d1b-config podName:0b75fc88-ca92-4fb9-826b-61322c929d1b nodeName:}" failed. No retries permitted until 2026-01-12 13:08:58.132498467 +0000 UTC m=+137.176717157 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0b75fc88-ca92-4fb9-826b-61322c929d1b-config") pod "console-operator-58897d9998-twpq4" (UID: "0b75fc88-ca92-4fb9-826b-61322c929d1b") : failed to sync configmap cache: timed out waiting for the condition
Jan 12 13:08:57 crc kubenswrapper[4580]: E0112 13:08:57.132525 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-system-session podName:7db5f72b-6a3e-4a3d-96bd-3e10756b605c nodeName:}" failed. No retries permitted until 2026-01-12 13:08:58.132516651 +0000 UTC m=+137.176735341 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-system-session") pod "oauth-openshift-558db77b4-8sbrm" (UID: "7db5f72b-6a3e-4a3d-96bd-3e10756b605c") : failed to sync secret cache: timed out waiting for the condition
Jan 12 13:08:57 crc kubenswrapper[4580]: E0112 13:08:57.132530 4580 secret.go:188] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: failed to sync secret cache: timed out waiting for the condition
Jan 12 13:08:57 crc kubenswrapper[4580]: E0112 13:08:57.132552 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5f3179c7-0610-4e19-91cd-9a84d32ac850-proxy-tls podName:5f3179c7-0610-4e19-91cd-9a84d32ac850 nodeName:}" failed. No retries permitted until 2026-01-12 13:08:58.132544964 +0000 UTC m=+137.176763654 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/5f3179c7-0610-4e19-91cd-9a84d32ac850-proxy-tls") pod "machine-config-operator-74547568cd-xntjp" (UID: "5f3179c7-0610-4e19-91cd-9a84d32ac850") : failed to sync secret cache: timed out waiting for the condition
Jan 12 13:08:57 crc kubenswrapper[4580]: E0112 13:08:57.132555 4580 secret.go:188] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: failed to sync secret cache: timed out waiting for the condition
Jan 12 13:08:57 crc kubenswrapper[4580]: E0112 13:08:57.132568 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-system-ocp-branding-template podName:7db5f72b-6a3e-4a3d-96bd-3e10756b605c nodeName:}" failed. No retries permitted until 2026-01-12 13:08:58.132561075 +0000 UTC m=+137.176779754 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-558db77b4-8sbrm" (UID: "7db5f72b-6a3e-4a3d-96bd-3e10756b605c") : failed to sync secret cache: timed out waiting for the condition
Jan 12 13:08:57 crc kubenswrapper[4580]: E0112 13:08:57.132613 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/170f7f91-2fd3-49a9-a31e-5d5c8ae98cd3-marketplace-operator-metrics podName:170f7f91-2fd3-49a9-a31e-5d5c8ae98cd3 nodeName:}" failed. No retries permitted until 2026-01-12 13:08:57.632598104 +0000 UTC m=+136.676816793 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/170f7f91-2fd3-49a9-a31e-5d5c8ae98cd3-marketplace-operator-metrics") pod "marketplace-operator-79b997595-hlckg" (UID: "170f7f91-2fd3-49a9-a31e-5d5c8ae98cd3") : failed to sync secret cache: timed out waiting for the condition
Jan 12 13:08:57 crc kubenswrapper[4580]: E0112 13:08:57.132525 4580 secret.go:188] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: failed to sync secret cache: timed out waiting for the condition
Jan 12 13:08:57 crc kubenswrapper[4580]: E0112 13:08:57.132648 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-user-template-login podName:7db5f72b-6a3e-4a3d-96bd-3e10756b605c nodeName:}" failed. No retries permitted until 2026-01-12 13:08:58.132642125 +0000 UTC m=+137.176860815 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-user-template-login") pod "oauth-openshift-558db77b4-8sbrm" (UID: "7db5f72b-6a3e-4a3d-96bd-3e10756b605c") : failed to sync secret cache: timed out waiting for the condition Jan 12 13:08:57 crc kubenswrapper[4580]: E0112 13:08:57.132827 4580 secret.go:188] Couldn't get secret openshift-etcd-operator/etcd-client: failed to sync secret cache: timed out waiting for the condition Jan 12 13:08:57 crc kubenswrapper[4580]: E0112 13:08:57.132868 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb-etcd-client podName:fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb nodeName:}" failed. No retries permitted until 2026-01-12 13:08:58.132859062 +0000 UTC m=+137.177077752 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb-etcd-client") pod "etcd-operator-b45778765-nzcxb" (UID: "fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb") : failed to sync secret cache: timed out waiting for the condition Jan 12 13:08:57 crc kubenswrapper[4580]: E0112 13:08:57.134148 4580 configmap.go:193] Couldn't get configMap openshift-console-operator/trusted-ca: failed to sync configmap cache: timed out waiting for the condition Jan 12 13:08:57 crc kubenswrapper[4580]: E0112 13:08:57.134193 4580 secret.go:188] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: failed to sync secret cache: timed out waiting for the condition Jan 12 13:08:57 crc kubenswrapper[4580]: E0112 13:08:57.134199 4580 secret.go:188] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 12 13:08:57 crc kubenswrapper[4580]: E0112 13:08:57.134211 4580 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/0b75fc88-ca92-4fb9-826b-61322c929d1b-trusted-ca podName:0b75fc88-ca92-4fb9-826b-61322c929d1b nodeName:}" failed. No retries permitted until 2026-01-12 13:08:58.134201054 +0000 UTC m=+137.178419744 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/0b75fc88-ca92-4fb9-826b-61322c929d1b-trusted-ca") pod "console-operator-58897d9998-twpq4" (UID: "0b75fc88-ca92-4fb9-826b-61322c929d1b") : failed to sync configmap cache: timed out waiting for the condition Jan 12 13:08:57 crc kubenswrapper[4580]: E0112 13:08:57.134235 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-user-template-error podName:7db5f72b-6a3e-4a3d-96bd-3e10756b605c nodeName:}" failed. No retries permitted until 2026-01-12 13:08:58.134225129 +0000 UTC m=+137.178443820 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-user-template-error") pod "oauth-openshift-558db77b4-8sbrm" (UID: "7db5f72b-6a3e-4a3d-96bd-3e10756b605c") : failed to sync secret cache: timed out waiting for the condition Jan 12 13:08:57 crc kubenswrapper[4580]: E0112 13:08:57.134197 4580 projected.go:263] Couldn't get secret openshift-image-registry/image-registry-tls: failed to sync secret cache: timed out waiting for the condition Jan 12 13:08:57 crc kubenswrapper[4580]: E0112 13:08:57.134254 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/15e6f097-ed23-4797-9506-8c95af1dd7f9-serving-cert podName:15e6f097-ed23-4797-9506-8c95af1dd7f9 nodeName:}" failed. No retries permitted until 2026-01-12 13:08:58.134248142 +0000 UTC m=+137.178466833 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/15e6f097-ed23-4797-9506-8c95af1dd7f9-serving-cert") pod "kube-apiserver-operator-766d6c64bb-262n7" (UID: "15e6f097-ed23-4797-9506-8c95af1dd7f9") : failed to sync secret cache: timed out waiting for the condition Jan 12 13:08:57 crc kubenswrapper[4580]: E0112 13:08:57.134265 4580 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-697d97f7c8-hxkcl: failed to sync secret cache: timed out waiting for the condition Jan 12 13:08:57 crc kubenswrapper[4580]: E0112 13:08:57.134311 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cd2ced26-b320-44a3-aa98-457376b3d8c8-registry-tls podName:cd2ced26-b320-44a3-aa98-457376b3d8c8 nodeName:}" failed. No retries permitted until 2026-01-12 13:08:57.634296884 +0000 UTC m=+136.678515574 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/cd2ced26-b320-44a3-aa98-457376b3d8c8-registry-tls") pod "image-registry-697d97f7c8-hxkcl" (UID: "cd2ced26-b320-44a3-aa98-457376b3d8c8") : failed to sync secret cache: timed out waiting for the condition Jan 12 13:08:57 crc kubenswrapper[4580]: E0112 13:08:57.134692 4580 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: failed to sync configmap cache: timed out waiting for the condition Jan 12 13:08:57 crc kubenswrapper[4580]: E0112 13:08:57.134732 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/170f7f91-2fd3-49a9-a31e-5d5c8ae98cd3-marketplace-trusted-ca podName:170f7f91-2fd3-49a9-a31e-5d5c8ae98cd3 nodeName:}" failed. No retries permitted until 2026-01-12 13:08:57.634723243 +0000 UTC m=+136.678941932 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/170f7f91-2fd3-49a9-a31e-5d5c8ae98cd3-marketplace-trusted-ca") pod "marketplace-operator-79b997595-hlckg" (UID: "170f7f91-2fd3-49a9-a31e-5d5c8ae98cd3") : failed to sync configmap cache: timed out waiting for the condition Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.138249 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 12 13:08:57 crc kubenswrapper[4580]: W0112 13:08:57.138455 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3a22133_fac4_42ba_9967_974e82a855aa.slice/crio-847437b89e4f90c8f8448ad4611ff6470e2370a2b52af232bcd8adc533fcbe6c WatchSource:0}: Error finding container 847437b89e4f90c8f8448ad4611ff6470e2370a2b52af232bcd8adc533fcbe6c: Status 404 returned error can't find the container with id 847437b89e4f90c8f8448ad4611ff6470e2370a2b52af232bcd8adc533fcbe6c Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.156935 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxkcl\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl" Jan 12 13:08:57 crc kubenswrapper[4580]: E0112 13:08:57.157449 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-12 13:08:57.657427968 +0000 UTC m=+136.701646659 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxkcl" (UID: "cd2ced26-b320-44a3-aa98-457376b3d8c8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.158652 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.178835 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.198297 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.218315 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.238772 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.258095 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mvq8\" (UniqueName: \"kubernetes.io/projected/257c071c-ccf5-4229-b3b0-65e5b59f5edb-kube-api-access-7mvq8\") pod \"openshift-config-operator-7777fb866f-vpzdt\" (UID: \"257c071c-ccf5-4229-b3b0-65e5b59f5edb\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vpzdt" Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.258515 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.258761 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 12 13:08:57 crc kubenswrapper[4580]: E0112 13:08:57.258997 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-12 13:08:57.758984848 +0000 UTC m=+136.803203537 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.259394 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxkcl\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl" Jan 12 13:08:57 crc kubenswrapper[4580]: E0112 13:08:57.260160 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-12 13:08:57.76015055 +0000 UTC m=+136.804369240 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxkcl" (UID: "cd2ced26-b320-44a3-aa98-457376b3d8c8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.284229 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.303001 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.318978 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.344290 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.358813 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.359880 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 12 13:08:57 crc kubenswrapper[4580]: E0112 13:08:57.359989 4580 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-12 13:08:57.859970728 +0000 UTC m=+136.904189418 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.360504 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxkcl\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl" Jan 12 13:08:57 crc kubenswrapper[4580]: E0112 13:08:57.360924 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-12 13:08:57.860895039 +0000 UTC m=+136.905113729 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxkcl" (UID: "cd2ced26-b320-44a3-aa98-457376b3d8c8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.378303 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.398277 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.418520 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.438011 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.458923 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.462068 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 12 13:08:57 crc kubenswrapper[4580]: E0112 13:08:57.462215 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-12 13:08:57.9621966 +0000 UTC m=+137.006415290 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.462903 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxkcl\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl" Jan 12 13:08:57 crc kubenswrapper[4580]: E0112 13:08:57.464010 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-12 13:08:57.963992602 +0000 UTC m=+137.008211293 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxkcl" (UID: "cd2ced26-b320-44a3-aa98-457376b3d8c8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.469268 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vpzdt" Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.478284 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 12 13:08:57 crc kubenswrapper[4580]: E0112 13:08:57.490639 4580 projected.go:288] Couldn't get configMap openshift-authentication/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 12 13:08:57 crc kubenswrapper[4580]: E0112 13:08:57.490695 4580 projected.go:194] Error preparing data for projected volume kube-api-access-msqlv for pod openshift-authentication/oauth-openshift-558db77b4-8sbrm: failed to sync configmap cache: timed out waiting for the condition Jan 12 13:08:57 crc kubenswrapper[4580]: E0112 13:08:57.490764 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-kube-api-access-msqlv podName:7db5f72b-6a3e-4a3d-96bd-3e10756b605c nodeName:}" failed. No retries permitted until 2026-01-12 13:08:57.990744237 +0000 UTC m=+137.034962927 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-msqlv" (UniqueName: "kubernetes.io/projected/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-kube-api-access-msqlv") pod "oauth-openshift-558db77b4-8sbrm" (UID: "7db5f72b-6a3e-4a3d-96bd-3e10756b605c") : failed to sync configmap cache: timed out waiting for the condition Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.507606 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.518492 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.537753 4580 request.go:700] Waited for 1.627908675s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.538760 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.559388 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.564739 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.565328 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4krnp\" (UniqueName: 
\"kubernetes.io/projected/0b75fc88-ca92-4fb9-826b-61322c929d1b-kube-api-access-4krnp\") pod \"console-operator-58897d9998-twpq4\" (UID: \"0b75fc88-ca92-4fb9-826b-61322c929d1b\") " pod="openshift-console-operator/console-operator-58897d9998-twpq4" Jan 12 13:08:57 crc kubenswrapper[4580]: E0112 13:08:57.565446 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-12 13:08:58.065429377 +0000 UTC m=+137.109648068 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:08:57 crc kubenswrapper[4580]: E0112 13:08:57.568596 4580 projected.go:288] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 12 13:08:57 crc kubenswrapper[4580]: E0112 13:08:57.568634 4580 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-262n7: failed to sync configmap cache: timed out waiting for the condition Jan 12 13:08:57 crc kubenswrapper[4580]: E0112 13:08:57.568694 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/15e6f097-ed23-4797-9506-8c95af1dd7f9-kube-api-access podName:15e6f097-ed23-4797-9506-8c95af1dd7f9 nodeName:}" failed. No retries permitted until 2026-01-12 13:08:58.068678773 +0000 UTC m=+137.112897463 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/15e6f097-ed23-4797-9506-8c95af1dd7f9-kube-api-access") pod "kube-apiserver-operator-766d6c64bb-262n7" (UID: "15e6f097-ed23-4797-9506-8c95af1dd7f9") : failed to sync configmap cache: timed out waiting for the condition Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.578537 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.598447 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.618821 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.639536 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.639982 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-vpzdt"] Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.658663 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.666414 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cd2ced26-b320-44a3-aa98-457376b3d8c8-registry-tls\") pod \"image-registry-697d97f7c8-hxkcl\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl" Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.667037 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/170f7f91-2fd3-49a9-a31e-5d5c8ae98cd3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hlckg\" (UID: \"170f7f91-2fd3-49a9-a31e-5d5c8ae98cd3\") " pod="openshift-marketplace/marketplace-operator-79b997595-hlckg" Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.667178 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxkcl\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl" Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.667306 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/170f7f91-2fd3-49a9-a31e-5d5c8ae98cd3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hlckg\" (UID: \"170f7f91-2fd3-49a9-a31e-5d5c8ae98cd3\") " pod="openshift-marketplace/marketplace-operator-79b997595-hlckg" Jan 12 13:08:57 crc kubenswrapper[4580]: E0112 13:08:57.667682 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-12 13:08:58.167666511 +0000 UTC m=+137.211885201 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxkcl" (UID: "cd2ced26-b320-44a3-aa98-457376b3d8c8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.667942 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/170f7f91-2fd3-49a9-a31e-5d5c8ae98cd3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hlckg\" (UID: \"170f7f91-2fd3-49a9-a31e-5d5c8ae98cd3\") " pod="openshift-marketplace/marketplace-operator-79b997595-hlckg" Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.669460 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cd2ced26-b320-44a3-aa98-457376b3d8c8-registry-tls\") pod \"image-registry-697d97f7c8-hxkcl\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl" Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.672051 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/170f7f91-2fd3-49a9-a31e-5d5c8ae98cd3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hlckg\" (UID: \"170f7f91-2fd3-49a9-a31e-5d5c8ae98cd3\") " pod="openshift-marketplace/marketplace-operator-79b997595-hlckg" Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.679267 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.684501 4580 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-mdj9k\" (UniqueName: \"kubernetes.io/projected/fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb-kube-api-access-mdj9k\") pod \"etcd-operator-b45778765-nzcxb\" (UID: \"fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nzcxb" Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.698916 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.719025 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.752674 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84lhw\" (UniqueName: \"kubernetes.io/projected/170f7f91-2fd3-49a9-a31e-5d5c8ae98cd3-kube-api-access-84lhw\") pod \"marketplace-operator-79b997595-hlckg\" (UID: \"170f7f91-2fd3-49a9-a31e-5d5c8ae98cd3\") " pod="openshift-marketplace/marketplace-operator-79b997595-hlckg" Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.768933 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 12 13:08:57 crc kubenswrapper[4580]: E0112 13:08:57.769192 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-12 13:08:58.269171332 +0000 UTC m=+137.313390023 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.769900 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxkcl\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl" Jan 12 13:08:57 crc kubenswrapper[4580]: E0112 13:08:57.770319 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-12 13:08:58.270307241 +0000 UTC m=+137.314525930 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxkcl" (UID: "cd2ced26-b320-44a3-aa98-457376b3d8c8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.774958 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hf7dd\" (UniqueName: \"kubernetes.io/projected/cd2ced26-b320-44a3-aa98-457376b3d8c8-kube-api-access-hf7dd\") pod \"image-registry-697d97f7c8-hxkcl\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl" Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.791219 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cd2ced26-b320-44a3-aa98-457376b3d8c8-bound-sa-token\") pod \"image-registry-697d97f7c8-hxkcl\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl" Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.818000 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzglc\" (UniqueName: \"kubernetes.io/projected/99294305-b2d1-431b-916d-46f9a599b523-kube-api-access-xzglc\") pod \"service-ca-operator-777779d784-9bm7b\" (UID: \"99294305-b2d1-431b-916d-46f9a599b523\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9bm7b" Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.834577 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v47pj\" (UniqueName: \"kubernetes.io/projected/df323b67-b600-430b-8712-a278bb06d806-kube-api-access-v47pj\") pod 
\"machine-config-server-rpc5j\" (UID: \"df323b67-b600-430b-8712-a278bb06d806\") " pod="openshift-machine-config-operator/machine-config-server-rpc5j" Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.852346 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k29rc\" (UniqueName: \"kubernetes.io/projected/52498b90-7457-4a64-9993-4f58794eecc0-kube-api-access-k29rc\") pod \"packageserver-d55dfcdfc-snhpg\" (UID: \"52498b90-7457-4a64-9993-4f58794eecc0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-snhpg" Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.853091 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hlckg" Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.871169 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 12 13:08:57 crc kubenswrapper[4580]: E0112 13:08:57.871409 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-12 13:08:58.371391143 +0000 UTC m=+137.415609834 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.871656 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxkcl\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl" Jan 12 13:08:57 crc kubenswrapper[4580]: E0112 13:08:57.872029 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-12 13:08:58.372015383 +0000 UTC m=+137.416234073 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxkcl" (UID: "cd2ced26-b320-44a3-aa98-457376b3d8c8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.872065 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-89mg9" event={"ID":"bdbff407-68ae-456c-b67e-40d0e47fba7b","Type":"ContainerStarted","Data":"79a99f7ae34545e0f58214fbd99566fbf96402a6b8618a297dbfa40c0d8f569f"} Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.872096 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-89mg9" event={"ID":"bdbff407-68ae-456c-b67e-40d0e47fba7b","Type":"ContainerStarted","Data":"45a3cff125a5bafe4e55f3a33554e802a6f74dc7d142a04b7b01740b1324c5b6"} Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.872132 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-89mg9" event={"ID":"bdbff407-68ae-456c-b67e-40d0e47fba7b","Type":"ContainerStarted","Data":"1c1985ce5a0ae2aa8fbe038cb733c0f1d5464f7cdd3c1097f17d62dcf48bd57e"} Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.873025 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25kzq\" (UniqueName: \"kubernetes.io/projected/9a641a8b-c632-45bd-8606-e3fa10d531b8-kube-api-access-25kzq\") pod \"openshift-apiserver-operator-796bbdcf4f-5b57w\" (UID: \"9a641a8b-c632-45bd-8606-e3fa10d531b8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5b57w" Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.876296 4580 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gz9sn" event={"ID":"92864954-4c11-4fae-a089-c8fc35ae755e","Type":"ContainerStarted","Data":"4ed496179c7006cfebe34a13138cec5da411847bf66a4597c599938ef0322f4d"} Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.876336 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gz9sn" event={"ID":"92864954-4c11-4fae-a089-c8fc35ae755e","Type":"ContainerStarted","Data":"f7f1fa0ca2a493e141148f728b6ee19c544241503f244ea229ea884df5af4b0e"} Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.876348 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gz9sn" event={"ID":"92864954-4c11-4fae-a089-c8fc35ae755e","Type":"ContainerStarted","Data":"d17e2f16d04e648027d52d0c0b89187b230f08508ffe5dfb4dee0c6a9eec3f1a"} Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.878669 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jbnkd" event={"ID":"3cd49599-ac6f-4d9f-9d86-2f6ff90ddbf9","Type":"ContainerStarted","Data":"933e8d0b54b2a3482c0fe25de386b9dd299d5ed725be5c1c8482a24f03fa485a"} Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.878721 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jbnkd" event={"ID":"3cd49599-ac6f-4d9f-9d86-2f6ff90ddbf9","Type":"ContainerStarted","Data":"f50ea97225aa3fd210ca1e5ef4e9fcc6530fb6816b048baff4cf1dd601a3156f"} Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.881651 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z6r47" 
event={"ID":"e3a22133-fac4-42ba-9967-974e82a855aa","Type":"ContainerStarted","Data":"c081e543a2775ece57b9cdf5e30f971e0a483c8f628ee1a88daabaabf4c4bc09"} Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.881683 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z6r47" event={"ID":"e3a22133-fac4-42ba-9967-974e82a855aa","Type":"ContainerStarted","Data":"847437b89e4f90c8f8448ad4611ff6470e2370a2b52af232bcd8adc533fcbe6c"} Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.882228 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z6r47" Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.884740 4580 generic.go:334] "Generic (PLEG): container finished" podID="993fd772-2adc-4e57-8ccd-7bcc86928a21" containerID="0a05e7c53e192269a242e7ea343424e6c2f2772eb947257c3cb1d3ba77ad2be2" exitCode=0 Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.884808 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-mw8xc" event={"ID":"993fd772-2adc-4e57-8ccd-7bcc86928a21","Type":"ContainerDied","Data":"0a05e7c53e192269a242e7ea343424e6c2f2772eb947257c3cb1d3ba77ad2be2"} Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.890424 4580 generic.go:334] "Generic (PLEG): container finished" podID="257c071c-ccf5-4229-b3b0-65e5b59f5edb" containerID="297a484936b4ba36da107b4678d755e52f20028f9f1c910770ab6db1f468cc86" exitCode=0 Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.890459 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-9bm7b" Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.890487 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vpzdt" event={"ID":"257c071c-ccf5-4229-b3b0-65e5b59f5edb","Type":"ContainerDied","Data":"297a484936b4ba36da107b4678d755e52f20028f9f1c910770ab6db1f468cc86"} Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.890508 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vpzdt" event={"ID":"257c071c-ccf5-4229-b3b0-65e5b59f5edb","Type":"ContainerStarted","Data":"0b59e8f0b31d42c1607f6ea47e5d8d9bd9fe9736db374ac232d721d63e61787a"} Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.891208 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf8bd3ba-56eb-4d09-96e2-61a9308b8bde-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-qbtk4\" (UID: \"cf8bd3ba-56eb-4d09-96e2-61a9308b8bde\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qbtk4" Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.891776 4580 generic.go:334] "Generic (PLEG): container finished" podID="277886cf-d2d4-42e5-b2dc-253fd32648f8" containerID="6e941cee0c275ea78c8897b2085af6da8f28bc29ded2496f6e5ab93c63c145d9" exitCode=0 Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.892223 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pk8kj" event={"ID":"277886cf-d2d4-42e5-b2dc-253fd32648f8","Type":"ContainerDied","Data":"6e941cee0c275ea78c8897b2085af6da8f28bc29ded2496f6e5ab93c63c145d9"} Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.892249 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pk8kj" 
event={"ID":"277886cf-d2d4-42e5-b2dc-253fd32648f8","Type":"ContainerStarted","Data":"129eec7d415d45c6067974caac0d58e962cb8b842555422a3c6bb02b60e9a4e0"} Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.894831 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mn56v" event={"ID":"12d94033-10bf-43ea-a4de-297df750ad35","Type":"ContainerStarted","Data":"6ffa4d142e5b882a0ca35ef0b7a4d5cd3a7e0ca461f79faddec8523f6fe93de1"} Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.894862 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mn56v" event={"ID":"12d94033-10bf-43ea-a4de-297df750ad35","Type":"ContainerStarted","Data":"de94541faa083c8287deb28a6a638db319fdc89f105ede6319103bef0eb4e106"} Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.895491 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mn56v" Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.900141 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z6r47" Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.900416 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qbtk4" Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.903163 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-snhpg" Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.904409 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mn56v" Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.910162 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lfcct" event={"ID":"0e19129d-499f-4d25-ad32-fd3dddb533f2","Type":"ContainerStarted","Data":"4e3c238edf46cd1f5d543cf7ae775f26309e23b8e48677c68df72b9462b78b84"} Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.910186 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lfcct" event={"ID":"0e19129d-499f-4d25-ad32-fd3dddb533f2","Type":"ContainerStarted","Data":"950fb3a006448acd871a780c279bc0cea0c276daa728478b13a4f2bf6ae59465"} Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.913600 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h47ds\" (UniqueName: \"kubernetes.io/projected/7f8d60b6-387f-4e18-9332-60acade1e93c-kube-api-access-h47ds\") pod \"dns-default-c7ntw\" (UID: \"7f8d60b6-387f-4e18-9332-60acade1e93c\") " pod="openshift-dns/dns-default-c7ntw" Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.913626 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lr76c" event={"ID":"fef35e25-bd51-4dd9-8d15-7ce38326982b","Type":"ContainerStarted","Data":"d34c953db3d085aabcec0ebdcad0d767f4dc45029fc922011624704f132296f6"} Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.913651 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lr76c" 
event={"ID":"fef35e25-bd51-4dd9-8d15-7ce38326982b","Type":"ContainerStarted","Data":"df29be8b0cf210f1f5aeb2ccb4d07bfdfd951bd5ab51e7a7a2376b5bb1a0216b"} Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.940005 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dd44m\" (UniqueName: \"kubernetes.io/projected/46cfce88-b8c3-48f9-a957-c6eb80c59166-kube-api-access-dd44m\") pod \"package-server-manager-789f6589d5-s8vg5\" (UID: \"46cfce88-b8c3-48f9-a957-c6eb80c59166\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s8vg5" Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.952911 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbflp\" (UniqueName: \"kubernetes.io/projected/90150eba-9b4f-485f-97c3-89d410cb5851-kube-api-access-xbflp\") pod \"console-f9d7485db-5tdwv\" (UID: \"90150eba-9b4f-485f-97c3-89d410cb5851\") " pod="openshift-console/console-f9d7485db-5tdwv" Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.970477 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzmtz\" (UniqueName: \"kubernetes.io/projected/4c9595e5-9b32-4af8-b872-cf027b10a334-kube-api-access-kzmtz\") pod \"authentication-operator-69f744f599-pq2bq\" (UID: \"4c9595e5-9b32-4af8-b872-cf027b10a334\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pq2bq" Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.974199 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 12 13:08:57 crc kubenswrapper[4580]: E0112 13:08:57.974333 4580 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-12 13:08:58.474315684 +0000 UTC m=+137.518534374 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.974756 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxkcl\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl" Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.975050 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-pq2bq" Jan 12 13:08:57 crc kubenswrapper[4580]: E0112 13:08:57.976430 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-12 13:08:58.476407692 +0000 UTC m=+137.520626381 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxkcl" (UID: "cd2ced26-b320-44a3-aa98-457376b3d8c8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.980234 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5b57w" Jan 12 13:08:57 crc kubenswrapper[4580]: I0112 13:08:57.994690 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcvh6\" (UniqueName: \"kubernetes.io/projected/1d75f42a-a600-4c36-9da8-1f91f80336bc-kube-api-access-tcvh6\") pod \"ingress-operator-5b745b69d9-pv5tk\" (UID: \"1d75f42a-a600-4c36-9da8-1f91f80336bc\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pv5tk" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.013154 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s8vg5" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.016344 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24qsq\" (UniqueName: \"kubernetes.io/projected/037a95c2-1119-4fd8-8499-682fba2f03ea-kube-api-access-24qsq\") pod \"collect-profiles-29470380-nk5n7\" (UID: \"037a95c2-1119-4fd8-8499-682fba2f03ea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29470380-nk5n7" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.032661 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxxm9\" (UniqueName: \"kubernetes.io/projected/73c37e67-6b89-4830-8723-f6716badcaa4-kube-api-access-gxxm9\") pod \"controller-manager-879f6c89f-cbltx\" (UID: \"73c37e67-6b89-4830-8723-f6716badcaa4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cbltx" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.040618 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29470380-nk5n7" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.062586 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1d75f42a-a600-4c36-9da8-1f91f80336bc-bound-sa-token\") pod \"ingress-operator-5b745b69d9-pv5tk\" (UID: \"1d75f42a-a600-4c36-9da8-1f91f80336bc\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pv5tk" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.071473 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvlsd\" (UniqueName: \"kubernetes.io/projected/e7bdddf2-1c7b-4aa3-81f9-9df58a6e92b1-kube-api-access-xvlsd\") pod \"multus-admission-controller-857f4d67dd-2zrh8\" (UID: \"e7bdddf2-1c7b-4aa3-81f9-9df58a6e92b1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-2zrh8" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.077482 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.077806 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hlckg"] Jan 12 13:08:58 crc kubenswrapper[4580]: E0112 13:08:58.078927 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-12 13:08:58.578901897 +0000 UTC m=+137.623120586 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.079034 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/15e6f097-ed23-4797-9506-8c95af1dd7f9-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-262n7\" (UID: \"15e6f097-ed23-4797-9506-8c95af1dd7f9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-262n7" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.079620 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msqlv\" (UniqueName: \"kubernetes.io/projected/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-kube-api-access-msqlv\") pod \"oauth-openshift-558db77b4-8sbrm\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.079673 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxkcl\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl" Jan 12 13:08:58 crc kubenswrapper[4580]: E0112 13:08:58.081979 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2026-01-12 13:08:58.581966224 +0000 UTC m=+137.626184914 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxkcl" (UID: "cd2ced26-b320-44a3-aa98-457376b3d8c8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.085685 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/15e6f097-ed23-4797-9506-8c95af1dd7f9-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-262n7\" (UID: \"15e6f097-ed23-4797-9506-8c95af1dd7f9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-262n7" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.090812 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-c7ntw" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.095630 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msqlv\" (UniqueName: \"kubernetes.io/projected/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-kube-api-access-msqlv\") pod \"oauth-openshift-558db77b4-8sbrm\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.096878 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-rpc5j" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.102412 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a403ac95-5e4f-4234-9c0c-daf0b3831850-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-qjnxc\" (UID: \"a403ac95-5e4f-4234-9c0c-daf0b3831850\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qjnxc" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.112661 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dp9p\" (UniqueName: \"kubernetes.io/projected/dafdf187-36fd-4d32-b188-07a5cd4474a9-kube-api-access-7dp9p\") pod \"cluster-samples-operator-665b6dd947-jzjkt\" (UID: \"dafdf187-36fd-4d32-b188-07a5cd4474a9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jzjkt" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.134990 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-962rs\" (UniqueName: \"kubernetes.io/projected/a403ac95-5e4f-4234-9c0c-daf0b3831850-kube-api-access-962rs\") pod \"cluster-image-registry-operator-dc59b4c8b-qjnxc\" (UID: \"a403ac95-5e4f-4234-9c0c-daf0b3831850\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qjnxc" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.142498 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-9bm7b"] Jan 12 13:08:58 crc kubenswrapper[4580]: W0112 13:08:58.145435 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf323b67_b600_430b_8712_a278bb06d806.slice/crio-6114cbd8e6d69c990b2ee8852e7442952e3b6613318df0fb606ce30d4f089498 WatchSource:0}: Error finding container 
6114cbd8e6d69c990b2ee8852e7442952e3b6613318df0fb606ce30d4f089498: Status 404 returned error can't find the container with id 6114cbd8e6d69c990b2ee8852e7442952e3b6613318df0fb606ce30d4f089498 Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.154354 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-928j4\" (UniqueName: \"kubernetes.io/projected/f0bdcb8e-d435-41c5-a140-1b17752fa7ec-kube-api-access-928j4\") pod \"router-default-5444994796-phs5z\" (UID: \"f0bdcb8e-d435-41c5-a140-1b17752fa7ec\") " pod="openshift-ingress/router-default-5444994796-phs5z" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.178510 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qbtk4"] Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.180562 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.180817 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-8sbrm\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.180846 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-user-template-provider-selection\") pod 
\"oauth-openshift-558db77b4-8sbrm\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.180880 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b75fc88-ca92-4fb9-826b-61322c929d1b-serving-cert\") pod \"console-operator-58897d9998-twpq4\" (UID: \"0b75fc88-ca92-4fb9-826b-61322c929d1b\") " pod="openshift-console-operator/console-operator-58897d9998-twpq4" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.180921 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb-etcd-service-ca\") pod \"etcd-operator-b45778765-nzcxb\" (UID: \"fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nzcxb" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.180960 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-8sbrm\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.180997 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b75fc88-ca92-4fb9-826b-61322c929d1b-config\") pod \"console-operator-58897d9998-twpq4\" (UID: \"0b75fc88-ca92-4fb9-826b-61322c929d1b\") " pod="openshift-console-operator/console-operator-58897d9998-twpq4" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.181020 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" 
(UniqueName: \"kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-8sbrm\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.181037 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5f3179c7-0610-4e19-91cd-9a84d32ac850-proxy-tls\") pod \"machine-config-operator-74547568cd-xntjp\" (UID: \"5f3179c7-0610-4e19-91cd-9a84d32ac850\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xntjp" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.181054 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-8sbrm\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.181084 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb-etcd-client\") pod \"etcd-operator-b45778765-nzcxb\" (UID: \"fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nzcxb" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.181149 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15e6f097-ed23-4797-9506-8c95af1dd7f9-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-262n7\" (UID: \"15e6f097-ed23-4797-9506-8c95af1dd7f9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-262n7" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 
13:08:58.181170 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-8sbrm\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.181198 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0b75fc88-ca92-4fb9-826b-61322c929d1b-trusted-ca\") pod \"console-operator-58897d9998-twpq4\" (UID: \"0b75fc88-ca92-4fb9-826b-61322c929d1b\") " pod="openshift-console-operator/console-operator-58897d9998-twpq4" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.181261 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-audit-policies\") pod \"oauth-openshift-558db77b4-8sbrm\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.181278 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15e6f097-ed23-4797-9506-8c95af1dd7f9-config\") pod \"kube-apiserver-operator-766d6c64bb-262n7\" (UID: \"15e6f097-ed23-4797-9506-8c95af1dd7f9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-262n7" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.181882 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15e6f097-ed23-4797-9506-8c95af1dd7f9-config\") pod \"kube-apiserver-operator-766d6c64bb-262n7\" (UID: \"15e6f097-ed23-4797-9506-8c95af1dd7f9\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-262n7" Jan 12 13:08:58 crc kubenswrapper[4580]: E0112 13:08:58.181961 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-12 13:08:58.681948094 +0000 UTC m=+137.726166784 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.182430 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-8sbrm\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.186767 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5f3179c7-0610-4e19-91cd-9a84d32ac850-proxy-tls\") pod \"machine-config-operator-74547568cd-xntjp\" (UID: \"5f3179c7-0610-4e19-91cd-9a84d32ac850\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xntjp" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.188389 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-8sbrm\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.189781 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb-etcd-service-ca\") pod \"etcd-operator-b45778765-nzcxb\" (UID: \"fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nzcxb" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.189836 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b75fc88-ca92-4fb9-826b-61322c929d1b-config\") pod \"console-operator-58897d9998-twpq4\" (UID: \"0b75fc88-ca92-4fb9-826b-61322c929d1b\") " pod="openshift-console-operator/console-operator-58897d9998-twpq4" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.191434 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-audit-policies\") pod \"oauth-openshift-558db77b4-8sbrm\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.193061 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0b75fc88-ca92-4fb9-826b-61322c929d1b-trusted-ca\") pod \"console-operator-58897d9998-twpq4\" (UID: \"0b75fc88-ca92-4fb9-826b-61322c929d1b\") " pod="openshift-console-operator/console-operator-58897d9998-twpq4" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.193610 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etcd-client\" (UniqueName: \"kubernetes.io/secret/fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb-etcd-client\") pod \"etcd-operator-b45778765-nzcxb\" (UID: \"fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nzcxb" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.194601 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-8sbrm\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.197276 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15e6f097-ed23-4797-9506-8c95af1dd7f9-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-262n7\" (UID: \"15e6f097-ed23-4797-9506-8c95af1dd7f9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-262n7" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.197367 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wcgm\" (UniqueName: \"kubernetes.io/projected/82a887ec-4d3a-4533-aa32-ee1eab68aa86-kube-api-access-4wcgm\") pod \"dns-operator-744455d44c-rs6cr\" (UID: \"82a887ec-4d3a-4533-aa32-ee1eab68aa86\") " pod="openshift-dns-operator/dns-operator-744455d44c-rs6cr" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.197469 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-8sbrm\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm" Jan 12 13:08:58 crc 
kubenswrapper[4580]: I0112 13:08:58.201727 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-8sbrm\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.202121 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b75fc88-ca92-4fb9-826b-61322c929d1b-serving-cert\") pod \"console-operator-58897d9998-twpq4\" (UID: \"0b75fc88-ca92-4fb9-826b-61322c929d1b\") " pod="openshift-console-operator/console-operator-58897d9998-twpq4" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.207923 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-8sbrm\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.210844 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cv4cl\" (UniqueName: \"kubernetes.io/projected/405ad898-9997-4efd-b8a8-f878c39784b5-kube-api-access-cv4cl\") pod \"csi-hostpathplugin-zdvz7\" (UID: \"405ad898-9997-4efd-b8a8-f878c39784b5\") " pod="hostpath-provisioner/csi-hostpathplugin-zdvz7" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.214167 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-cbltx" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.214916 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-snhpg"] Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.215676 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tg9qs\" (UniqueName: \"kubernetes.io/projected/8fe2afba-2d6f-47d7-83c1-aa3fc9fa7c56-kube-api-access-tg9qs\") pod \"kube-storage-version-migrator-operator-b67b599dd-klg87\" (UID: \"8fe2afba-2d6f-47d7-83c1-aa3fc9fa7c56\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-klg87" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.216560 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-5tdwv" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.238648 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mh2tb\" (UniqueName: \"kubernetes.io/projected/2b783eb7-ca7b-41db-8342-bfdd6fdfb9b1-kube-api-access-mh2tb\") pod \"downloads-7954f5f757-2hzdj\" (UID: \"2b783eb7-ca7b-41db-8342-bfdd6fdfb9b1\") " pod="openshift-console/downloads-7954f5f757-2hzdj" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.247733 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jzjkt" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.255964 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfpnt\" (UniqueName: \"kubernetes.io/projected/b70f0bc6-36e7-4a25-854b-4ca6364e6aa0-kube-api-access-kfpnt\") pod \"ingress-canary-z866m\" (UID: \"b70f0bc6-36e7-4a25-854b-4ca6364e6aa0\") " pod="openshift-ingress-canary/ingress-canary-z866m" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.257207 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-2hzdj" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.267648 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pv5tk" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.275704 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzrqv\" (UniqueName: \"kubernetes.io/projected/a5c054e2-c14d-43cb-a432-ad8e9022b010-kube-api-access-nzrqv\") pod \"service-ca-9c57cc56f-zkcs6\" (UID: \"a5c054e2-c14d-43cb-a432-ad8e9022b010\") " pod="openshift-service-ca/service-ca-9c57cc56f-zkcs6" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.282269 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxkcl\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl" Jan 12 13:08:58 crc kubenswrapper[4580]: E0112 13:08:58.282694 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-12 13:08:58.782679658 +0000 UTC m=+137.826898348 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxkcl" (UID: "cd2ced26-b320-44a3-aa98-457376b3d8c8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.286629 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-klg87" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.293787 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-phs5z" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.296866 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5glbm\" (UniqueName: \"kubernetes.io/projected/78d6fc59-606f-4a88-b7be-467d9c41160d-kube-api-access-5glbm\") pod \"migrator-59844c95c7-ch5j5\" (UID: \"78d6fc59-606f-4a88-b7be-467d9c41160d\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ch5j5" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.300123 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-rs6cr" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.310896 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f3417afd-f5ef-4c91-990f-22c8a77f2713-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-72pwr\" (UID: \"f3417afd-f5ef-4c91-990f-22c8a77f2713\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-72pwr" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.318232 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qjnxc" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.325351 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-2zrh8" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.336629 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ch5j5" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.337498 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbg6s\" (UniqueName: \"kubernetes.io/projected/f5410400-7426-4922-8f12-79e9bb359b58-kube-api-access-cbg6s\") pod \"olm-operator-6b444d44fb-jxm6c\" (UID: \"f5410400-7426-4922-8f12-79e9bb359b58\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jxm6c" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.346454 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-zkcs6" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.353499 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jxm6c" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.375262 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-zdvz7" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.383687 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 12 13:08:58 crc kubenswrapper[4580]: E0112 13:08:58.384022 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-12 13:08:58.884006146 +0000 UTC m=+137.928224837 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.384460 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-z866m" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.386932 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-twpq4" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.404085 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-nzcxb" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.404655 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29470380-nk5n7"] Jan 12 13:08:58 crc kubenswrapper[4580]: W0112 13:08:58.408358 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf0bdcb8e_d435_41c5_a140_1b17752fa7ec.slice/crio-490eb542a332b18d63f0acb131569b2fb18f4db5566ec60f65c32805e62f8d8a WatchSource:0}: Error finding container 490eb542a332b18d63f0acb131569b2fb18f4db5566ec60f65c32805e62f8d8a: Status 404 returned error can't find the container with id 490eb542a332b18d63f0acb131569b2fb18f4db5566ec60f65c32805e62f8d8a Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.425074 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.436931 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-262n7" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.444448 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xntjp" Jan 12 13:08:58 crc kubenswrapper[4580]: W0112 13:08:58.446491 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod037a95c2_1119_4fd8_8499_682fba2f03ea.slice/crio-1c3b02a8b21c373c86a3f3bfe8f156b7cff0aec994eb9028f385cc4cd5536202 WatchSource:0}: Error finding container 1c3b02a8b21c373c86a3f3bfe8f156b7cff0aec994eb9028f385cc4cd5536202: Status 404 returned error can't find the container with id 1c3b02a8b21c373c86a3f3bfe8f156b7cff0aec994eb9028f385cc4cd5536202 Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.487632 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxkcl\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl" Jan 12 13:08:58 crc kubenswrapper[4580]: E0112 13:08:58.487997 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-12 13:08:58.987979691 +0000 UTC m=+138.032198381 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxkcl" (UID: "cd2ced26-b320-44a3-aa98-457376b3d8c8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.555728 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5b57w"] Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.561989 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-72pwr" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.589174 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 12 13:08:58 crc kubenswrapper[4580]: E0112 13:08:58.589545 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-12 13:08:59.0895199 +0000 UTC m=+138.133738590 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.648135 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-c7ntw"] Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.657001 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s8vg5"] Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.658838 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-2hzdj"] Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.665965 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-pq2bq"] Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.691179 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxkcl\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl" Jan 12 13:08:58 crc kubenswrapper[4580]: E0112 13:08:58.691463 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-12 13:08:59.191453494 +0000 UTC m=+138.235672184 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxkcl" (UID: "cd2ced26-b320-44a3-aa98-457376b3d8c8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.739652 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-5tdwv"] Jan 12 13:08:58 crc kubenswrapper[4580]: W0112 13:08:58.748745 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b783eb7_ca7b_41db_8342_bfdd6fdfb9b1.slice/crio-b29a57771121e5cf756eb63f7867124d69f3a52a829624f943f82dab33253247 WatchSource:0}: Error finding container b29a57771121e5cf756eb63f7867124d69f3a52a829624f943f82dab33253247: Status 404 returned error can't find the container with id b29a57771121e5cf756eb63f7867124d69f3a52a829624f943f82dab33253247 Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.765452 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jzjkt"] Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.765917 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-klg87"] Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.781427 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cbltx"] Jan 12 13:08:58 crc kubenswrapper[4580]: W0112 13:08:58.781497 4580 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c9595e5_9b32_4af8_b872_cf027b10a334.slice/crio-049d8cdfad8b636aefaf4e0dfa73375c1423fb1356dedf88f3e3e82c3cfd87f3 WatchSource:0}: Error finding container 049d8cdfad8b636aefaf4e0dfa73375c1423fb1356dedf88f3e3e82c3cfd87f3: Status 404 returned error can't find the container with id 049d8cdfad8b636aefaf4e0dfa73375c1423fb1356dedf88f3e3e82c3cfd87f3 Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.795575 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 12 13:08:58 crc kubenswrapper[4580]: E0112 13:08:58.795981 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-12 13:08:59.29596716 +0000 UTC m=+138.340185840 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:08:58 crc kubenswrapper[4580]: W0112 13:08:58.805970 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90150eba_9b4f_485f_97c3_89d410cb5851.slice/crio-b584e2aab43e65783d6aa1710d4056697d3a1c077fd756dabc1d53565b6ef7f6 WatchSource:0}: Error finding container b584e2aab43e65783d6aa1710d4056697d3a1c077fd756dabc1d53565b6ef7f6: Status 404 returned error can't find the container with id b584e2aab43e65783d6aa1710d4056697d3a1c077fd756dabc1d53565b6ef7f6 Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.830441 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-rs6cr"] Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.858897 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-pv5tk"] Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.882232 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jxm6c"] Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.897300 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxkcl\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl" Jan 
12 13:08:58 crc kubenswrapper[4580]: E0112 13:08:58.897625 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-12 13:08:59.397612875 +0000 UTC m=+138.441831566 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxkcl" (UID: "cd2ced26-b320-44a3-aa98-457376b3d8c8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.915172 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-r98fk"] Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.919650 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r98fk" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.930084 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r98fk"] Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.940455 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-2hzdj" event={"ID":"2b783eb7-ca7b-41db-8342-bfdd6fdfb9b1","Type":"ContainerStarted","Data":"b29a57771121e5cf756eb63f7867124d69f3a52a829624f943f82dab33253247"} Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.941458 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.945068 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-phs5z" event={"ID":"f0bdcb8e-d435-41c5-a140-1b17752fa7ec","Type":"ContainerStarted","Data":"490eb542a332b18d63f0acb131569b2fb18f4db5566ec60f65c32805e62f8d8a"} Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.946412 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-rpc5j" event={"ID":"df323b67-b600-430b-8712-a278bb06d806","Type":"ContainerStarted","Data":"7a3b377b8bfd71e21b3b7c490a124b90433e0837b0d6abaa0a1def2215983778"} Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.946442 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-rpc5j" event={"ID":"df323b67-b600-430b-8712-a278bb06d806","Type":"ContainerStarted","Data":"6114cbd8e6d69c990b2ee8852e7442952e3b6613318df0fb606ce30d4f089498"} Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.948297 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s8vg5" 
event={"ID":"46cfce88-b8c3-48f9-a957-c6eb80c59166","Type":"ContainerStarted","Data":"bcdd7e81ce3da0c09c366b36dbd2664c7ef97729b326ead23034e8fc5fd08279"} Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.950176 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-9bm7b" event={"ID":"99294305-b2d1-431b-916d-46f9a599b523","Type":"ContainerStarted","Data":"a51524b2c96cd52defc0dacd8c67c5d6d6e06646b18bdf00668e54031dad15b8"} Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.950203 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-9bm7b" event={"ID":"99294305-b2d1-431b-916d-46f9a599b523","Type":"ContainerStarted","Data":"a4017fa208ca39300ec8865e160d5ffcacd64625a7b450854581236c2c34529c"} Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.960674 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-5tdwv" event={"ID":"90150eba-9b4f-485f-97c3-89d410cb5851","Type":"ContainerStarted","Data":"b584e2aab43e65783d6aa1710d4056697d3a1c077fd756dabc1d53565b6ef7f6"} Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.966254 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pk8kj" event={"ID":"277886cf-d2d4-42e5-b2dc-253fd32648f8","Type":"ContainerStarted","Data":"f5bf4b97ceab1eaf8b4a55aa56600bc58cea4961987224b895d4ec2f61f98cc8"} Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.977926 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5b57w" event={"ID":"9a641a8b-c632-45bd-8606-e3fa10d531b8","Type":"ContainerStarted","Data":"f2aae5af7cd95002ec7ea4313c256a025cf8cbf31b989f455539248701cc3df0"} Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.985650 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-vpzdt" event={"ID":"257c071c-ccf5-4229-b3b0-65e5b59f5edb","Type":"ContainerStarted","Data":"974b38585b5d5a2829e3db20f659747b3d7e649705f41c585b49880140cb4f7a"} Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.985846 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vpzdt" Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.994212 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29470380-nk5n7" event={"ID":"037a95c2-1119-4fd8-8499-682fba2f03ea","Type":"ContainerStarted","Data":"1c3b02a8b21c373c86a3f3bfe8f156b7cff0aec994eb9028f385cc4cd5536202"} Jan 12 13:08:58 crc kubenswrapper[4580]: I0112 13:08:58.998516 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 12 13:08:58 crc kubenswrapper[4580]: E0112 13:08:58.998926 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-12 13:08:59.498910941 +0000 UTC m=+138.543129632 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.004301 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-c7ntw" event={"ID":"7f8d60b6-387f-4e18-9332-60acade1e93c","Type":"ContainerStarted","Data":"8d9de905a47ae92510db7d427a964c1ba1de8ddaa17f427efbd206d674afb83e"} Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.005865 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-pq2bq" event={"ID":"4c9595e5-9b32-4af8-b872-cf027b10a334","Type":"ContainerStarted","Data":"049d8cdfad8b636aefaf4e0dfa73375c1423fb1356dedf88f3e3e82c3cfd87f3"} Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.007509 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hlckg" event={"ID":"170f7f91-2fd3-49a9-a31e-5d5c8ae98cd3","Type":"ContainerStarted","Data":"99b0845d1c96ecd36ed14e772d527fa10f416983d083bf1a071dc1f958a41e6a"} Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.007555 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hlckg" event={"ID":"170f7f91-2fd3-49a9-a31e-5d5c8ae98cd3","Type":"ContainerStarted","Data":"220f5c0d687656afa3a771f386a79ca25a0f129f9c6c1dd92e51847fc60e37ee"} Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.007903 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-hlckg" Jan 12 13:08:59 crc 
kubenswrapper[4580]: I0112 13:08:59.011090 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-snhpg" event={"ID":"52498b90-7457-4a64-9993-4f58794eecc0","Type":"ContainerStarted","Data":"12c883d7ee9d43d283a0110ab4215c1880b1fdd5c1efba848b03b975f78b68c9"} Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.011127 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-snhpg" event={"ID":"52498b90-7457-4a64-9993-4f58794eecc0","Type":"ContainerStarted","Data":"e46f6d71cfdb4141dffd21363fdb5354384ce1768390f85e382a48cadfec8cad"} Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.011306 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-snhpg" Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.014219 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qbtk4" event={"ID":"cf8bd3ba-56eb-4d09-96e2-61a9308b8bde","Type":"ContainerStarted","Data":"2a2f0c8874a0b13dc540c037db76e6c0b6e95c06a4d016d3b0d4e9b171ddc5a6"} Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.014246 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qbtk4" event={"ID":"cf8bd3ba-56eb-4d09-96e2-61a9308b8bde","Type":"ContainerStarted","Data":"6bfbdc239a1a27a84a969713dfcb77ea16c814c45a1ef93a08f4f0f986610e43"} Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.017667 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-klg87" event={"ID":"8fe2afba-2d6f-47d7-83c1-aa3fc9fa7c56","Type":"ContainerStarted","Data":"82e4090feb26abaec4e14fd03d729339766ce694c576f72d7b48237d3d686aca"} Jan 12 
13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.025839 4580 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hlckg container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.025875 4580 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-hlckg" podUID="170f7f91-2fd3-49a9-a31e-5d5c8ae98cd3" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.026160 4580 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-snhpg container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:5443/healthz\": dial tcp 10.217.0.25:5443: connect: connection refused" start-of-body= Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.026181 4580 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-snhpg" podUID="52498b90-7457-4a64-9993-4f58794eecc0" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.25:5443/healthz\": dial tcp 10.217.0.25:5443: connect: connection refused" Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.027284 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z6r47" podStartSLOduration=119.027273331 podStartE2EDuration="1m59.027273331s" podCreationTimestamp="2026-01-12 13:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:08:59.02604994 
+0000 UTC m=+138.070268630" watchObservedRunningTime="2026-01-12 13:08:59.027273331 +0000 UTC m=+138.071492021" Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.040355 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-mw8xc" event={"ID":"993fd772-2adc-4e57-8ccd-7bcc86928a21","Type":"ContainerStarted","Data":"ed9a98a1436dbab2f60614765f0e26090497ad2ed41bfdbbdaa0a8ede42da7ff"} Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.040460 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-mw8xc" event={"ID":"993fd772-2adc-4e57-8ccd-7bcc86928a21","Type":"ContainerStarted","Data":"a17fdcdf1344be30bb7d1cbb7eb01d61766ac6905f26586253dfeac637ff2703"} Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.099753 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10b26e31-b5d9-491a-863a-1cc0a102eae8-utilities\") pod \"certified-operators-r98fk\" (UID: \"10b26e31-b5d9-491a-863a-1cc0a102eae8\") " pod="openshift-marketplace/certified-operators-r98fk" Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.099914 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10b26e31-b5d9-491a-863a-1cc0a102eae8-catalog-content\") pod \"certified-operators-r98fk\" (UID: \"10b26e31-b5d9-491a-863a-1cc0a102eae8\") " pod="openshift-marketplace/certified-operators-r98fk" Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.099999 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klwrq\" (UniqueName: \"kubernetes.io/projected/10b26e31-b5d9-491a-863a-1cc0a102eae8-kube-api-access-klwrq\") pod \"certified-operators-r98fk\" (UID: \"10b26e31-b5d9-491a-863a-1cc0a102eae8\") " 
pod="openshift-marketplace/certified-operators-r98fk" Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.100209 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxkcl\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl" Jan 12 13:08:59 crc kubenswrapper[4580]: E0112 13:08:59.103236 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-12 13:08:59.603224883 +0000 UTC m=+138.647443573 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxkcl" (UID: "cd2ced26-b320-44a3-aa98-457376b3d8c8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.108641 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jbnkd" podStartSLOduration=119.108631467 podStartE2EDuration="1m59.108631467s" podCreationTimestamp="2026-01-12 13:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:08:59.107029357 +0000 UTC m=+138.151248047" watchObservedRunningTime="2026-01-12 13:08:59.108631467 +0000 UTC m=+138.152850157" Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.117047 4580 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2l4gk"] Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.118267 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2l4gk" Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.129265 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qjnxc"] Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.138932 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.139073 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-2zrh8"] Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.140292 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2l4gk"] Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.201613 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.201930 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10b26e31-b5d9-491a-863a-1cc0a102eae8-utilities\") pod \"certified-operators-r98fk\" (UID: \"10b26e31-b5d9-491a-863a-1cc0a102eae8\") " pod="openshift-marketplace/certified-operators-r98fk" Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.202078 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/10b26e31-b5d9-491a-863a-1cc0a102eae8-catalog-content\") pod \"certified-operators-r98fk\" (UID: \"10b26e31-b5d9-491a-863a-1cc0a102eae8\") " pod="openshift-marketplace/certified-operators-r98fk" Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.202143 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klwrq\" (UniqueName: \"kubernetes.io/projected/10b26e31-b5d9-491a-863a-1cc0a102eae8-kube-api-access-klwrq\") pod \"certified-operators-r98fk\" (UID: \"10b26e31-b5d9-491a-863a-1cc0a102eae8\") " pod="openshift-marketplace/certified-operators-r98fk" Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.202695 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10b26e31-b5d9-491a-863a-1cc0a102eae8-utilities\") pod \"certified-operators-r98fk\" (UID: \"10b26e31-b5d9-491a-863a-1cc0a102eae8\") " pod="openshift-marketplace/certified-operators-r98fk" Jan 12 13:08:59 crc kubenswrapper[4580]: E0112 13:08:59.203127 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-12 13:08:59.703094403 +0000 UTC m=+138.747313093 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.204246 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10b26e31-b5d9-491a-863a-1cc0a102eae8-catalog-content\") pod \"certified-operators-r98fk\" (UID: \"10b26e31-b5d9-491a-863a-1cc0a102eae8\") " pod="openshift-marketplace/certified-operators-r98fk" Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.259007 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klwrq\" (UniqueName: \"kubernetes.io/projected/10b26e31-b5d9-491a-863a-1cc0a102eae8-kube-api-access-klwrq\") pod \"certified-operators-r98fk\" (UID: \"10b26e31-b5d9-491a-863a-1cc0a102eae8\") " pod="openshift-marketplace/certified-operators-r98fk" Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.273492 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vpzdt" podStartSLOduration=120.273476316 podStartE2EDuration="2m0.273476316s" podCreationTimestamp="2026-01-12 13:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:08:59.270646957 +0000 UTC m=+138.314865637" watchObservedRunningTime="2026-01-12 13:08:59.273476316 +0000 UTC m=+138.317695006" Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.303984 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxkcl\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl" Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.304075 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb8c503e-0907-40aa-a053-72d38311b08e-catalog-content\") pod \"community-operators-2l4gk\" (UID: \"eb8c503e-0907-40aa-a053-72d38311b08e\") " pod="openshift-marketplace/community-operators-2l4gk" Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.304120 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb8c503e-0907-40aa-a053-72d38311b08e-utilities\") pod \"community-operators-2l4gk\" (UID: \"eb8c503e-0907-40aa-a053-72d38311b08e\") " pod="openshift-marketplace/community-operators-2l4gk" Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.304235 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwx9p\" (UniqueName: \"kubernetes.io/projected/eb8c503e-0907-40aa-a053-72d38311b08e-kube-api-access-mwx9p\") pod \"community-operators-2l4gk\" (UID: \"eb8c503e-0907-40aa-a053-72d38311b08e\") " pod="openshift-marketplace/community-operators-2l4gk" Jan 12 13:08:59 crc kubenswrapper[4580]: E0112 13:08:59.304587 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-12 13:08:59.804574287 +0000 UTC m=+138.848792978 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxkcl" (UID: "cd2ced26-b320-44a3-aa98-457376b3d8c8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.320403 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-9bm7b" podStartSLOduration=119.320391637 podStartE2EDuration="1m59.320391637s" podCreationTimestamp="2026-01-12 13:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:08:59.319327014 +0000 UTC m=+138.363545704" watchObservedRunningTime="2026-01-12 13:08:59.320391637 +0000 UTC m=+138.364610317" Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.322591 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-drwn8"] Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.323511 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-drwn8" Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.351149 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-drwn8"] Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.406675 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.407042 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwx9p\" (UniqueName: \"kubernetes.io/projected/eb8c503e-0907-40aa-a053-72d38311b08e-kube-api-access-mwx9p\") pod \"community-operators-2l4gk\" (UID: \"eb8c503e-0907-40aa-a053-72d38311b08e\") " pod="openshift-marketplace/community-operators-2l4gk" Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.407183 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb8c503e-0907-40aa-a053-72d38311b08e-catalog-content\") pod \"community-operators-2l4gk\" (UID: \"eb8c503e-0907-40aa-a053-72d38311b08e\") " pod="openshift-marketplace/community-operators-2l4gk" Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.407214 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb8c503e-0907-40aa-a053-72d38311b08e-utilities\") pod \"community-operators-2l4gk\" (UID: \"eb8c503e-0907-40aa-a053-72d38311b08e\") " pod="openshift-marketplace/community-operators-2l4gk" Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.407765 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/eb8c503e-0907-40aa-a053-72d38311b08e-utilities\") pod \"community-operators-2l4gk\" (UID: \"eb8c503e-0907-40aa-a053-72d38311b08e\") " pod="openshift-marketplace/community-operators-2l4gk" Jan 12 13:08:59 crc kubenswrapper[4580]: E0112 13:08:59.407857 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-12 13:08:59.907838274 +0000 UTC m=+138.952056964 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.408830 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb8c503e-0907-40aa-a053-72d38311b08e-catalog-content\") pod \"community-operators-2l4gk\" (UID: \"eb8c503e-0907-40aa-a053-72d38311b08e\") " pod="openshift-marketplace/community-operators-2l4gk" Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.409924 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lr76c" podStartSLOduration=120.40990841 podStartE2EDuration="2m0.40990841s" podCreationTimestamp="2026-01-12 13:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:08:59.361267779 +0000 UTC 
m=+138.405486469" watchObservedRunningTime="2026-01-12 13:08:59.40990841 +0000 UTC m=+138.454127100" Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.448804 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwx9p\" (UniqueName: \"kubernetes.io/projected/eb8c503e-0907-40aa-a053-72d38311b08e-kube-api-access-mwx9p\") pod \"community-operators-2l4gk\" (UID: \"eb8c503e-0907-40aa-a053-72d38311b08e\") " pod="openshift-marketplace/community-operators-2l4gk" Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.486249 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2l4gk" Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.511742 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adf8dbed-2e00-49c7-90f3-62f85ef5e078-catalog-content\") pod \"certified-operators-drwn8\" (UID: \"adf8dbed-2e00-49c7-90f3-62f85ef5e078\") " pod="openshift-marketplace/certified-operators-drwn8" Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.512207 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adf8dbed-2e00-49c7-90f3-62f85ef5e078-utilities\") pod \"certified-operators-drwn8\" (UID: \"adf8dbed-2e00-49c7-90f3-62f85ef5e078\") " pod="openshift-marketplace/certified-operators-drwn8" Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.512255 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lft6q\" (UniqueName: \"kubernetes.io/projected/adf8dbed-2e00-49c7-90f3-62f85ef5e078-kube-api-access-lft6q\") pod \"certified-operators-drwn8\" (UID: \"adf8dbed-2e00-49c7-90f3-62f85ef5e078\") " pod="openshift-marketplace/certified-operators-drwn8" Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 
13:08:59.512360 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxkcl\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl" Jan 12 13:08:59 crc kubenswrapper[4580]: E0112 13:08:59.513096 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-12 13:09:00.013065716 +0000 UTC m=+139.057284407 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxkcl" (UID: "cd2ced26-b320-44a3-aa98-457376b3d8c8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.563004 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r98fk" Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.569350 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gjqmq"] Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.570957 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-z866m"] Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.571145 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gjqmq" Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.573153 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-262n7"] Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.583424 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gjqmq"] Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.584173 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gz9sn" podStartSLOduration=120.584159723 podStartE2EDuration="2m0.584159723s" podCreationTimestamp="2026-01-12 13:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:08:59.575734447 +0000 UTC m=+138.619953127" watchObservedRunningTime="2026-01-12 13:08:59.584159723 +0000 UTC m=+138.628378414" Jan 12 13:08:59 crc kubenswrapper[4580]: W0112 13:08:59.600838 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15e6f097_ed23_4797_9506_8c95af1dd7f9.slice/crio-8dccd9064ebd256aee1d830237c13e2a33a22fe4ca62de810aa21d5dd39b3ccb WatchSource:0}: Error finding container 8dccd9064ebd256aee1d830237c13e2a33a22fe4ca62de810aa21d5dd39b3ccb: Status 404 returned error can't find the container with id 8dccd9064ebd256aee1d830237c13e2a33a22fe4ca62de810aa21d5dd39b3ccb Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.607947 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-twpq4"] Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.608007 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-nzcxb"] Jan 12 13:08:59 crc kubenswrapper[4580]: 
I0112 13:08:59.618048 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.618248 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lft6q\" (UniqueName: \"kubernetes.io/projected/adf8dbed-2e00-49c7-90f3-62f85ef5e078-kube-api-access-lft6q\") pod \"certified-operators-drwn8\" (UID: \"adf8dbed-2e00-49c7-90f3-62f85ef5e078\") " pod="openshift-marketplace/certified-operators-drwn8" Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.618337 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adf8dbed-2e00-49c7-90f3-62f85ef5e078-catalog-content\") pod \"certified-operators-drwn8\" (UID: \"adf8dbed-2e00-49c7-90f3-62f85ef5e078\") " pod="openshift-marketplace/certified-operators-drwn8" Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.618386 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adf8dbed-2e00-49c7-90f3-62f85ef5e078-utilities\") pod \"certified-operators-drwn8\" (UID: \"adf8dbed-2e00-49c7-90f3-62f85ef5e078\") " pod="openshift-marketplace/certified-operators-drwn8" Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.619080 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adf8dbed-2e00-49c7-90f3-62f85ef5e078-utilities\") pod \"certified-operators-drwn8\" (UID: \"adf8dbed-2e00-49c7-90f3-62f85ef5e078\") " pod="openshift-marketplace/certified-operators-drwn8" Jan 12 13:08:59 crc kubenswrapper[4580]: E0112 13:08:59.619424 4580 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-12 13:09:00.119401044 +0000 UTC m=+139.163619735 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.619924 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adf8dbed-2e00-49c7-90f3-62f85ef5e078-catalog-content\") pod \"certified-operators-drwn8\" (UID: \"adf8dbed-2e00-49c7-90f3-62f85ef5e078\") " pod="openshift-marketplace/certified-operators-drwn8" Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.638242 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lft6q\" (UniqueName: \"kubernetes.io/projected/adf8dbed-2e00-49c7-90f3-62f85ef5e078-kube-api-access-lft6q\") pod \"certified-operators-drwn8\" (UID: \"adf8dbed-2e00-49c7-90f3-62f85ef5e078\") " pod="openshift-marketplace/certified-operators-drwn8" Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.647157 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-89mg9" podStartSLOduration=119.647145927 podStartE2EDuration="1m59.647145927s" podCreationTimestamp="2026-01-12 13:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-12 13:08:59.631188847 +0000 UTC m=+138.675407537" watchObservedRunningTime="2026-01-12 13:08:59.647145927 +0000 UTC m=+138.691364617" Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.647762 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-zkcs6"] Jan 12 13:08:59 crc kubenswrapper[4580]: W0112 13:08:59.665295 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb691bf2_3f8a_4b11_977f_8a77ad9ad9bb.slice/crio-8806ccd878be46c8466a1fd882da9441d3323f9bdf62c63944934657598e09f6 WatchSource:0}: Error finding container 8806ccd878be46c8466a1fd882da9441d3323f9bdf62c63944934657598e09f6: Status 404 returned error can't find the container with id 8806ccd878be46c8466a1fd882da9441d3323f9bdf62c63944934657598e09f6 Jan 12 13:08:59 crc kubenswrapper[4580]: W0112 13:08:59.683639 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5c054e2_c14d_43cb_a432_ad8e9022b010.slice/crio-437e37766d0004c172d934bfee2e8d726e2ff312e9b69b064735389b615735a7 WatchSource:0}: Error finding container 437e37766d0004c172d934bfee2e8d726e2ff312e9b69b064735389b615735a7: Status 404 returned error can't find the container with id 437e37766d0004c172d934bfee2e8d726e2ff312e9b69b064735389b615735a7 Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.719221 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmj47\" (UniqueName: \"kubernetes.io/projected/0b558a84-c6d4-42fe-b9d7-0dd5d63f3064-kube-api-access-fmj47\") pod \"community-operators-gjqmq\" (UID: \"0b558a84-c6d4-42fe-b9d7-0dd5d63f3064\") " pod="openshift-marketplace/community-operators-gjqmq" Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.719333 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b558a84-c6d4-42fe-b9d7-0dd5d63f3064-utilities\") pod \"community-operators-gjqmq\" (UID: \"0b558a84-c6d4-42fe-b9d7-0dd5d63f3064\") " pod="openshift-marketplace/community-operators-gjqmq" Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.719366 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxkcl\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl" Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.719384 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b558a84-c6d4-42fe-b9d7-0dd5d63f3064-catalog-content\") pod \"community-operators-gjqmq\" (UID: \"0b558a84-c6d4-42fe-b9d7-0dd5d63f3064\") " pod="openshift-marketplace/community-operators-gjqmq" Jan 12 13:08:59 crc kubenswrapper[4580]: E0112 13:08:59.719985 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-12 13:09:00.219974883 +0000 UTC m=+139.264193573 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxkcl" (UID: "cd2ced26-b320-44a3-aa98-457376b3d8c8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.724403 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8sbrm"] Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.730343 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-ch5j5"] Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.737135 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-zdvz7"] Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.778324 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-xntjp"] Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.800059 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-72pwr"] Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.819845 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.820182 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/0b558a84-c6d4-42fe-b9d7-0dd5d63f3064-utilities\") pod \"community-operators-gjqmq\" (UID: \"0b558a84-c6d4-42fe-b9d7-0dd5d63f3064\") " pod="openshift-marketplace/community-operators-gjqmq" Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.820228 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b558a84-c6d4-42fe-b9d7-0dd5d63f3064-catalog-content\") pod \"community-operators-gjqmq\" (UID: \"0b558a84-c6d4-42fe-b9d7-0dd5d63f3064\") " pod="openshift-marketplace/community-operators-gjqmq" Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.820305 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmj47\" (UniqueName: \"kubernetes.io/projected/0b558a84-c6d4-42fe-b9d7-0dd5d63f3064-kube-api-access-fmj47\") pod \"community-operators-gjqmq\" (UID: \"0b558a84-c6d4-42fe-b9d7-0dd5d63f3064\") " pod="openshift-marketplace/community-operators-gjqmq" Jan 12 13:08:59 crc kubenswrapper[4580]: E0112 13:08:59.820559 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-12 13:09:00.320542169 +0000 UTC m=+139.364760860 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.820863 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b558a84-c6d4-42fe-b9d7-0dd5d63f3064-utilities\") pod \"community-operators-gjqmq\" (UID: \"0b558a84-c6d4-42fe-b9d7-0dd5d63f3064\") " pod="openshift-marketplace/community-operators-gjqmq" Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.821062 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b558a84-c6d4-42fe-b9d7-0dd5d63f3064-catalog-content\") pod \"community-operators-gjqmq\" (UID: \"0b558a84-c6d4-42fe-b9d7-0dd5d63f3064\") " pod="openshift-marketplace/community-operators-gjqmq" Jan 12 13:08:59 crc kubenswrapper[4580]: W0112 13:08:59.848408 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f3179c7_0610_4e19_91cd_9a84d32ac850.slice/crio-269ecdf2d52fa6b5157ef47a9cafa9d63ae5e1919ef7fc12e68f5534f29d1abc WatchSource:0}: Error finding container 269ecdf2d52fa6b5157ef47a9cafa9d63ae5e1919ef7fc12e68f5534f29d1abc: Status 404 returned error can't find the container with id 269ecdf2d52fa6b5157ef47a9cafa9d63ae5e1919ef7fc12e68f5534f29d1abc Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.858436 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-drwn8" Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.867061 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmj47\" (UniqueName: \"kubernetes.io/projected/0b558a84-c6d4-42fe-b9d7-0dd5d63f3064-kube-api-access-fmj47\") pod \"community-operators-gjqmq\" (UID: \"0b558a84-c6d4-42fe-b9d7-0dd5d63f3064\") " pod="openshift-marketplace/community-operators-gjqmq" Jan 12 13:08:59 crc kubenswrapper[4580]: I0112 13:08:59.931628 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxkcl\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl" Jan 12 13:08:59 crc kubenswrapper[4580]: E0112 13:08:59.932510 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-12 13:09:00.432495676 +0000 UTC m=+139.476714367 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxkcl" (UID: "cd2ced26-b320-44a3-aa98-457376b3d8c8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:09:00 crc kubenswrapper[4580]: I0112 13:09:00.041084 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gjqmq" Jan 12 13:09:00 crc kubenswrapper[4580]: I0112 13:09:00.041897 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 12 13:09:00 crc kubenswrapper[4580]: E0112 13:09:00.042351 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-12 13:09:00.542328884 +0000 UTC m=+139.586547574 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:09:00 crc kubenswrapper[4580]: I0112 13:09:00.045621 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxkcl\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl" Jan 12 13:09:00 crc kubenswrapper[4580]: E0112 13:09:00.046062 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2026-01-12 13:09:00.546049541 +0000 UTC m=+139.590268231 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxkcl" (UID: "cd2ced26-b320-44a3-aa98-457376b3d8c8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:09:00 crc kubenswrapper[4580]: I0112 13:09:00.074236 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r98fk"] Jan 12 13:09:00 crc kubenswrapper[4580]: I0112 13:09:00.093459 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pk8kj" podStartSLOduration=120.093448017 podStartE2EDuration="2m0.093448017s" podCreationTimestamp="2026-01-12 13:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:09:00.09215197 +0000 UTC m=+139.136370650" watchObservedRunningTime="2026-01-12 13:09:00.093448017 +0000 UTC m=+139.137666707" Jan 12 13:09:00 crc kubenswrapper[4580]: I0112 13:09:00.098179 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5b57w" event={"ID":"9a641a8b-c632-45bd-8606-e3fa10d531b8","Type":"ContainerStarted","Data":"5e6da551808cbf7fe394a58efdca47c494002201c5b766f5a8622d9b23790185"} Jan 12 13:09:00 crc kubenswrapper[4580]: I0112 13:09:00.105918 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-cbltx" 
event={"ID":"73c37e67-6b89-4830-8723-f6716badcaa4","Type":"ContainerStarted","Data":"2a8dfc4d1c0473219ffec89e724fe4978621452704d470978627e6bd00bc21ee"} Jan 12 13:09:00 crc kubenswrapper[4580]: I0112 13:09:00.105978 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-cbltx" event={"ID":"73c37e67-6b89-4830-8723-f6716badcaa4","Type":"ContainerStarted","Data":"608975d1379407e7ac6ff33943b3e60b89a9f1ecd6f6df40e6e71b2500749d08"} Jan 12 13:09:00 crc kubenswrapper[4580]: I0112 13:09:00.106985 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-cbltx" Jan 12 13:09:00 crc kubenswrapper[4580]: I0112 13:09:00.110325 4580 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-cbltx container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Jan 12 13:09:00 crc kubenswrapper[4580]: I0112 13:09:00.110368 4580 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-cbltx" podUID="73c37e67-6b89-4830-8723-f6716badcaa4" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" Jan 12 13:09:00 crc kubenswrapper[4580]: I0112 13:09:00.127317 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-c7ntw" event={"ID":"7f8d60b6-387f-4e18-9332-60acade1e93c","Type":"ContainerStarted","Data":"22bba2a34aa4deeb1e66dfd9e0d5f4f70bbd359363f14846d201d0bbc5736ce3"} Jan 12 13:09:00 crc kubenswrapper[4580]: I0112 13:09:00.148733 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 12 13:09:00 crc kubenswrapper[4580]: E0112 13:09:00.149445 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-12 13:09:00.649429364 +0000 UTC m=+139.693648054 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:09:00 crc kubenswrapper[4580]: I0112 13:09:00.149587 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-twpq4" event={"ID":"0b75fc88-ca92-4fb9-826b-61322c929d1b","Type":"ContainerStarted","Data":"041844e2aefe7656a0262f65a83f0d741905e8db9e05513ab5e633a5b01c8d3f"} Jan 12 13:09:00 crc kubenswrapper[4580]: I0112 13:09:00.169872 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" Jan 12 13:09:00 crc kubenswrapper[4580]: I0112 13:09:00.180157 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-2zrh8" event={"ID":"e7bdddf2-1c7b-4aa3-81f9-9df58a6e92b1","Type":"ContainerStarted","Data":"a0a52da65b2223fb407d14a51d8a6143d2cfeb67aa402ee7154b5a0a8ec3448a"} Jan 12 13:09:00 crc kubenswrapper[4580]: I0112 13:09:00.180422 4580 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-2zrh8" event={"ID":"e7bdddf2-1c7b-4aa3-81f9-9df58a6e92b1","Type":"ContainerStarted","Data":"1cc219fcbc5c566bfe1f91fd624c65fe6b41a80a21aa9defa72715421831516b"} Jan 12 13:09:00 crc kubenswrapper[4580]: I0112 13:09:00.197050 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-5tdwv" event={"ID":"90150eba-9b4f-485f-97c3-89d410cb5851","Type":"ContainerStarted","Data":"50659715f6d9224d99b87ca926d1de0a57ccf32b667a794e75c2abfe4ddde7bd"} Jan 12 13:09:00 crc kubenswrapper[4580]: I0112 13:09:00.212856 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qjnxc" event={"ID":"a403ac95-5e4f-4234-9c0c-daf0b3831850","Type":"ContainerStarted","Data":"b0dda5fc093ca4448fcd2407abbc4afcac418ac288c25ba00df3ff397a707769"} Jan 12 13:09:00 crc kubenswrapper[4580]: I0112 13:09:00.212898 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qjnxc" event={"ID":"a403ac95-5e4f-4234-9c0c-daf0b3831850","Type":"ContainerStarted","Data":"3b25dba6655f72cacd6a13c9e5d5c160681bcfbb27ec17cda07282aa4fa1d689"} Jan 12 13:09:00 crc kubenswrapper[4580]: I0112 13:09:00.225443 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2l4gk"] Jan 12 13:09:00 crc kubenswrapper[4580]: I0112 13:09:00.231702 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mn56v" podStartSLOduration=120.231689881 podStartE2EDuration="2m0.231689881s" podCreationTimestamp="2026-01-12 13:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:09:00.221739227 +0000 UTC m=+139.265957917" watchObservedRunningTime="2026-01-12 
13:09:00.231689881 +0000 UTC m=+139.275908570" Jan 12 13:09:00 crc kubenswrapper[4580]: I0112 13:09:00.250667 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxkcl\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl" Jan 12 13:09:00 crc kubenswrapper[4580]: E0112 13:09:00.252226 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-12 13:09:00.752208972 +0000 UTC m=+139.796427663 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxkcl" (UID: "cd2ced26-b320-44a3-aa98-457376b3d8c8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:09:00 crc kubenswrapper[4580]: I0112 13:09:00.267794 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-nzcxb" event={"ID":"fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb","Type":"ContainerStarted","Data":"8806ccd878be46c8466a1fd882da9441d3323f9bdf62c63944934657598e09f6"} Jan 12 13:09:00 crc kubenswrapper[4580]: I0112 13:09:00.278367 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-rs6cr" event={"ID":"82a887ec-4d3a-4533-aa32-ee1eab68aa86","Type":"ContainerStarted","Data":"2d86c5ecb2a3da0da72150b2f44cb3d2e4b818e6625f37f74e4102ab83700cfe"} Jan 12 13:09:00 crc 
kubenswrapper[4580]: I0112 13:09:00.278393 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-rs6cr" event={"ID":"82a887ec-4d3a-4533-aa32-ee1eab68aa86","Type":"ContainerStarted","Data":"d4d0e364c2d75f5784513712dd1668579dbafb8451df95b5ff48814eb9707942"} Jan 12 13:09:00 crc kubenswrapper[4580]: I0112 13:09:00.301382 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-klg87" event={"ID":"8fe2afba-2d6f-47d7-83c1-aa3fc9fa7c56","Type":"ContainerStarted","Data":"d0b0fbf16957d4a473e5fc71ca423825b1db225545c952c60e4ae0fe9e38840c"} Jan 12 13:09:00 crc kubenswrapper[4580]: I0112 13:09:00.329944 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-2hzdj" event={"ID":"2b783eb7-ca7b-41db-8342-bfdd6fdfb9b1","Type":"ContainerStarted","Data":"51d218eed1fa8cd7a6eb7b0f96dd9a6e0a9449cef20b1dc70f365d1b27cd5d7e"} Jan 12 13:09:00 crc kubenswrapper[4580]: I0112 13:09:00.330649 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-2hzdj" Jan 12 13:09:00 crc kubenswrapper[4580]: I0112 13:09:00.340862 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm" event={"ID":"7db5f72b-6a3e-4a3d-96bd-3e10756b605c","Type":"ContainerStarted","Data":"8af755dc9b711216d5e9676aa78b28350b03a68f5eaea8c8136bcc92dfb9880d"} Jan 12 13:09:00 crc kubenswrapper[4580]: I0112 13:09:00.372234 4580 patch_prober.go:28] interesting pod/downloads-7954f5f757-2hzdj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Jan 12 13:09:00 crc kubenswrapper[4580]: I0112 13:09:00.372271 4580 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-console/downloads-7954f5f757-2hzdj" podUID="2b783eb7-ca7b-41db-8342-bfdd6fdfb9b1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Jan 12 13:09:00 crc kubenswrapper[4580]: I0112 13:09:00.372741 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 12 13:09:00 crc kubenswrapper[4580]: E0112 13:09:00.372844 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-12 13:09:00.872823298 +0000 UTC m=+139.917041979 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:09:00 crc kubenswrapper[4580]: I0112 13:09:00.373034 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxkcl\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl" Jan 12 13:09:00 crc kubenswrapper[4580]: E0112 13:09:00.374470 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-12 13:09:00.874452349 +0000 UTC m=+139.918671039 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxkcl" (UID: "cd2ced26-b320-44a3-aa98-457376b3d8c8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:09:00 crc kubenswrapper[4580]: I0112 13:09:00.387018 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-72pwr" event={"ID":"f3417afd-f5ef-4c91-990f-22c8a77f2713","Type":"ContainerStarted","Data":"782e2d9b6d4bb3b7827a2a94626aadc9f2d6f77a4fa6d50b7c2f5697fecb3d00"} Jan 12 13:09:00 crc kubenswrapper[4580]: I0112 13:09:00.389671 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-262n7" event={"ID":"15e6f097-ed23-4797-9506-8c95af1dd7f9","Type":"ContainerStarted","Data":"8dccd9064ebd256aee1d830237c13e2a33a22fe4ca62de810aa21d5dd39b3ccb"} Jan 12 13:09:00 crc kubenswrapper[4580]: I0112 13:09:00.414856 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-phs5z" event={"ID":"f0bdcb8e-d435-41c5-a140-1b17752fa7ec","Type":"ContainerStarted","Data":"3c6d1c53931b0f78374d867bbc601cb4ffe2451e830a21bac5e1d66ed7ed260b"} Jan 12 13:09:00 crc kubenswrapper[4580]: I0112 13:09:00.429647 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xntjp" event={"ID":"5f3179c7-0610-4e19-91cd-9a84d32ac850","Type":"ContainerStarted","Data":"269ecdf2d52fa6b5157ef47a9cafa9d63ae5e1919ef7fc12e68f5534f29d1abc"} Jan 12 13:09:00 crc kubenswrapper[4580]: I0112 13:09:00.464831 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-machine-config-operator/machine-config-server-rpc5j" podStartSLOduration=6.464810849 podStartE2EDuration="6.464810849s" podCreationTimestamp="2026-01-12 13:08:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:09:00.460839773 +0000 UTC m=+139.505058463" watchObservedRunningTime="2026-01-12 13:09:00.464810849 +0000 UTC m=+139.509029539" Jan 12 13:09:00 crc kubenswrapper[4580]: I0112 13:09:00.467924 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-drwn8"] Jan 12 13:09:00 crc kubenswrapper[4580]: I0112 13:09:00.482770 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 12 13:09:00 crc kubenswrapper[4580]: E0112 13:09:00.483683 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-12 13:09:00.983666176 +0000 UTC m=+140.027884866 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:09:00 crc kubenswrapper[4580]: I0112 13:09:00.510803 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jxm6c" event={"ID":"f5410400-7426-4922-8f12-79e9bb359b58","Type":"ContainerStarted","Data":"180b56b4c5ad51aa4d2e202fe758d1086c41154d9f7cb63739e7df51374b80f4"} Jan 12 13:09:00 crc kubenswrapper[4580]: I0112 13:09:00.510857 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jxm6c" event={"ID":"f5410400-7426-4922-8f12-79e9bb359b58","Type":"ContainerStarted","Data":"f5aff4d4cc5f28d5b699d3bcfa326e581bb38063520ce6f58356405aba42a88b"} Jan 12 13:09:00 crc kubenswrapper[4580]: I0112 13:09:00.512301 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jxm6c" Jan 12 13:09:00 crc kubenswrapper[4580]: I0112 13:09:00.512818 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lfcct" podStartSLOduration=120.512803568 podStartE2EDuration="2m0.512803568s" podCreationTimestamp="2026-01-12 13:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:09:00.512219643 +0000 UTC m=+139.556438334" watchObservedRunningTime="2026-01-12 13:09:00.512803568 +0000 UTC m=+139.557022258" Jan 12 13:09:00 crc kubenswrapper[4580]: W0112 
13:09:00.524877 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podadf8dbed_2e00_49c7_90f3_62f85ef5e078.slice/crio-a8f807914a3538b53b53c02bc7273ac4a4903ab777c732abb418df65895499bf WatchSource:0}: Error finding container a8f807914a3538b53b53c02bc7273ac4a4903ab777c732abb418df65895499bf: Status 404 returned error can't find the container with id a8f807914a3538b53b53c02bc7273ac4a4903ab777c732abb418df65895499bf Jan 12 13:09:00 crc kubenswrapper[4580]: I0112 13:09:00.534648 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jxm6c" Jan 12 13:09:00 crc kubenswrapper[4580]: I0112 13:09:00.538021 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29470380-nk5n7" event={"ID":"037a95c2-1119-4fd8-8499-682fba2f03ea","Type":"ContainerStarted","Data":"429218504b15d49c71ec491ade7f77e78c38b8310607579b22cd67a199946598"} Jan 12 13:09:00 crc kubenswrapper[4580]: I0112 13:09:00.558528 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-z866m" event={"ID":"b70f0bc6-36e7-4a25-854b-4ca6364e6aa0","Type":"ContainerStarted","Data":"cea48ca004bc887a9f8059b78a58c6a8e81c64c454588ca4de64b210e74561d3"} Jan 12 13:09:00 crc kubenswrapper[4580]: I0112 13:09:00.575312 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qbtk4" podStartSLOduration=120.575299643 podStartE2EDuration="2m0.575299643s" podCreationTimestamp="2026-01-12 13:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:09:00.575140897 +0000 UTC m=+139.619359587" watchObservedRunningTime="2026-01-12 13:09:00.575299643 +0000 UTC m=+139.619518333" Jan 
12 13:09:00 crc kubenswrapper[4580]: I0112 13:09:00.584279 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxkcl\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl" Jan 12 13:09:00 crc kubenswrapper[4580]: E0112 13:09:00.585296 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-12 13:09:01.085277207 +0000 UTC m=+140.129495897 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxkcl" (UID: "cd2ced26-b320-44a3-aa98-457376b3d8c8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:09:00 crc kubenswrapper[4580]: I0112 13:09:00.617680 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jzjkt" event={"ID":"dafdf187-36fd-4d32-b188-07a5cd4474a9","Type":"ContainerStarted","Data":"b274be9c9278c5f47b1c7d9646d515a41edf2cb343cbb734ccdac71e35f43ed9"} Jan 12 13:09:00 crc kubenswrapper[4580]: I0112 13:09:00.624418 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29470380-nk5n7" podStartSLOduration=120.624398342 podStartE2EDuration="2m0.624398342s" podCreationTimestamp="2026-01-12 13:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:09:00.622166023 +0000 UTC m=+139.666384712" watchObservedRunningTime="2026-01-12 13:09:00.624398342 +0000 UTC m=+139.668617033" Jan 12 13:09:00 crc kubenswrapper[4580]: I0112 13:09:00.637651 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s8vg5" event={"ID":"46cfce88-b8c3-48f9-a957-c6eb80c59166","Type":"ContainerStarted","Data":"573cbb399112972f5dd844bbdc3f30d93f7da60815b7f7e543452dfdc843760f"} Jan 12 13:09:00 crc kubenswrapper[4580]: I0112 13:09:00.638162 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s8vg5" Jan 12 13:09:00 crc kubenswrapper[4580]: I0112 13:09:00.659114 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-pq2bq" event={"ID":"4c9595e5-9b32-4af8-b872-cf027b10a334","Type":"ContainerStarted","Data":"bedc32261d1ff7efc80fb5b8f6c4fd9e5a8ec57d4d0475789272fe14fd0021db"} Jan 12 13:09:00 crc kubenswrapper[4580]: I0112 13:09:00.667922 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-zkcs6" event={"ID":"a5c054e2-c14d-43cb-a432-ad8e9022b010","Type":"ContainerStarted","Data":"437e37766d0004c172d934bfee2e8d726e2ff312e9b69b064735389b615735a7"} Jan 12 13:09:00 crc kubenswrapper[4580]: I0112 13:09:00.668723 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qjnxc" podStartSLOduration=120.668707063 podStartE2EDuration="2m0.668707063s" podCreationTimestamp="2026-01-12 13:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:09:00.658012998 +0000 UTC m=+139.702231688" 
watchObservedRunningTime="2026-01-12 13:09:00.668707063 +0000 UTC m=+139.712925743" Jan 12 13:09:00 crc kubenswrapper[4580]: I0112 13:09:00.677321 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-zdvz7" event={"ID":"405ad898-9997-4efd-b8a8-f878c39784b5","Type":"ContainerStarted","Data":"dc6a774de74f1f13ed579236f98ef51406113a9dab9bc41be315e22f91ed8df9"} Jan 12 13:09:00 crc kubenswrapper[4580]: I0112 13:09:00.681369 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pv5tk" event={"ID":"1d75f42a-a600-4c36-9da8-1f91f80336bc","Type":"ContainerStarted","Data":"fb11831765013c2ee13e43917dfc46274aeff471923d026acb8f5a5f5858a500"} Jan 12 13:09:00 crc kubenswrapper[4580]: I0112 13:09:00.681405 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pv5tk" event={"ID":"1d75f42a-a600-4c36-9da8-1f91f80336bc","Type":"ContainerStarted","Data":"384eb4458a5079c02771ade89e879a5579f4d8c43589dab7affeb616f69a6593"} Jan 12 13:09:00 crc kubenswrapper[4580]: I0112 13:09:00.686350 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 12 13:09:00 crc kubenswrapper[4580]: E0112 13:09:00.687046 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-12 13:09:01.187025996 +0000 UTC m=+140.231244686 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:09:00 crc kubenswrapper[4580]: I0112 13:09:00.688216 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxkcl\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl" Jan 12 13:09:00 crc kubenswrapper[4580]: E0112 13:09:00.689088 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-12 13:09:01.18907351 +0000 UTC m=+140.233292199 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxkcl" (UID: "cd2ced26-b320-44a3-aa98-457376b3d8c8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:09:00 crc kubenswrapper[4580]: I0112 13:09:00.718064 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ch5j5" event={"ID":"78d6fc59-606f-4a88-b7be-467d9c41160d","Type":"ContainerStarted","Data":"438c14460f12c52fc605af67d1ebb37c43c621cb39ba484e1a126437ddd45131"} Jan 12 13:09:00 crc kubenswrapper[4580]: I0112 13:09:00.728163 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-hlckg" Jan 12 13:09:00 crc kubenswrapper[4580]: I0112 13:09:00.731528 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-2hzdj" podStartSLOduration=121.731489496 podStartE2EDuration="2m1.731489496s" podCreationTimestamp="2026-01-12 13:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:09:00.70529238 +0000 UTC m=+139.749511071" watchObservedRunningTime="2026-01-12 13:09:00.731489496 +0000 UTC m=+139.775708186" Jan 12 13:09:00 crc kubenswrapper[4580]: I0112 13:09:00.746090 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-snhpg" Jan 12 13:09:00 crc kubenswrapper[4580]: I0112 13:09:00.793249 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 12 13:09:00 crc kubenswrapper[4580]: E0112 13:09:00.793594 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-12 13:09:01.293572799 +0000 UTC m=+140.337791489 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:09:00 crc kubenswrapper[4580]: I0112 13:09:00.794536 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxkcl\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl" Jan 12 13:09:00 crc kubenswrapper[4580]: E0112 13:09:00.803020 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-12 13:09:01.303000182 +0000 UTC m=+140.347218872 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxkcl" (UID: "cd2ced26-b320-44a3-aa98-457376b3d8c8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:09:00 crc kubenswrapper[4580]: I0112 13:09:00.818137 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-phs5z" podStartSLOduration=120.818117109 podStartE2EDuration="2m0.818117109s" podCreationTimestamp="2026-01-12 13:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:09:00.776503626 +0000 UTC m=+139.820722316" watchObservedRunningTime="2026-01-12 13:09:00.818117109 +0000 UTC m=+139.862335789" Jan 12 13:09:00 crc kubenswrapper[4580]: I0112 13:09:00.819538 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-snhpg" podStartSLOduration=120.819530506 podStartE2EDuration="2m0.819530506s" podCreationTimestamp="2026-01-12 13:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:09:00.816036513 +0000 UTC m=+139.860255204" watchObservedRunningTime="2026-01-12 13:09:00.819530506 +0000 UTC m=+139.863749196" Jan 12 13:09:00 crc kubenswrapper[4580]: I0112 13:09:00.893128 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-cbltx" podStartSLOduration=121.893110487 podStartE2EDuration="2m1.893110487s" podCreationTimestamp="2026-01-12 13:06:59 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:09:00.860396409 +0000 UTC m=+139.904615099" watchObservedRunningTime="2026-01-12 13:09:00.893110487 +0000 UTC m=+139.937329177" Jan 12 13:09:00 crc kubenswrapper[4580]: I0112 13:09:00.895069 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jxm6c" podStartSLOduration=120.895047675 podStartE2EDuration="2m0.895047675s" podCreationTimestamp="2026-01-12 13:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:09:00.892551972 +0000 UTC m=+139.936770651" watchObservedRunningTime="2026-01-12 13:09:00.895047675 +0000 UTC m=+139.939266365" Jan 12 13:09:00 crc kubenswrapper[4580]: I0112 13:09:00.899540 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 12 13:09:00 crc kubenswrapper[4580]: E0112 13:09:00.901173 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-12 13:09:01.401086943 +0000 UTC m=+140.445305633 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:09:00 crc kubenswrapper[4580]: I0112 13:09:00.949310 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-mw8xc" podStartSLOduration=120.949294695 podStartE2EDuration="2m0.949294695s" podCreationTimestamp="2026-01-12 13:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:09:00.944633907 +0000 UTC m=+139.988852587" watchObservedRunningTime="2026-01-12 13:09:00.949294695 +0000 UTC m=+139.993513375" Jan 12 13:09:00 crc kubenswrapper[4580]: I0112 13:09:00.967224 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gjqmq"] Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.003552 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxkcl\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl" Jan 12 13:09:01 crc kubenswrapper[4580]: E0112 13:09:01.003910 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-12 13:09:01.503898643 +0000 UTC m=+140.548117333 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxkcl" (UID: "cd2ced26-b320-44a3-aa98-457376b3d8c8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.018225 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-5tdwv" podStartSLOduration=122.018208569 podStartE2EDuration="2m2.018208569s" podCreationTimestamp="2026-01-12 13:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:09:01.017583659 +0000 UTC m=+140.061802349" watchObservedRunningTime="2026-01-12 13:09:01.018208569 +0000 UTC m=+140.062427259" Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.067750 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5b57w" podStartSLOduration=121.067734979 podStartE2EDuration="2m1.067734979s" podCreationTimestamp="2026-01-12 13:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:09:01.063012616 +0000 UTC m=+140.107231306" watchObservedRunningTime="2026-01-12 13:09:01.067734979 +0000 UTC m=+140.111953669" Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.105665 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 12 13:09:01 crc kubenswrapper[4580]: E0112 13:09:01.106659 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-12 13:09:01.606635241 +0000 UTC m=+140.650853931 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.164455 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jbkw4"] Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.165494 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jbkw4" Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.177695 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.200202 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jbkw4"] Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.210410 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxkcl\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl" Jan 12 13:09:01 crc kubenswrapper[4580]: E0112 13:09:01.210794 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-12 13:09:01.710778354 +0000 UTC m=+140.754997044 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxkcl" (UID: "cd2ced26-b320-44a3-aa98-457376b3d8c8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.299707 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-hlckg" podStartSLOduration=121.299689313 podStartE2EDuration="2m1.299689313s" podCreationTimestamp="2026-01-12 13:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:09:01.28349591 +0000 UTC m=+140.327714600" watchObservedRunningTime="2026-01-12 13:09:01.299689313 +0000 UTC m=+140.343908003" Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.311559 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.312002 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fb0ae3e-224b-4dba-8e3d-783df7049f05-utilities\") pod \"redhat-marketplace-jbkw4\" (UID: \"0fb0ae3e-224b-4dba-8e3d-783df7049f05\") " pod="openshift-marketplace/redhat-marketplace-jbkw4" Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.312037 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-5f9qk\" (UniqueName: \"kubernetes.io/projected/0fb0ae3e-224b-4dba-8e3d-783df7049f05-kube-api-access-5f9qk\") pod \"redhat-marketplace-jbkw4\" (UID: \"0fb0ae3e-224b-4dba-8e3d-783df7049f05\") " pod="openshift-marketplace/redhat-marketplace-jbkw4" Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.312097 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fb0ae3e-224b-4dba-8e3d-783df7049f05-catalog-content\") pod \"redhat-marketplace-jbkw4\" (UID: \"0fb0ae3e-224b-4dba-8e3d-783df7049f05\") " pod="openshift-marketplace/redhat-marketplace-jbkw4" Jan 12 13:09:01 crc kubenswrapper[4580]: E0112 13:09:01.312305 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-12 13:09:01.812276032 +0000 UTC m=+140.856494723 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.321344 4580 patch_prober.go:28] interesting pod/router-default-5444994796-phs5z container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 12 13:09:01 crc kubenswrapper[4580]: [-]has-synced failed: reason withheld Jan 12 13:09:01 crc kubenswrapper[4580]: [+]process-running ok Jan 12 13:09:01 crc kubenswrapper[4580]: healthz check failed Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.342186 4580 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-phs5z" podUID="f0bdcb8e-d435-41c5-a140-1b17752fa7ec" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.325480 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-phs5z" Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.366068 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-klg87" podStartSLOduration=121.366050708 podStartE2EDuration="2m1.366050708s" podCreationTimestamp="2026-01-12 13:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:09:01.365545562 
+0000 UTC m=+140.409764252" watchObservedRunningTime="2026-01-12 13:09:01.366050708 +0000 UTC m=+140.410269398" Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.414319 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fb0ae3e-224b-4dba-8e3d-783df7049f05-utilities\") pod \"redhat-marketplace-jbkw4\" (UID: \"0fb0ae3e-224b-4dba-8e3d-783df7049f05\") " pod="openshift-marketplace/redhat-marketplace-jbkw4" Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.414351 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5f9qk\" (UniqueName: \"kubernetes.io/projected/0fb0ae3e-224b-4dba-8e3d-783df7049f05-kube-api-access-5f9qk\") pod \"redhat-marketplace-jbkw4\" (UID: \"0fb0ae3e-224b-4dba-8e3d-783df7049f05\") " pod="openshift-marketplace/redhat-marketplace-jbkw4" Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.414388 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxkcl\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl" Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.414410 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fb0ae3e-224b-4dba-8e3d-783df7049f05-catalog-content\") pod \"redhat-marketplace-jbkw4\" (UID: \"0fb0ae3e-224b-4dba-8e3d-783df7049f05\") " pod="openshift-marketplace/redhat-marketplace-jbkw4" Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.414794 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fb0ae3e-224b-4dba-8e3d-783df7049f05-catalog-content\") 
pod \"redhat-marketplace-jbkw4\" (UID: \"0fb0ae3e-224b-4dba-8e3d-783df7049f05\") " pod="openshift-marketplace/redhat-marketplace-jbkw4" Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.415015 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fb0ae3e-224b-4dba-8e3d-783df7049f05-utilities\") pod \"redhat-marketplace-jbkw4\" (UID: \"0fb0ae3e-224b-4dba-8e3d-783df7049f05\") " pod="openshift-marketplace/redhat-marketplace-jbkw4" Jan 12 13:09:01 crc kubenswrapper[4580]: E0112 13:09:01.418425 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-12 13:09:01.918412076 +0000 UTC m=+140.962630766 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxkcl" (UID: "cd2ced26-b320-44a3-aa98-457376b3d8c8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.456062 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5f9qk\" (UniqueName: \"kubernetes.io/projected/0fb0ae3e-224b-4dba-8e3d-783df7049f05-kube-api-access-5f9qk\") pod \"redhat-marketplace-jbkw4\" (UID: \"0fb0ae3e-224b-4dba-8e3d-783df7049f05\") " pod="openshift-marketplace/redhat-marketplace-jbkw4" Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.492888 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-pq2bq" podStartSLOduration=121.492872506 
podStartE2EDuration="2m1.492872506s" podCreationTimestamp="2026-01-12 13:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:09:01.491343012 +0000 UTC m=+140.535561702" watchObservedRunningTime="2026-01-12 13:09:01.492872506 +0000 UTC m=+140.537091197" Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.515362 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 12 13:09:01 crc kubenswrapper[4580]: E0112 13:09:01.515830 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-12 13:09:02.015817131 +0000 UTC m=+141.060035821 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.537697 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-mw8xc" Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.537754 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-mw8xc" Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.541167 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cckj6"] Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.542168 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cckj6" Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.555387 4580 patch_prober.go:28] interesting pod/apiserver-76f77b778f-mw8xc container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 12 13:09:01 crc kubenswrapper[4580]: [+]log ok Jan 12 13:09:01 crc kubenswrapper[4580]: [+]etcd ok Jan 12 13:09:01 crc kubenswrapper[4580]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 12 13:09:01 crc kubenswrapper[4580]: [+]poststarthook/generic-apiserver-start-informers ok Jan 12 13:09:01 crc kubenswrapper[4580]: [+]poststarthook/max-in-flight-filter ok Jan 12 13:09:01 crc kubenswrapper[4580]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 12 13:09:01 crc kubenswrapper[4580]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 12 13:09:01 crc kubenswrapper[4580]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 12 13:09:01 crc kubenswrapper[4580]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Jan 12 13:09:01 crc kubenswrapper[4580]: [+]poststarthook/project.openshift.io-projectcache ok Jan 12 13:09:01 crc kubenswrapper[4580]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 12 13:09:01 crc kubenswrapper[4580]: [+]poststarthook/openshift.io-startinformers ok Jan 12 13:09:01 crc kubenswrapper[4580]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 12 13:09:01 crc kubenswrapper[4580]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 12 13:09:01 crc kubenswrapper[4580]: livez check failed Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.555441 4580 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-mw8xc" podUID="993fd772-2adc-4e57-8ccd-7bcc86928a21" containerName="openshift-apiserver" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.557235 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cckj6"] Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.566707 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-z866m" podStartSLOduration=7.566691085 podStartE2EDuration="7.566691085s" podCreationTimestamp="2026-01-12 13:08:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:09:01.566558105 +0000 UTC m=+140.610776796" watchObservedRunningTime="2026-01-12 13:09:01.566691085 +0000 UTC m=+140.610909775" Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.616597 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcee830f-a8da-4e16-95ca-fdaa8dbd86df-utilities\") pod \"redhat-marketplace-cckj6\" (UID: \"dcee830f-a8da-4e16-95ca-fdaa8dbd86df\") " pod="openshift-marketplace/redhat-marketplace-cckj6" Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.616686 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxkcl\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl" Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.616716 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcee830f-a8da-4e16-95ca-fdaa8dbd86df-catalog-content\") pod \"redhat-marketplace-cckj6\" (UID: 
\"dcee830f-a8da-4e16-95ca-fdaa8dbd86df\") " pod="openshift-marketplace/redhat-marketplace-cckj6" Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.616762 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6w56\" (UniqueName: \"kubernetes.io/projected/dcee830f-a8da-4e16-95ca-fdaa8dbd86df-kube-api-access-v6w56\") pod \"redhat-marketplace-cckj6\" (UID: \"dcee830f-a8da-4e16-95ca-fdaa8dbd86df\") " pod="openshift-marketplace/redhat-marketplace-cckj6" Jan 12 13:09:01 crc kubenswrapper[4580]: E0112 13:09:01.617064 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-12 13:09:02.117051537 +0000 UTC m=+141.161270227 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxkcl" (UID: "cd2ced26-b320-44a3-aa98-457376b3d8c8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.628157 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s8vg5" podStartSLOduration=121.628138327 podStartE2EDuration="2m1.628138327s" podCreationTimestamp="2026-01-12 13:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:09:01.625651912 +0000 UTC m=+140.669870602" watchObservedRunningTime="2026-01-12 13:09:01.628138327 +0000 UTC m=+140.672357017" Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 
13:09:01.674554 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jbkw4" Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.689478 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-zkcs6" podStartSLOduration=121.689460746 podStartE2EDuration="2m1.689460746s" podCreationTimestamp="2026-01-12 13:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:09:01.68808521 +0000 UTC m=+140.732303900" watchObservedRunningTime="2026-01-12 13:09:01.689460746 +0000 UTC m=+140.733679436" Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.717546 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.717725 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcee830f-a8da-4e16-95ca-fdaa8dbd86df-catalog-content\") pod \"redhat-marketplace-cckj6\" (UID: \"dcee830f-a8da-4e16-95ca-fdaa8dbd86df\") " pod="openshift-marketplace/redhat-marketplace-cckj6" Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.717789 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6w56\" (UniqueName: \"kubernetes.io/projected/dcee830f-a8da-4e16-95ca-fdaa8dbd86df-kube-api-access-v6w56\") pod \"redhat-marketplace-cckj6\" (UID: \"dcee830f-a8da-4e16-95ca-fdaa8dbd86df\") " pod="openshift-marketplace/redhat-marketplace-cckj6" Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.717846 4580 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcee830f-a8da-4e16-95ca-fdaa8dbd86df-utilities\") pod \"redhat-marketplace-cckj6\" (UID: \"dcee830f-a8da-4e16-95ca-fdaa8dbd86df\") " pod="openshift-marketplace/redhat-marketplace-cckj6" Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.718222 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcee830f-a8da-4e16-95ca-fdaa8dbd86df-utilities\") pod \"redhat-marketplace-cckj6\" (UID: \"dcee830f-a8da-4e16-95ca-fdaa8dbd86df\") " pod="openshift-marketplace/redhat-marketplace-cckj6" Jan 12 13:09:01 crc kubenswrapper[4580]: E0112 13:09:01.718294 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-12 13:09:02.218281224 +0000 UTC m=+141.262499913 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.718524 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcee830f-a8da-4e16-95ca-fdaa8dbd86df-catalog-content\") pod \"redhat-marketplace-cckj6\" (UID: \"dcee830f-a8da-4e16-95ca-fdaa8dbd86df\") " pod="openshift-marketplace/redhat-marketplace-cckj6" Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.749577 4580 generic.go:334] "Generic (PLEG): container finished" podID="adf8dbed-2e00-49c7-90f3-62f85ef5e078" containerID="323c587f2b49555ead9c3f16f974e23314a3480a0f6e33b7a8ad71315ae4501d" exitCode=0 Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.749973 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-drwn8" event={"ID":"adf8dbed-2e00-49c7-90f3-62f85ef5e078","Type":"ContainerDied","Data":"323c587f2b49555ead9c3f16f974e23314a3480a0f6e33b7a8ad71315ae4501d"} Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.750041 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-drwn8" event={"ID":"adf8dbed-2e00-49c7-90f3-62f85ef5e078","Type":"ContainerStarted","Data":"a8f807914a3538b53b53c02bc7273ac4a4903ab777c732abb418df65895499bf"} Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.755246 4580 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.759276 4580 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-v6w56\" (UniqueName: \"kubernetes.io/projected/dcee830f-a8da-4e16-95ca-fdaa8dbd86df-kube-api-access-v6w56\") pod \"redhat-marketplace-cckj6\" (UID: \"dcee830f-a8da-4e16-95ca-fdaa8dbd86df\") " pod="openshift-marketplace/redhat-marketplace-cckj6" Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.767336 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-262n7" event={"ID":"15e6f097-ed23-4797-9506-8c95af1dd7f9","Type":"ContainerStarted","Data":"a607bd26ad8e7e199bcc9f3419353b01ccaa5a0199fc4389df864fc419dde6a5"} Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.784630 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ch5j5" event={"ID":"78d6fc59-606f-4a88-b7be-467d9c41160d","Type":"ContainerStarted","Data":"0222f5fc1aac48c60210e4fa17a258f3acd3adf3c98285bbf64ef9eefb99f994"} Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.784674 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ch5j5" event={"ID":"78d6fc59-606f-4a88-b7be-467d9c41160d","Type":"ContainerStarted","Data":"401d8533077fc0fad84fc37298f2b79cfe48fabbd6ea30427afd1ee8258bac93"} Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.788913 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-z866m" event={"ID":"b70f0bc6-36e7-4a25-854b-4ca6364e6aa0","Type":"ContainerStarted","Data":"1e52321aeb593809840be73410441748f5420ef900df9663e61726aed9b1463a"} Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.820789 4580 generic.go:334] "Generic (PLEG): container finished" podID="eb8c503e-0907-40aa-a053-72d38311b08e" containerID="1cf789010390759d1c5a8ec178ab42bbbefddf7b6afd81b30458cd658603797c" exitCode=0 Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.820883 4580 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2l4gk" event={"ID":"eb8c503e-0907-40aa-a053-72d38311b08e","Type":"ContainerDied","Data":"1cf789010390759d1c5a8ec178ab42bbbefddf7b6afd81b30458cd658603797c"} Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.820923 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2l4gk" event={"ID":"eb8c503e-0907-40aa-a053-72d38311b08e","Type":"ContainerStarted","Data":"0a62d7903743b02b9a1866ab21f4c9040a949e36aaa01a5b1a0db81e6f7a5b88"} Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.825453 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxkcl\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl" Jan 12 13:09:01 crc kubenswrapper[4580]: E0112 13:09:01.826568 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-12 13:09:02.326553157 +0000 UTC m=+141.370771847 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxkcl" (UID: "cd2ced26-b320-44a3-aa98-457376b3d8c8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.849313 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pk8kj" Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.849478 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pk8kj" Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.850537 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-zkcs6" event={"ID":"a5c054e2-c14d-43cb-a432-ad8e9022b010","Type":"ContainerStarted","Data":"63a5c6ed5ed4d4ef070996e26089a3817682d1fdcde444647f40821c8d7eb2b1"} Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.857698 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xntjp" event={"ID":"5f3179c7-0610-4e19-91cd-9a84d32ac850","Type":"ContainerStarted","Data":"c77673757f01b5fc1bbe16fcb6b41426ab7a733b420680572494666c1e7a1dcf"} Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.857726 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xntjp" event={"ID":"5f3179c7-0610-4e19-91cd-9a84d32ac850","Type":"ContainerStarted","Data":"2adf5967ac254bd7c21ccef1760d5d532dfb194cfd83d7b01eee43fefa9cebee"} Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.859431 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pv5tk" event={"ID":"1d75f42a-a600-4c36-9da8-1f91f80336bc","Type":"ContainerStarted","Data":"831d959f1304da9a26f0a6cc3c8c93e99e2e156fa037daeb96cab721530e18e1"} Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.874559 4580 generic.go:334] "Generic (PLEG): container finished" podID="10b26e31-b5d9-491a-863a-1cc0a102eae8" containerID="6d9d157ced800fda22147f0e28254351ac60fe58e22a7235cafea546d96842f8" exitCode=0 Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.874628 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r98fk" event={"ID":"10b26e31-b5d9-491a-863a-1cc0a102eae8","Type":"ContainerDied","Data":"6d9d157ced800fda22147f0e28254351ac60fe58e22a7235cafea546d96842f8"} Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.874653 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r98fk" event={"ID":"10b26e31-b5d9-491a-863a-1cc0a102eae8","Type":"ContainerStarted","Data":"d7a8f303b0c0f45ad972e668d5dbc0110cef01b3b1e5e1fe47afa9658bcfd71a"} Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.885177 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pk8kj" Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.950016 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cckj6"
Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.950896 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.964890 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-twpq4" event={"ID":"0b75fc88-ca92-4fb9-826b-61322c929d1b","Type":"ContainerStarted","Data":"2965b3da11aff54a140edd88fe0b964cdce5a3deef6e27bced2b07bcac9d1fe3"}
Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.965456 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-twpq4"
Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.979734 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-2zrh8" event={"ID":"e7bdddf2-1c7b-4aa3-81f9-9df58a6e92b1","Type":"ContainerStarted","Data":"1162d6aab4f375d07f6e7f4ebb4d4053940ba51aeb10c0656648750ca4803912"}
Jan 12 13:09:01 crc kubenswrapper[4580]: E0112 13:09:01.980481 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-12 13:09:02.480459352 +0000 UTC m=+141.524678043 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.981899 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxkcl\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl"
Jan 12 13:09:01 crc kubenswrapper[4580]: E0112 13:09:01.985570 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-12 13:09:02.485553642 +0000 UTC m=+141.529772332 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxkcl" (UID: "cd2ced26-b320-44a3-aa98-457376b3d8c8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.989819 4580 generic.go:334] "Generic (PLEG): container finished" podID="0b558a84-c6d4-42fe-b9d7-0dd5d63f3064" containerID="113bfd4e3f334eeaf5adeb777c413130a008634ae484d3bd0c3cd4a90b18fb33" exitCode=0
Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.989904 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjqmq" event={"ID":"0b558a84-c6d4-42fe-b9d7-0dd5d63f3064","Type":"ContainerDied","Data":"113bfd4e3f334eeaf5adeb777c413130a008634ae484d3bd0c3cd4a90b18fb33"}
Jan 12 13:09:01 crc kubenswrapper[4580]: I0112 13:09:01.989930 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjqmq" event={"ID":"0b558a84-c6d4-42fe-b9d7-0dd5d63f3064","Type":"ContainerStarted","Data":"92b1be27d43c1372582ea8f85cf5d3c8e343c33f73938e78c2953564b7665836"}
Jan 12 13:09:02 crc kubenswrapper[4580]: I0112 13:09:02.020780 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-nzcxb" event={"ID":"fb691bf2-3f8a-4b11-977f-8a77ad9ad9bb","Type":"ContainerStarted","Data":"f028c7b92884aa2bef634d4d0eb40c033af39a41e72ab8daaf9c5ba673f2658a"}
Jan 12 13:09:02 crc kubenswrapper[4580]: I0112 13:09:02.024194 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xntjp" podStartSLOduration=122.024168089 podStartE2EDuration="2m2.024168089s" podCreationTimestamp="2026-01-12 13:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:09:02.016728658 +0000 UTC m=+141.060947349" watchObservedRunningTime="2026-01-12 13:09:02.024168089 +0000 UTC m=+141.068386779"
Jan 12 13:09:02 crc kubenswrapper[4580]: I0112 13:09:02.042575 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-rs6cr" event={"ID":"82a887ec-4d3a-4533-aa32-ee1eab68aa86","Type":"ContainerStarted","Data":"5c7e1704158bcdd61d18d5137a04063d860353b872b55932310bd516a040e88c"}
Jan 12 13:09:02 crc kubenswrapper[4580]: I0112 13:09:02.054642 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-c7ntw" event={"ID":"7f8d60b6-387f-4e18-9332-60acade1e93c","Type":"ContainerStarted","Data":"f6f61bb53a69fbd7b76fa915855c93b97727c50297665a64f7e08c3397ec5a92"}
Jan 12 13:09:02 crc kubenswrapper[4580]: I0112 13:09:02.055194 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-c7ntw"
Jan 12 13:09:02 crc kubenswrapper[4580]: I0112 13:09:02.097916 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 12 13:09:02 crc kubenswrapper[4580]: I0112 13:09:02.097997 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm" event={"ID":"7db5f72b-6a3e-4a3d-96bd-3e10756b605c","Type":"ContainerStarted","Data":"dec8eaf6aa627a54d35c903dadc9e0377962efe122776e533fa5ed3060061ed5"}
Jan 12 13:09:02 crc kubenswrapper[4580]: I0112 13:09:02.099233 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm"
Jan 12 13:09:02 crc kubenswrapper[4580]: E0112 13:09:02.100977 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-12 13:09:02.600948464 +0000 UTC m=+141.645167143 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 12 13:09:02 crc kubenswrapper[4580]: I0112 13:09:02.101693 4580 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-8sbrm container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.19:6443/healthz\": dial tcp 10.217.0.19:6443: connect: connection refused" start-of-body=
Jan 12 13:09:02 crc kubenswrapper[4580]: I0112 13:09:02.101730 4580 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm" podUID="7db5f72b-6a3e-4a3d-96bd-3e10756b605c" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.19:6443/healthz\": dial tcp 10.217.0.19:6443: connect: connection refused"
Jan 12 13:09:02 crc kubenswrapper[4580]: I0112 13:09:02.120811 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-262n7" podStartSLOduration=122.120794195 podStartE2EDuration="2m2.120794195s" podCreationTimestamp="2026-01-12 13:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:09:02.117912239 +0000 UTC m=+141.162130930" watchObservedRunningTime="2026-01-12 13:09:02.120794195 +0000 UTC m=+141.165012886"
Jan 12 13:09:02 crc kubenswrapper[4580]: I0112 13:09:02.127530 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-72pwr" event={"ID":"f3417afd-f5ef-4c91-990f-22c8a77f2713","Type":"ContainerStarted","Data":"fe6b1daaf2d4c3497cdc85f4e9eafe7e76a29df54c9f18045eb93bba7e1cd309"}
Jan 12 13:09:02 crc kubenswrapper[4580]: I0112 13:09:02.144896 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s8vg5" event={"ID":"46cfce88-b8c3-48f9-a957-c6eb80c59166","Type":"ContainerStarted","Data":"bd3983874bc8523b936193233c5c6b49df5a13ea81585399eb05979395b3d9e2"}
Jan 12 13:09:02 crc kubenswrapper[4580]: I0112 13:09:02.150655 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jzjkt" event={"ID":"dafdf187-36fd-4d32-b188-07a5cd4474a9","Type":"ContainerStarted","Data":"de62ab649ab8ffa95d733f6616de639c7000fa4387d96541614e56ee3253c904"}
Jan 12 13:09:02 crc kubenswrapper[4580]: I0112 13:09:02.150685 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jzjkt" event={"ID":"dafdf187-36fd-4d32-b188-07a5cd4474a9","Type":"ContainerStarted","Data":"a6bcd9663ab40d2ffe93d55f91432f8ec04b3f5417a08ca0c7d1c1a6b196d729"}
Jan 12 13:09:02 crc kubenswrapper[4580]: I0112 13:09:02.153073 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-27x9v"]
Jan 12 13:09:02 crc kubenswrapper[4580]: I0112 13:09:02.153207 4580 patch_prober.go:28] interesting pod/downloads-7954f5f757-2hzdj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body=
Jan 12 13:09:02 crc kubenswrapper[4580]: I0112 13:09:02.153236 4580 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2hzdj" podUID="2b783eb7-ca7b-41db-8342-bfdd6fdfb9b1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused"
Jan 12 13:09:02 crc kubenswrapper[4580]: I0112 13:09:02.154515 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-27x9v"
Jan 12 13:09:02 crc kubenswrapper[4580]: I0112 13:09:02.155124 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pv5tk" podStartSLOduration=122.15509239 podStartE2EDuration="2m2.15509239s" podCreationTimestamp="2026-01-12 13:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:09:02.148965688 +0000 UTC m=+141.193184368" watchObservedRunningTime="2026-01-12 13:09:02.15509239 +0000 UTC m=+141.199311080"
Jan 12 13:09:02 crc kubenswrapper[4580]: I0112 13:09:02.156746 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-cbltx"
Jan 12 13:09:02 crc kubenswrapper[4580]: I0112 13:09:02.175881 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pk8kj"
Jan 12 13:09:02 crc kubenswrapper[4580]: I0112 13:09:02.203431 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxkcl\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl"
Jan 12 13:09:02 crc kubenswrapper[4580]: E0112 13:09:02.203716 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-12 13:09:02.703703638 +0000 UTC m=+141.747922327 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxkcl" (UID: "cd2ced26-b320-44a3-aa98-457376b3d8c8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 12 13:09:02 crc kubenswrapper[4580]: I0112 13:09:02.213442 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-27x9v"]
Jan 12 13:09:02 crc kubenswrapper[4580]: I0112 13:09:02.222357 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 12 13:09:02 crc kubenswrapper[4580]: I0112 13:09:02.304625 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 12 13:09:02 crc kubenswrapper[4580]: E0112 13:09:02.304723 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-12 13:09:02.804711147 +0000 UTC m=+141.848929827 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 12 13:09:02 crc kubenswrapper[4580]: I0112 13:09:02.310720 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86490260-47c3-47d2-beca-c61e661882ca-utilities\") pod \"redhat-operators-27x9v\" (UID: \"86490260-47c3-47d2-beca-c61e661882ca\") " pod="openshift-marketplace/redhat-operators-27x9v"
Jan 12 13:09:02 crc kubenswrapper[4580]: I0112 13:09:02.310819 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxkcl\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl"
Jan 12 13:09:02 crc kubenswrapper[4580]: I0112 13:09:02.310876 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86490260-47c3-47d2-beca-c61e661882ca-catalog-content\") pod \"redhat-operators-27x9v\" (UID: \"86490260-47c3-47d2-beca-c61e661882ca\") " pod="openshift-marketplace/redhat-operators-27x9v"
Jan 12 13:09:02 crc kubenswrapper[4580]: I0112 13:09:02.310908 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45sjh\" (UniqueName: \"kubernetes.io/projected/86490260-47c3-47d2-beca-c61e661882ca-kube-api-access-45sjh\") pod \"redhat-operators-27x9v\" (UID: \"86490260-47c3-47d2-beca-c61e661882ca\") " pod="openshift-marketplace/redhat-operators-27x9v"
Jan 12 13:09:02 crc kubenswrapper[4580]: E0112 13:09:02.313949 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-12 13:09:02.813936272 +0000 UTC m=+141.858154953 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxkcl" (UID: "cd2ced26-b320-44a3-aa98-457376b3d8c8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 12 13:09:02 crc kubenswrapper[4580]: I0112 13:09:02.319230 4580 patch_prober.go:28] interesting pod/router-default-5444994796-phs5z container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 12 13:09:02 crc kubenswrapper[4580]: [-]has-synced failed: reason withheld
Jan 12 13:09:02 crc kubenswrapper[4580]: [+]process-running ok
Jan 12 13:09:02 crc kubenswrapper[4580]: healthz check failed
Jan 12 13:09:02 crc kubenswrapper[4580]: I0112 13:09:02.319284 4580 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-phs5z" podUID="f0bdcb8e-d435-41c5-a140-1b17752fa7ec" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 12 13:09:02 crc kubenswrapper[4580]: I0112 13:09:02.331987 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ch5j5" podStartSLOduration=122.331967846 podStartE2EDuration="2m2.331967846s" podCreationTimestamp="2026-01-12 13:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:09:02.249900022 +0000 UTC m=+141.294118711" watchObservedRunningTime="2026-01-12 13:09:02.331967846 +0000 UTC m=+141.376186536"
Jan 12 13:09:02 crc kubenswrapper[4580]: I0112 13:09:02.348462 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-twpq4"
Jan 12 13:09:02 crc kubenswrapper[4580]: I0112 13:09:02.415186 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 12 13:09:02 crc kubenswrapper[4580]: I0112 13:09:02.415790 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86490260-47c3-47d2-beca-c61e661882ca-utilities\") pod \"redhat-operators-27x9v\" (UID: \"86490260-47c3-47d2-beca-c61e661882ca\") " pod="openshift-marketplace/redhat-operators-27x9v"
Jan 12 13:09:02 crc kubenswrapper[4580]: I0112 13:09:02.415850 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86490260-47c3-47d2-beca-c61e661882ca-catalog-content\") pod \"redhat-operators-27x9v\" (UID: \"86490260-47c3-47d2-beca-c61e661882ca\") " pod="openshift-marketplace/redhat-operators-27x9v"
Jan 12 13:09:02 crc kubenswrapper[4580]: I0112 13:09:02.415874 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45sjh\" (UniqueName: \"kubernetes.io/projected/86490260-47c3-47d2-beca-c61e661882ca-kube-api-access-45sjh\") pod \"redhat-operators-27x9v\" (UID: \"86490260-47c3-47d2-beca-c61e661882ca\") " pod="openshift-marketplace/redhat-operators-27x9v"
Jan 12 13:09:02 crc kubenswrapper[4580]: E0112 13:09:02.416461 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-12 13:09:02.916441808 +0000 UTC m=+141.960660499 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 12 13:09:02 crc kubenswrapper[4580]: I0112 13:09:02.416863 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86490260-47c3-47d2-beca-c61e661882ca-utilities\") pod \"redhat-operators-27x9v\" (UID: \"86490260-47c3-47d2-beca-c61e661882ca\") " pod="openshift-marketplace/redhat-operators-27x9v"
Jan 12 13:09:02 crc kubenswrapper[4580]: I0112 13:09:02.417118 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86490260-47c3-47d2-beca-c61e661882ca-catalog-content\") pod \"redhat-operators-27x9v\" (UID: \"86490260-47c3-47d2-beca-c61e661882ca\") " pod="openshift-marketplace/redhat-operators-27x9v"
Jan 12 13:09:02 crc kubenswrapper[4580]: I0112 13:09:02.420808 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jbkw4"]
Jan 12 13:09:02 crc kubenswrapper[4580]: W0112 13:09:02.471607 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0fb0ae3e_224b_4dba_8e3d_783df7049f05.slice/crio-08717937b198e2faa13c1edf55905e03895d8f396a82ef452d25eeb963b73c12 WatchSource:0}: Error finding container 08717937b198e2faa13c1edf55905e03895d8f396a82ef452d25eeb963b73c12: Status 404 returned error can't find the container with id 08717937b198e2faa13c1edf55905e03895d8f396a82ef452d25eeb963b73c12
Jan 12 13:09:02 crc kubenswrapper[4580]: I0112 13:09:02.473915 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45sjh\" (UniqueName: \"kubernetes.io/projected/86490260-47c3-47d2-beca-c61e661882ca-kube-api-access-45sjh\") pod \"redhat-operators-27x9v\" (UID: \"86490260-47c3-47d2-beca-c61e661882ca\") " pod="openshift-marketplace/redhat-operators-27x9v"
Jan 12 13:09:02 crc kubenswrapper[4580]: I0112 13:09:02.493845 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jzjkt" podStartSLOduration=123.493815032 podStartE2EDuration="2m3.493815032s" podCreationTimestamp="2026-01-12 13:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:09:02.493763155 +0000 UTC m=+141.537981835" watchObservedRunningTime="2026-01-12 13:09:02.493815032 +0000 UTC m=+141.538033722"
Jan 12 13:09:02 crc kubenswrapper[4580]: I0112 13:09:02.497277 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-27x9v"
Jan 12 13:09:02 crc kubenswrapper[4580]: I0112 13:09:02.520441 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxkcl\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl"
Jan 12 13:09:02 crc kubenswrapper[4580]: E0112 13:09:02.520769 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-12 13:09:03.020757904 +0000 UTC m=+142.064976594 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxkcl" (UID: "cd2ced26-b320-44a3-aa98-457376b3d8c8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 12 13:09:02 crc kubenswrapper[4580]: I0112 13:09:02.543219 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-rs6cr" podStartSLOduration=122.543204938 podStartE2EDuration="2m2.543204938s" podCreationTimestamp="2026-01-12 13:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:09:02.541854338 +0000 UTC m=+141.586073028" watchObservedRunningTime="2026-01-12 13:09:02.543204938 +0000 UTC m=+141.587423628"
Jan 12 13:09:02 crc kubenswrapper[4580]: I0112 13:09:02.559437 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lgw9g"]
Jan 12 13:09:02 crc kubenswrapper[4580]: I0112 13:09:02.560544 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lgw9g"
Jan 12 13:09:02 crc kubenswrapper[4580]: I0112 13:09:02.596450 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lgw9g"]
Jan 12 13:09:02 crc kubenswrapper[4580]: I0112 13:09:02.621998 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 12 13:09:02 crc kubenswrapper[4580]: E0112 13:09:02.622131 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-12 13:09:03.122117174 +0000 UTC m=+142.166335863 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 12 13:09:02 crc kubenswrapper[4580]: I0112 13:09:02.622334 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/baa87674-7b1c-4327-92b5-fe9ebaac18f5-utilities\") pod \"redhat-operators-lgw9g\" (UID: \"baa87674-7b1c-4327-92b5-fe9ebaac18f5\") " pod="openshift-marketplace/redhat-operators-lgw9g"
Jan 12 13:09:02 crc kubenswrapper[4580]: I0112 13:09:02.622406 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/baa87674-7b1c-4327-92b5-fe9ebaac18f5-catalog-content\") pod \"redhat-operators-lgw9g\" (UID: \"baa87674-7b1c-4327-92b5-fe9ebaac18f5\") " pod="openshift-marketplace/redhat-operators-lgw9g"
Jan 12 13:09:02 crc kubenswrapper[4580]: I0112 13:09:02.622618 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxkcl\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl"
Jan 12 13:09:02 crc kubenswrapper[4580]: E0112 13:09:02.622867 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-12 13:09:03.122858862 +0000 UTC m=+142.167077552 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxkcl" (UID: "cd2ced26-b320-44a3-aa98-457376b3d8c8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 12 13:09:02 crc kubenswrapper[4580]: I0112 13:09:02.635794 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cckj6"]
Jan 12 13:09:02 crc kubenswrapper[4580]: I0112 13:09:02.650898 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-twpq4" podStartSLOduration=123.650874863 podStartE2EDuration="2m3.650874863s" podCreationTimestamp="2026-01-12 13:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:09:02.648461104 +0000 UTC m=+141.692679794" watchObservedRunningTime="2026-01-12 13:09:02.650874863 +0000 UTC m=+141.695093553"
Jan 12 13:09:02 crc kubenswrapper[4580]: I0112 13:09:02.730835 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 12 13:09:02 crc kubenswrapper[4580]: I0112 13:09:02.731348 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/baa87674-7b1c-4327-92b5-fe9ebaac18f5-utilities\") pod \"redhat-operators-lgw9g\" (UID: \"baa87674-7b1c-4327-92b5-fe9ebaac18f5\") " pod="openshift-marketplace/redhat-operators-lgw9g"
Jan 12 13:09:02 crc kubenswrapper[4580]: I0112 13:09:02.731411 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/baa87674-7b1c-4327-92b5-fe9ebaac18f5-catalog-content\") pod \"redhat-operators-lgw9g\" (UID: \"baa87674-7b1c-4327-92b5-fe9ebaac18f5\") " pod="openshift-marketplace/redhat-operators-lgw9g"
Jan 12 13:09:02 crc kubenswrapper[4580]: I0112 13:09:02.731526 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frjzs\" (UniqueName: \"kubernetes.io/projected/baa87674-7b1c-4327-92b5-fe9ebaac18f5-kube-api-access-frjzs\") pod \"redhat-operators-lgw9g\" (UID: \"baa87674-7b1c-4327-92b5-fe9ebaac18f5\") " pod="openshift-marketplace/redhat-operators-lgw9g"
Jan 12 13:09:02 crc kubenswrapper[4580]: I0112 13:09:02.733827 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/baa87674-7b1c-4327-92b5-fe9ebaac18f5-catalog-content\") pod \"redhat-operators-lgw9g\" (UID: \"baa87674-7b1c-4327-92b5-fe9ebaac18f5\") " pod="openshift-marketplace/redhat-operators-lgw9g"
Jan 12 13:09:02 crc kubenswrapper[4580]: I0112 13:09:02.738594 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/baa87674-7b1c-4327-92b5-fe9ebaac18f5-utilities\") pod \"redhat-operators-lgw9g\" (UID: \"baa87674-7b1c-4327-92b5-fe9ebaac18f5\") " pod="openshift-marketplace/redhat-operators-lgw9g"
Jan 12 13:09:02 crc kubenswrapper[4580]: E0112 13:09:02.739214 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-12 13:09:03.239193223 +0000 UTC m=+142.283411914 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 12 13:09:02 crc kubenswrapper[4580]: I0112 13:09:02.741726 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm" podStartSLOduration=122.74171201 podStartE2EDuration="2m2.74171201s" podCreationTimestamp="2026-01-12 13:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:09:02.718021458 +0000 UTC m=+141.762240148" watchObservedRunningTime="2026-01-12 13:09:02.74171201 +0000 UTC m=+141.785930700"
Jan 12 13:09:02 crc kubenswrapper[4580]: I0112 13:09:02.766359 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-72pwr" podStartSLOduration=122.766338965 podStartE2EDuration="2m2.766338965s" podCreationTimestamp="2026-01-12 13:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:09:02.738795358 +0000 UTC m=+141.783014049" watchObservedRunningTime="2026-01-12 13:09:02.766338965 +0000 UTC m=+141.810557644"
Jan 12 13:09:02 crc kubenswrapper[4580]: I0112 13:09:02.788245 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-nzcxb" podStartSLOduration=122.788221892 podStartE2EDuration="2m2.788221892s" podCreationTimestamp="2026-01-12 13:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:09:02.785718463 +0000 UTC m=+141.829937153" watchObservedRunningTime="2026-01-12 13:09:02.788221892 +0000 UTC m=+141.832440581"
Jan 12 13:09:02 crc kubenswrapper[4580]: I0112 13:09:02.835606 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frjzs\" (UniqueName: \"kubernetes.io/projected/baa87674-7b1c-4327-92b5-fe9ebaac18f5-kube-api-access-frjzs\") pod \"redhat-operators-lgw9g\" (UID: \"baa87674-7b1c-4327-92b5-fe9ebaac18f5\") " pod="openshift-marketplace/redhat-operators-lgw9g"
Jan 12 13:09:02 crc kubenswrapper[4580]: I0112 13:09:02.835704 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxkcl\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl"
Jan 12 13:09:02 crc kubenswrapper[4580]: E0112 13:09:02.835936 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-12 13:09:03.33592598 +0000 UTC m=+142.380144669 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxkcl" (UID: "cd2ced26-b320-44a3-aa98-457376b3d8c8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 12 13:09:02 crc kubenswrapper[4580]: I0112 13:09:02.855381 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frjzs\" (UniqueName: \"kubernetes.io/projected/baa87674-7b1c-4327-92b5-fe9ebaac18f5-kube-api-access-frjzs\") pod \"redhat-operators-lgw9g\" (UID: \"baa87674-7b1c-4327-92b5-fe9ebaac18f5\") " pod="openshift-marketplace/redhat-operators-lgw9g"
Jan 12 13:09:02 crc kubenswrapper[4580]: I0112 13:09:02.862362 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-2zrh8" podStartSLOduration=122.862345581 podStartE2EDuration="2m2.862345581s" podCreationTimestamp="2026-01-12 13:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:09:02.807376299 +0000 UTC m=+141.851594989" watchObservedRunningTime="2026-01-12 13:09:02.862345581 +0000 UTC m=+141.906564272"
Jan 12 13:09:02 crc kubenswrapper[4580]: I0112 13:09:02.863873 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-c7ntw" podStartSLOduration=8.863868353 podStartE2EDuration="8.863868353s" podCreationTimestamp="2026-01-12 13:08:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:09:02.861576902 +0000 UTC m=+141.905795592" watchObservedRunningTime="2026-01-12 13:09:02.863868353 +0000 UTC
m=+141.908087043" Jan 12 13:09:02 crc kubenswrapper[4580]: I0112 13:09:02.937172 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 12 13:09:02 crc kubenswrapper[4580]: E0112 13:09:02.937288 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-12 13:09:03.437268688 +0000 UTC m=+142.481487378 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:09:02 crc kubenswrapper[4580]: I0112 13:09:02.937544 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxkcl\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl" Jan 12 13:09:02 crc kubenswrapper[4580]: E0112 13:09:02.937829 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-12 13:09:03.437822104 +0000 UTC m=+142.482040794 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxkcl" (UID: "cd2ced26-b320-44a3-aa98-457376b3d8c8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:09:02 crc kubenswrapper[4580]: I0112 13:09:02.974377 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lgw9g" Jan 12 13:09:02 crc kubenswrapper[4580]: I0112 13:09:02.997498 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-27x9v"] Jan 12 13:09:03 crc kubenswrapper[4580]: W0112 13:09:03.026011 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86490260_47c3_47d2_beca_c61e661882ca.slice/crio-942357b648f305e36a9a6ca4b04792356ce2b64f099c2f3aa5c2b85e2d3e5fd0 WatchSource:0}: Error finding container 942357b648f305e36a9a6ca4b04792356ce2b64f099c2f3aa5c2b85e2d3e5fd0: Status 404 returned error can't find the container with id 942357b648f305e36a9a6ca4b04792356ce2b64f099c2f3aa5c2b85e2d3e5fd0 Jan 12 13:09:03 crc kubenswrapper[4580]: I0112 13:09:03.038638 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 12 13:09:03 crc kubenswrapper[4580]: E0112 13:09:03.038778 4580 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-12 13:09:03.538739786 +0000 UTC m=+142.582958476 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:09:03 crc kubenswrapper[4580]: I0112 13:09:03.039304 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxkcl\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl" Jan 12 13:09:03 crc kubenswrapper[4580]: E0112 13:09:03.039737 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-12 13:09:03.539727807 +0000 UTC m=+142.583946497 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxkcl" (UID: "cd2ced26-b320-44a3-aa98-457376b3d8c8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:09:03 crc kubenswrapper[4580]: I0112 13:09:03.141551 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 12 13:09:03 crc kubenswrapper[4580]: E0112 13:09:03.160592 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-12 13:09:03.660561714 +0000 UTC m=+142.704780403 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:09:03 crc kubenswrapper[4580]: I0112 13:09:03.207500 4580 generic.go:334] "Generic (PLEG): container finished" podID="037a95c2-1119-4fd8-8499-682fba2f03ea" containerID="429218504b15d49c71ec491ade7f77e78c38b8310607579b22cd67a199946598" exitCode=0 Jan 12 13:09:03 crc kubenswrapper[4580]: I0112 13:09:03.207565 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29470380-nk5n7" event={"ID":"037a95c2-1119-4fd8-8499-682fba2f03ea","Type":"ContainerDied","Data":"429218504b15d49c71ec491ade7f77e78c38b8310607579b22cd67a199946598"} Jan 12 13:09:03 crc kubenswrapper[4580]: I0112 13:09:03.210408 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-zdvz7" event={"ID":"405ad898-9997-4efd-b8a8-f878c39784b5","Type":"ContainerStarted","Data":"cd78c7ab35ecddfbfefe23cf450ca7b751ddcd47a28712ef03134c55424ef2be"} Jan 12 13:09:03 crc kubenswrapper[4580]: I0112 13:09:03.210470 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-zdvz7" event={"ID":"405ad898-9997-4efd-b8a8-f878c39784b5","Type":"ContainerStarted","Data":"ed9aa7985f2cc4f5cee8b8ef8b282b7fd69bba8a90e53d2e9b42ef43499fdea9"} Jan 12 13:09:03 crc kubenswrapper[4580]: I0112 13:09:03.213440 4580 generic.go:334] "Generic (PLEG): container finished" podID="0fb0ae3e-224b-4dba-8e3d-783df7049f05" containerID="0985df62124517b54fa54c1aecf0ed97bb6b2efcf7d5fdcfd0292831afc5e65a" exitCode=0 Jan 12 13:09:03 crc kubenswrapper[4580]: I0112 
13:09:03.213527 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jbkw4" event={"ID":"0fb0ae3e-224b-4dba-8e3d-783df7049f05","Type":"ContainerDied","Data":"0985df62124517b54fa54c1aecf0ed97bb6b2efcf7d5fdcfd0292831afc5e65a"} Jan 12 13:09:03 crc kubenswrapper[4580]: I0112 13:09:03.213549 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jbkw4" event={"ID":"0fb0ae3e-224b-4dba-8e3d-783df7049f05","Type":"ContainerStarted","Data":"08717937b198e2faa13c1edf55905e03895d8f396a82ef452d25eeb963b73c12"} Jan 12 13:09:03 crc kubenswrapper[4580]: I0112 13:09:03.216792 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-27x9v" event={"ID":"86490260-47c3-47d2-beca-c61e661882ca","Type":"ContainerStarted","Data":"942357b648f305e36a9a6ca4b04792356ce2b64f099c2f3aa5c2b85e2d3e5fd0"} Jan 12 13:09:03 crc kubenswrapper[4580]: I0112 13:09:03.252754 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxkcl\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl" Jan 12 13:09:03 crc kubenswrapper[4580]: E0112 13:09:03.256601 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-12 13:09:03.756586022 +0000 UTC m=+142.800804712 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hxkcl" (UID: "cd2ced26-b320-44a3-aa98-457376b3d8c8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:09:03 crc kubenswrapper[4580]: I0112 13:09:03.258472 4580 generic.go:334] "Generic (PLEG): container finished" podID="dcee830f-a8da-4e16-95ca-fdaa8dbd86df" containerID="1bd2572a65747e1ba5d5db182ab0c3aaed985e47a74fef711f0411ff817c4ae5" exitCode=0 Jan 12 13:09:03 crc kubenswrapper[4580]: I0112 13:09:03.259844 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cckj6" event={"ID":"dcee830f-a8da-4e16-95ca-fdaa8dbd86df","Type":"ContainerDied","Data":"1bd2572a65747e1ba5d5db182ab0c3aaed985e47a74fef711f0411ff817c4ae5"} Jan 12 13:09:03 crc kubenswrapper[4580]: I0112 13:09:03.259879 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cckj6" event={"ID":"dcee830f-a8da-4e16-95ca-fdaa8dbd86df","Type":"ContainerStarted","Data":"ab655ae8487e4dbeea9e4261fced9b7e1290d93985d0bc3f890324209b6e9f8b"} Jan 12 13:09:03 crc kubenswrapper[4580]: I0112 13:09:03.261731 4580 patch_prober.go:28] interesting pod/downloads-7954f5f757-2hzdj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Jan 12 13:09:03 crc kubenswrapper[4580]: I0112 13:09:03.261764 4580 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2hzdj" podUID="2b783eb7-ca7b-41db-8342-bfdd6fdfb9b1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": 
dial tcp 10.217.0.18:8080: connect: connection refused" Jan 12 13:09:03 crc kubenswrapper[4580]: I0112 13:09:03.276527 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm" Jan 12 13:09:03 crc kubenswrapper[4580]: I0112 13:09:03.301290 4580 patch_prober.go:28] interesting pod/router-default-5444994796-phs5z container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 12 13:09:03 crc kubenswrapper[4580]: [-]has-synced failed: reason withheld Jan 12 13:09:03 crc kubenswrapper[4580]: [+]process-running ok Jan 12 13:09:03 crc kubenswrapper[4580]: healthz check failed Jan 12 13:09:03 crc kubenswrapper[4580]: I0112 13:09:03.301535 4580 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-phs5z" podUID="f0bdcb8e-d435-41c5-a140-1b17752fa7ec" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 12 13:09:03 crc kubenswrapper[4580]: I0112 13:09:03.338213 4580 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 12 13:09:03 crc kubenswrapper[4580]: I0112 13:09:03.356853 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 12 13:09:03 crc kubenswrapper[4580]: E0112 13:09:03.357288 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2026-01-12 13:09:03.857270499 +0000 UTC m=+142.901489188 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 12 13:09:03 crc kubenswrapper[4580]: I0112 13:09:03.429654 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 12 13:09:03 crc kubenswrapper[4580]: I0112 13:09:03.430296 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lgw9g"] Jan 12 13:09:03 crc kubenswrapper[4580]: I0112 13:09:03.430323 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 12 13:09:03 crc kubenswrapper[4580]: I0112 13:09:03.430392 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 12 13:09:03 crc kubenswrapper[4580]: I0112 13:09:03.432377 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 12 13:09:03 crc kubenswrapper[4580]: I0112 13:09:03.433186 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 12 13:09:03 crc kubenswrapper[4580]: I0112 13:09:03.433326 4580 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-12T13:09:03.338317809Z","Handler":null,"Name":""} Jan 12 13:09:03 crc kubenswrapper[4580]: I0112 13:09:03.463635 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxkcl\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") " pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl" Jan 12 13:09:03 crc kubenswrapper[4580]: I0112 13:09:03.515606 4580 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 12 13:09:03 crc kubenswrapper[4580]: I0112 13:09:03.515651 4580 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 12 13:09:03 crc kubenswrapper[4580]: I0112 13:09:03.542325 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vpzdt" Jan 12 13:09:03 crc kubenswrapper[4580]: I0112 13:09:03.543189 4580 
csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 12 13:09:03 crc kubenswrapper[4580]: I0112 13:09:03.543237 4580 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxkcl\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl" Jan 12 13:09:03 crc kubenswrapper[4580]: I0112 13:09:03.568513 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ba1d2d80-ce4b-475b-9405-955ccd5b12a2-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ba1d2d80-ce4b-475b-9405-955ccd5b12a2\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 12 13:09:03 crc kubenswrapper[4580]: I0112 13:09:03.568588 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ba1d2d80-ce4b-475b-9405-955ccd5b12a2-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ba1d2d80-ce4b-475b-9405-955ccd5b12a2\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 12 13:09:03 crc kubenswrapper[4580]: I0112 13:09:03.645298 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hxkcl\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl" Jan 12 13:09:03 crc kubenswrapper[4580]: I0112 13:09:03.670432 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 12 13:09:03 crc kubenswrapper[4580]: I0112 13:09:03.670893 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ba1d2d80-ce4b-475b-9405-955ccd5b12a2-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ba1d2d80-ce4b-475b-9405-955ccd5b12a2\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 12 13:09:03 crc kubenswrapper[4580]: I0112 13:09:03.670978 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ba1d2d80-ce4b-475b-9405-955ccd5b12a2-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ba1d2d80-ce4b-475b-9405-955ccd5b12a2\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 12 13:09:03 crc kubenswrapper[4580]: I0112 13:09:03.671046 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ba1d2d80-ce4b-475b-9405-955ccd5b12a2-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ba1d2d80-ce4b-475b-9405-955ccd5b12a2\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 12 13:09:03 crc kubenswrapper[4580]: I0112 13:09:03.702689 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ba1d2d80-ce4b-475b-9405-955ccd5b12a2-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ba1d2d80-ce4b-475b-9405-955ccd5b12a2\") " 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 12 13:09:03 crc kubenswrapper[4580]: I0112 13:09:03.720696 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 12 13:09:03 crc kubenswrapper[4580]: I0112 13:09:03.775893 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 12 13:09:03 crc kubenswrapper[4580]: I0112 13:09:03.817176 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl" Jan 12 13:09:04 crc kubenswrapper[4580]: I0112 13:09:04.277578 4580 generic.go:334] "Generic (PLEG): container finished" podID="baa87674-7b1c-4327-92b5-fe9ebaac18f5" containerID="1990c7eb0c64da703cbcd59e189ae7a9d7a692312369fc55015715b694e96b3d" exitCode=0 Jan 12 13:09:04 crc kubenswrapper[4580]: I0112 13:09:04.277972 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lgw9g" event={"ID":"baa87674-7b1c-4327-92b5-fe9ebaac18f5","Type":"ContainerDied","Data":"1990c7eb0c64da703cbcd59e189ae7a9d7a692312369fc55015715b694e96b3d"} Jan 12 13:09:04 crc kubenswrapper[4580]: I0112 13:09:04.278209 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lgw9g" event={"ID":"baa87674-7b1c-4327-92b5-fe9ebaac18f5","Type":"ContainerStarted","Data":"d1306539488c6db07392d542eede6377abc79f8488af0356eb8d74bb73482d73"} Jan 12 13:09:04 crc kubenswrapper[4580]: I0112 13:09:04.296811 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-image-registry/image-registry-697d97f7c8-hxkcl"] Jan 12 13:09:04 crc kubenswrapper[4580]: I0112 13:09:04.298901 4580 patch_prober.go:28] interesting pod/router-default-5444994796-phs5z container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 12 13:09:04 crc kubenswrapper[4580]: [-]has-synced failed: reason withheld Jan 12 13:09:04 crc kubenswrapper[4580]: [+]process-running ok Jan 12 13:09:04 crc kubenswrapper[4580]: healthz check failed Jan 12 13:09:04 crc kubenswrapper[4580]: I0112 13:09:04.298945 4580 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-phs5z" podUID="f0bdcb8e-d435-41c5-a140-1b17752fa7ec" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 12 13:09:04 crc kubenswrapper[4580]: I0112 13:09:04.308467 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-zdvz7" event={"ID":"405ad898-9997-4efd-b8a8-f878c39784b5","Type":"ContainerStarted","Data":"0ee7fa2a059a118d7e1f1d1b40fcf10fa40946f8c3656cba0c69ec494830597b"} Jan 12 13:09:04 crc kubenswrapper[4580]: I0112 13:09:04.314589 4580 generic.go:334] "Generic (PLEG): container finished" podID="86490260-47c3-47d2-beca-c61e661882ca" containerID="37162311171ec70deba7472701769b25ac30bf8be23909920c28ac7b0cc39579" exitCode=0 Jan 12 13:09:04 crc kubenswrapper[4580]: I0112 13:09:04.314866 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-27x9v" event={"ID":"86490260-47c3-47d2-beca-c61e661882ca","Type":"ContainerDied","Data":"37162311171ec70deba7472701769b25ac30bf8be23909920c28ac7b0cc39579"} Jan 12 13:09:04 crc kubenswrapper[4580]: I0112 13:09:04.464321 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 12 13:09:04 crc 
kubenswrapper[4580]: I0112 13:09:04.657793 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29470380-nk5n7" Jan 12 13:09:04 crc kubenswrapper[4580]: I0112 13:09:04.788585 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/037a95c2-1119-4fd8-8499-682fba2f03ea-config-volume\") pod \"037a95c2-1119-4fd8-8499-682fba2f03ea\" (UID: \"037a95c2-1119-4fd8-8499-682fba2f03ea\") " Jan 12 13:09:04 crc kubenswrapper[4580]: I0112 13:09:04.788687 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/037a95c2-1119-4fd8-8499-682fba2f03ea-secret-volume\") pod \"037a95c2-1119-4fd8-8499-682fba2f03ea\" (UID: \"037a95c2-1119-4fd8-8499-682fba2f03ea\") " Jan 12 13:09:04 crc kubenswrapper[4580]: I0112 13:09:04.788752 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24qsq\" (UniqueName: \"kubernetes.io/projected/037a95c2-1119-4fd8-8499-682fba2f03ea-kube-api-access-24qsq\") pod \"037a95c2-1119-4fd8-8499-682fba2f03ea\" (UID: \"037a95c2-1119-4fd8-8499-682fba2f03ea\") " Jan 12 13:09:04 crc kubenswrapper[4580]: I0112 13:09:04.789531 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/037a95c2-1119-4fd8-8499-682fba2f03ea-config-volume" (OuterVolumeSpecName: "config-volume") pod "037a95c2-1119-4fd8-8499-682fba2f03ea" (UID: "037a95c2-1119-4fd8-8499-682fba2f03ea"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:09:04 crc kubenswrapper[4580]: I0112 13:09:04.801610 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/037a95c2-1119-4fd8-8499-682fba2f03ea-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "037a95c2-1119-4fd8-8499-682fba2f03ea" (UID: "037a95c2-1119-4fd8-8499-682fba2f03ea"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:09:04 crc kubenswrapper[4580]: I0112 13:09:04.811910 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/037a95c2-1119-4fd8-8499-682fba2f03ea-kube-api-access-24qsq" (OuterVolumeSpecName: "kube-api-access-24qsq") pod "037a95c2-1119-4fd8-8499-682fba2f03ea" (UID: "037a95c2-1119-4fd8-8499-682fba2f03ea"). InnerVolumeSpecName "kube-api-access-24qsq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:09:04 crc kubenswrapper[4580]: I0112 13:09:04.890464 4580 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/037a95c2-1119-4fd8-8499-682fba2f03ea-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 12 13:09:04 crc kubenswrapper[4580]: I0112 13:09:04.890505 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24qsq\" (UniqueName: \"kubernetes.io/projected/037a95c2-1119-4fd8-8499-682fba2f03ea-kube-api-access-24qsq\") on node \"crc\" DevicePath \"\"" Jan 12 13:09:04 crc kubenswrapper[4580]: I0112 13:09:04.890515 4580 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/037a95c2-1119-4fd8-8499-682fba2f03ea-config-volume\") on node \"crc\" DevicePath \"\"" Jan 12 13:09:05 crc kubenswrapper[4580]: I0112 13:09:05.291777 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" 
path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 12 13:09:05 crc kubenswrapper[4580]: I0112 13:09:05.298073 4580 patch_prober.go:28] interesting pod/router-default-5444994796-phs5z container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 12 13:09:05 crc kubenswrapper[4580]: [-]has-synced failed: reason withheld Jan 12 13:09:05 crc kubenswrapper[4580]: [+]process-running ok Jan 12 13:09:05 crc kubenswrapper[4580]: healthz check failed Jan 12 13:09:05 crc kubenswrapper[4580]: I0112 13:09:05.298165 4580 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-phs5z" podUID="f0bdcb8e-d435-41c5-a140-1b17752fa7ec" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 12 13:09:05 crc kubenswrapper[4580]: I0112 13:09:05.339324 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl" event={"ID":"cd2ced26-b320-44a3-aa98-457376b3d8c8","Type":"ContainerStarted","Data":"090039f9faae895dde1c73db47570c9ca780dcb20aae0ee6df9d21fa42910af0"} Jan 12 13:09:05 crc kubenswrapper[4580]: I0112 13:09:05.339374 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl" event={"ID":"cd2ced26-b320-44a3-aa98-457376b3d8c8","Type":"ContainerStarted","Data":"3f4384b8b149ea29b9f07d99c24a77c90f4e8950807fc0b58fbdef509d95a1df"} Jan 12 13:09:05 crc kubenswrapper[4580]: I0112 13:09:05.339467 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl" Jan 12 13:09:05 crc kubenswrapper[4580]: I0112 13:09:05.343112 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" 
event={"ID":"ba1d2d80-ce4b-475b-9405-955ccd5b12a2","Type":"ContainerStarted","Data":"b7ff4e8ed8dc6ad278dc4562c8cf968bad0609eefaa73cdff3a31b5da3651b33"} Jan 12 13:09:05 crc kubenswrapper[4580]: I0112 13:09:05.343154 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"ba1d2d80-ce4b-475b-9405-955ccd5b12a2","Type":"ContainerStarted","Data":"b104a5ddb025a148bc279b513575280d37c6426c7d326992135ffcb0b86272ac"} Jan 12 13:09:05 crc kubenswrapper[4580]: I0112 13:09:05.347002 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29470380-nk5n7" Jan 12 13:09:05 crc kubenswrapper[4580]: I0112 13:09:05.347206 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29470380-nk5n7" event={"ID":"037a95c2-1119-4fd8-8499-682fba2f03ea","Type":"ContainerDied","Data":"1c3b02a8b21c373c86a3f3bfe8f156b7cff0aec994eb9028f385cc4cd5536202"} Jan 12 13:09:05 crc kubenswrapper[4580]: I0112 13:09:05.347248 4580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c3b02a8b21c373c86a3f3bfe8f156b7cff0aec994eb9028f385cc4cd5536202" Jan 12 13:09:05 crc kubenswrapper[4580]: I0112 13:09:05.353297 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-zdvz7" event={"ID":"405ad898-9997-4efd-b8a8-f878c39784b5","Type":"ContainerStarted","Data":"c4b8876d3ee4d0d1812dc0f0b0ea9d965ea90c82e595beb78e5cb96df2aa731b"} Jan 12 13:09:05 crc kubenswrapper[4580]: I0112 13:09:05.363298 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl" podStartSLOduration=125.363284502 podStartE2EDuration="2m5.363284502s" podCreationTimestamp="2026-01-12 13:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:09:05.357066299 +0000 UTC m=+144.401284989" watchObservedRunningTime="2026-01-12 13:09:05.363284502 +0000 UTC m=+144.407503193" Jan 12 13:09:05 crc kubenswrapper[4580]: I0112 13:09:05.381291 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.381230637 podStartE2EDuration="2.381230637s" podCreationTimestamp="2026-01-12 13:09:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:09:05.373134487 +0000 UTC m=+144.417353176" watchObservedRunningTime="2026-01-12 13:09:05.381230637 +0000 UTC m=+144.425449327" Jan 12 13:09:05 crc kubenswrapper[4580]: I0112 13:09:05.392533 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-zdvz7" podStartSLOduration=11.392522131 podStartE2EDuration="11.392522131s" podCreationTimestamp="2026-01-12 13:08:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:09:05.389984569 +0000 UTC m=+144.434203259" watchObservedRunningTime="2026-01-12 13:09:05.392522131 +0000 UTC m=+144.436740821" Jan 12 13:09:06 crc kubenswrapper[4580]: I0112 13:09:06.296749 4580 patch_prober.go:28] interesting pod/router-default-5444994796-phs5z container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 12 13:09:06 crc kubenswrapper[4580]: [-]has-synced failed: reason withheld Jan 12 13:09:06 crc kubenswrapper[4580]: [+]process-running ok Jan 12 13:09:06 crc kubenswrapper[4580]: healthz check failed Jan 12 13:09:06 crc kubenswrapper[4580]: I0112 13:09:06.296802 4580 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-phs5z" podUID="f0bdcb8e-d435-41c5-a140-1b17752fa7ec" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 12 13:09:06 crc kubenswrapper[4580]: I0112 13:09:06.364161 4580 generic.go:334] "Generic (PLEG): container finished" podID="ba1d2d80-ce4b-475b-9405-955ccd5b12a2" containerID="b7ff4e8ed8dc6ad278dc4562c8cf968bad0609eefaa73cdff3a31b5da3651b33" exitCode=0 Jan 12 13:09:06 crc kubenswrapper[4580]: I0112 13:09:06.364223 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"ba1d2d80-ce4b-475b-9405-955ccd5b12a2","Type":"ContainerDied","Data":"b7ff4e8ed8dc6ad278dc4562c8cf968bad0609eefaa73cdff3a31b5da3651b33"} Jan 12 13:09:06 crc kubenswrapper[4580]: I0112 13:09:06.543588 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-mw8xc" Jan 12 13:09:06 crc kubenswrapper[4580]: I0112 13:09:06.547877 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-mw8xc" Jan 12 13:09:07 crc kubenswrapper[4580]: I0112 13:09:07.152794 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:09:07 crc kubenswrapper[4580]: I0112 13:09:07.152835 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:09:07 crc kubenswrapper[4580]: I0112 13:09:07.152898 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 12 13:09:07 crc kubenswrapper[4580]: I0112 13:09:07.152921 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 12 13:09:07 crc kubenswrapper[4580]: I0112 13:09:07.153734 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:09:07 crc kubenswrapper[4580]: I0112 13:09:07.158901 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:09:07 crc kubenswrapper[4580]: I0112 13:09:07.159729 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 12 13:09:07 crc kubenswrapper[4580]: I0112 13:09:07.159924 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 12 13:09:07 crc kubenswrapper[4580]: I0112 13:09:07.293704 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 12 13:09:07 crc kubenswrapper[4580]: I0112 13:09:07.297608 4580 patch_prober.go:28] interesting pod/router-default-5444994796-phs5z container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 12 13:09:07 crc kubenswrapper[4580]: [-]has-synced failed: reason withheld Jan 12 13:09:07 crc kubenswrapper[4580]: [+]process-running ok Jan 12 13:09:07 crc kubenswrapper[4580]: healthz check failed Jan 12 13:09:07 crc kubenswrapper[4580]: I0112 13:09:07.297648 4580 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-phs5z" podUID="f0bdcb8e-d435-41c5-a140-1b17752fa7ec" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 12 13:09:07 crc kubenswrapper[4580]: I0112 13:09:07.397306 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 12 13:09:07 crc kubenswrapper[4580]: I0112 13:09:07.397627 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 12 13:09:07 crc kubenswrapper[4580]: I0112 13:09:07.753911 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 12 13:09:07 crc kubenswrapper[4580]: W0112 13:09:07.791137 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-d0dbfc49f70933f07da4d0acc0cf6973f9977f70a06df73ca698a7efeb201de0 WatchSource:0}: Error finding container d0dbfc49f70933f07da4d0acc0cf6973f9977f70a06df73ca698a7efeb201de0: Status 404 returned error can't find the container with id d0dbfc49f70933f07da4d0acc0cf6973f9977f70a06df73ca698a7efeb201de0 Jan 12 13:09:07 crc kubenswrapper[4580]: I0112 13:09:07.861022 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ba1d2d80-ce4b-475b-9405-955ccd5b12a2-kubelet-dir\") pod \"ba1d2d80-ce4b-475b-9405-955ccd5b12a2\" (UID: \"ba1d2d80-ce4b-475b-9405-955ccd5b12a2\") " Jan 12 13:09:07 crc kubenswrapper[4580]: I0112 13:09:07.861071 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ba1d2d80-ce4b-475b-9405-955ccd5b12a2-kube-api-access\") pod \"ba1d2d80-ce4b-475b-9405-955ccd5b12a2\" (UID: \"ba1d2d80-ce4b-475b-9405-955ccd5b12a2\") " Jan 12 13:09:07 crc kubenswrapper[4580]: I0112 13:09:07.861086 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba1d2d80-ce4b-475b-9405-955ccd5b12a2-kubelet-dir" (OuterVolumeSpecName: 
"kubelet-dir") pod "ba1d2d80-ce4b-475b-9405-955ccd5b12a2" (UID: "ba1d2d80-ce4b-475b-9405-955ccd5b12a2"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 12 13:09:07 crc kubenswrapper[4580]: I0112 13:09:07.861386 4580 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ba1d2d80-ce4b-475b-9405-955ccd5b12a2-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 12 13:09:07 crc kubenswrapper[4580]: I0112 13:09:07.869909 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba1d2d80-ce4b-475b-9405-955ccd5b12a2-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ba1d2d80-ce4b-475b-9405-955ccd5b12a2" (UID: "ba1d2d80-ce4b-475b-9405-955ccd5b12a2"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:09:07 crc kubenswrapper[4580]: W0112 13:09:07.897718 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-2c506771078cd4cf269ba8a46671dc654e549181525dadf2a41a639d59978b3b WatchSource:0}: Error finding container 2c506771078cd4cf269ba8a46671dc654e549181525dadf2a41a639d59978b3b: Status 404 returned error can't find the container with id 2c506771078cd4cf269ba8a46671dc654e549181525dadf2a41a639d59978b3b Jan 12 13:09:07 crc kubenswrapper[4580]: I0112 13:09:07.963683 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ba1d2d80-ce4b-475b-9405-955ccd5b12a2-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 12 13:09:08 crc kubenswrapper[4580]: I0112 13:09:08.216665 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-5tdwv" Jan 12 13:09:08 crc kubenswrapper[4580]: I0112 13:09:08.216738 4580 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-5tdwv" Jan 12 13:09:08 crc kubenswrapper[4580]: I0112 13:09:08.219841 4580 patch_prober.go:28] interesting pod/console-f9d7485db-5tdwv container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.16:8443/health\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Jan 12 13:09:08 crc kubenswrapper[4580]: I0112 13:09:08.220334 4580 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-5tdwv" podUID="90150eba-9b4f-485f-97c3-89d410cb5851" containerName="console" probeResult="failure" output="Get \"https://10.217.0.16:8443/health\": dial tcp 10.217.0.16:8443: connect: connection refused" Jan 12 13:09:08 crc kubenswrapper[4580]: I0112 13:09:08.278696 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-2hzdj" Jan 12 13:09:08 crc kubenswrapper[4580]: I0112 13:09:08.298166 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-phs5z" Jan 12 13:09:08 crc kubenswrapper[4580]: I0112 13:09:08.302794 4580 patch_prober.go:28] interesting pod/router-default-5444994796-phs5z container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 12 13:09:08 crc kubenswrapper[4580]: [-]has-synced failed: reason withheld Jan 12 13:09:08 crc kubenswrapper[4580]: [+]process-running ok Jan 12 13:09:08 crc kubenswrapper[4580]: healthz check failed Jan 12 13:09:08 crc kubenswrapper[4580]: I0112 13:09:08.302846 4580 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-phs5z" podUID="f0bdcb8e-d435-41c5-a140-1b17752fa7ec" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 12 13:09:08 crc 
kubenswrapper[4580]: I0112 13:09:08.406184 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 12 13:09:08 crc kubenswrapper[4580]: I0112 13:09:08.406193 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"ba1d2d80-ce4b-475b-9405-955ccd5b12a2","Type":"ContainerDied","Data":"b104a5ddb025a148bc279b513575280d37c6426c7d326992135ffcb0b86272ac"} Jan 12 13:09:08 crc kubenswrapper[4580]: I0112 13:09:08.406611 4580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b104a5ddb025a148bc279b513575280d37c6426c7d326992135ffcb0b86272ac" Jan 12 13:09:08 crc kubenswrapper[4580]: I0112 13:09:08.410079 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"74f2048dff61fa90541615d3b25f2dfc78db3f34acf782241c02021be9069d72"} Jan 12 13:09:08 crc kubenswrapper[4580]: I0112 13:09:08.410206 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"d0dbfc49f70933f07da4d0acc0cf6973f9977f70a06df73ca698a7efeb201de0"} Jan 12 13:09:08 crc kubenswrapper[4580]: I0112 13:09:08.411310 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 12 13:09:08 crc kubenswrapper[4580]: I0112 13:09:08.416496 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"023bf1b932e83c644f4bf3faa999e48ea597fe717deb18df200f101e8e1336f3"} Jan 12 13:09:08 crc kubenswrapper[4580]: I0112 
13:09:08.416535 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"2c506771078cd4cf269ba8a46671dc654e549181525dadf2a41a639d59978b3b"} Jan 12 13:09:08 crc kubenswrapper[4580]: I0112 13:09:08.419267 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"980fdf3566cb7ca39d29709a900c161870d3352078bcf12899640b9433e29fdf"} Jan 12 13:09:08 crc kubenswrapper[4580]: I0112 13:09:08.419292 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"b9741020ed4b6deca3ac9ade624c6f78e5c02194389b28006f7b85634bf228db"} Jan 12 13:09:09 crc kubenswrapper[4580]: I0112 13:09:09.317760 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-phs5z" Jan 12 13:09:09 crc kubenswrapper[4580]: I0112 13:09:09.327743 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-phs5z" Jan 12 13:09:10 crc kubenswrapper[4580]: I0112 13:09:10.096895 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-c7ntw" Jan 12 13:09:10 crc kubenswrapper[4580]: I0112 13:09:10.555097 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 12 13:09:10 crc kubenswrapper[4580]: E0112 13:09:10.558226 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba1d2d80-ce4b-475b-9405-955ccd5b12a2" containerName="pruner" Jan 12 13:09:10 crc kubenswrapper[4580]: I0112 13:09:10.558249 4580 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="ba1d2d80-ce4b-475b-9405-955ccd5b12a2" containerName="pruner" Jan 12 13:09:10 crc kubenswrapper[4580]: E0112 13:09:10.558260 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="037a95c2-1119-4fd8-8499-682fba2f03ea" containerName="collect-profiles" Jan 12 13:09:10 crc kubenswrapper[4580]: I0112 13:09:10.558266 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="037a95c2-1119-4fd8-8499-682fba2f03ea" containerName="collect-profiles" Jan 12 13:09:10 crc kubenswrapper[4580]: I0112 13:09:10.558362 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba1d2d80-ce4b-475b-9405-955ccd5b12a2" containerName="pruner" Jan 12 13:09:10 crc kubenswrapper[4580]: I0112 13:09:10.558372 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="037a95c2-1119-4fd8-8499-682fba2f03ea" containerName="collect-profiles" Jan 12 13:09:10 crc kubenswrapper[4580]: I0112 13:09:10.559608 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 12 13:09:10 crc kubenswrapper[4580]: I0112 13:09:10.566301 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 12 13:09:10 crc kubenswrapper[4580]: I0112 13:09:10.567264 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 12 13:09:10 crc kubenswrapper[4580]: I0112 13:09:10.570648 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 12 13:09:10 crc kubenswrapper[4580]: I0112 13:09:10.611836 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ed9c5a09-3a26-4668-98a4-37b6f8df9aeb-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"ed9c5a09-3a26-4668-98a4-37b6f8df9aeb\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 12 13:09:10 crc kubenswrapper[4580]: I0112 13:09:10.612226 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed9c5a09-3a26-4668-98a4-37b6f8df9aeb-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"ed9c5a09-3a26-4668-98a4-37b6f8df9aeb\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 12 13:09:10 crc kubenswrapper[4580]: I0112 13:09:10.716016 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ed9c5a09-3a26-4668-98a4-37b6f8df9aeb-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"ed9c5a09-3a26-4668-98a4-37b6f8df9aeb\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 12 13:09:10 crc kubenswrapper[4580]: I0112 13:09:10.716174 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/ed9c5a09-3a26-4668-98a4-37b6f8df9aeb-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"ed9c5a09-3a26-4668-98a4-37b6f8df9aeb\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 12 13:09:10 crc kubenswrapper[4580]: I0112 13:09:10.716308 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed9c5a09-3a26-4668-98a4-37b6f8df9aeb-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"ed9c5a09-3a26-4668-98a4-37b6f8df9aeb\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 12 13:09:10 crc kubenswrapper[4580]: I0112 13:09:10.744283 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed9c5a09-3a26-4668-98a4-37b6f8df9aeb-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"ed9c5a09-3a26-4668-98a4-37b6f8df9aeb\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 12 13:09:10 crc kubenswrapper[4580]: I0112 13:09:10.881848 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 12 13:09:13 crc kubenswrapper[4580]: I0112 13:09:13.761365 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 12 13:09:14 crc kubenswrapper[4580]: I0112 13:09:14.491035 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"ed9c5a09-3a26-4668-98a4-37b6f8df9aeb","Type":"ContainerStarted","Data":"167daa64889891c637780338c33e689608bd991d93d5d294184ac39694229484"} Jan 12 13:09:14 crc kubenswrapper[4580]: I0112 13:09:14.491390 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"ed9c5a09-3a26-4668-98a4-37b6f8df9aeb","Type":"ContainerStarted","Data":"e0db59bb698a8b6436f265bf4d842e924ef0b14e61393bfdf3fe7cdcad4525c2"} Jan 12 13:09:14 crc kubenswrapper[4580]: I0112 13:09:14.506606 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=4.506586582 podStartE2EDuration="4.506586582s" podCreationTimestamp="2026-01-12 13:09:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:09:14.504896948 +0000 UTC m=+153.549115638" watchObservedRunningTime="2026-01-12 13:09:14.506586582 +0000 UTC m=+153.550805272" Jan 12 13:09:15 crc kubenswrapper[4580]: I0112 13:09:15.496851 4580 generic.go:334] "Generic (PLEG): container finished" podID="ed9c5a09-3a26-4668-98a4-37b6f8df9aeb" containerID="167daa64889891c637780338c33e689608bd991d93d5d294184ac39694229484" exitCode=0 Jan 12 13:09:15 crc kubenswrapper[4580]: I0112 13:09:15.496895 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" 
event={"ID":"ed9c5a09-3a26-4668-98a4-37b6f8df9aeb","Type":"ContainerDied","Data":"167daa64889891c637780338c33e689608bd991d93d5d294184ac39694229484"} Jan 12 13:09:16 crc kubenswrapper[4580]: I0112 13:09:16.949834 4580 patch_prober.go:28] interesting pod/machine-config-daemon-hdz6l container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 12 13:09:16 crc kubenswrapper[4580]: I0112 13:09:16.950229 4580 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 12 13:09:18 crc kubenswrapper[4580]: I0112 13:09:18.129263 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cbltx"] Jan 12 13:09:18 crc kubenswrapper[4580]: I0112 13:09:18.129729 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-cbltx" podUID="73c37e67-6b89-4830-8723-f6716badcaa4" containerName="controller-manager" containerID="cri-o://2a8dfc4d1c0473219ffec89e724fe4978621452704d470978627e6bd00bc21ee" gracePeriod=30 Jan 12 13:09:18 crc kubenswrapper[4580]: I0112 13:09:18.155802 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-z6r47"] Jan 12 13:09:18 crc kubenswrapper[4580]: I0112 13:09:18.156065 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z6r47" podUID="e3a22133-fac4-42ba-9967-974e82a855aa" containerName="route-controller-manager" 
containerID="cri-o://c081e543a2775ece57b9cdf5e30f971e0a483c8f628ee1a88daabaabf4c4bc09" gracePeriod=30
Jan 12 13:09:18 crc kubenswrapper[4580]: I0112 13:09:18.216153 4580 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-cbltx container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body=
Jan 12 13:09:18 crc kubenswrapper[4580]: I0112 13:09:18.216208 4580 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-cbltx" podUID="73c37e67-6b89-4830-8723-f6716badcaa4" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused"
Jan 12 13:09:18 crc kubenswrapper[4580]: I0112 13:09:18.223522 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-5tdwv"
Jan 12 13:09:18 crc kubenswrapper[4580]: I0112 13:09:18.228649 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-5tdwv"
Jan 12 13:09:18 crc kubenswrapper[4580]: I0112 13:09:18.511583 4580 generic.go:334] "Generic (PLEG): container finished" podID="73c37e67-6b89-4830-8723-f6716badcaa4" containerID="2a8dfc4d1c0473219ffec89e724fe4978621452704d470978627e6bd00bc21ee" exitCode=0
Jan 12 13:09:18 crc kubenswrapper[4580]: I0112 13:09:18.511785 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-cbltx" event={"ID":"73c37e67-6b89-4830-8723-f6716badcaa4","Type":"ContainerDied","Data":"2a8dfc4d1c0473219ffec89e724fe4978621452704d470978627e6bd00bc21ee"}
Jan 12 13:09:18 crc kubenswrapper[4580]: I0112 13:09:18.513399 4580 generic.go:334] "Generic (PLEG): container finished" podID="e3a22133-fac4-42ba-9967-974e82a855aa" containerID="c081e543a2775ece57b9cdf5e30f971e0a483c8f628ee1a88daabaabf4c4bc09" exitCode=0
Jan 12 13:09:18 crc kubenswrapper[4580]: I0112 13:09:18.513974 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z6r47" event={"ID":"e3a22133-fac4-42ba-9967-974e82a855aa","Type":"ContainerDied","Data":"c081e543a2775ece57b9cdf5e30f971e0a483c8f628ee1a88daabaabf4c4bc09"}
Jan 12 13:09:19 crc kubenswrapper[4580]: I0112 13:09:19.050497 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 12 13:09:19 crc kubenswrapper[4580]: I0112 13:09:19.155609 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ed9c5a09-3a26-4668-98a4-37b6f8df9aeb-kubelet-dir\") pod \"ed9c5a09-3a26-4668-98a4-37b6f8df9aeb\" (UID: \"ed9c5a09-3a26-4668-98a4-37b6f8df9aeb\") "
Jan 12 13:09:19 crc kubenswrapper[4580]: I0112 13:09:19.155739 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed9c5a09-3a26-4668-98a4-37b6f8df9aeb-kube-api-access\") pod \"ed9c5a09-3a26-4668-98a4-37b6f8df9aeb\" (UID: \"ed9c5a09-3a26-4668-98a4-37b6f8df9aeb\") "
Jan 12 13:09:19 crc kubenswrapper[4580]: I0112 13:09:19.156396 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed9c5a09-3a26-4668-98a4-37b6f8df9aeb-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ed9c5a09-3a26-4668-98a4-37b6f8df9aeb" (UID: "ed9c5a09-3a26-4668-98a4-37b6f8df9aeb"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 12 13:09:19 crc kubenswrapper[4580]: I0112 13:09:19.162682 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed9c5a09-3a26-4668-98a4-37b6f8df9aeb-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ed9c5a09-3a26-4668-98a4-37b6f8df9aeb" (UID: "ed9c5a09-3a26-4668-98a4-37b6f8df9aeb"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 13:09:19 crc kubenswrapper[4580]: I0112 13:09:19.257613 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed9c5a09-3a26-4668-98a4-37b6f8df9aeb-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 12 13:09:19 crc kubenswrapper[4580]: I0112 13:09:19.257646 4580 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ed9c5a09-3a26-4668-98a4-37b6f8df9aeb-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 12 13:09:19 crc kubenswrapper[4580]: I0112 13:09:19.522412 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"ed9c5a09-3a26-4668-98a4-37b6f8df9aeb","Type":"ContainerDied","Data":"e0db59bb698a8b6436f265bf4d842e924ef0b14e61393bfdf3fe7cdcad4525c2"}
Jan 12 13:09:19 crc kubenswrapper[4580]: I0112 13:09:19.522440 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 12 13:09:19 crc kubenswrapper[4580]: I0112 13:09:19.522463 4580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0db59bb698a8b6436f265bf4d842e924ef0b14e61393bfdf3fe7cdcad4525c2"
Jan 12 13:09:20 crc kubenswrapper[4580]: I0112 13:09:20.975862 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5066d8fa-2cee-4764-a817-b819d3876638-metrics-certs\") pod \"network-metrics-daemon-jw27h\" (UID: \"5066d8fa-2cee-4764-a817-b819d3876638\") " pod="openshift-multus/network-metrics-daemon-jw27h"
Jan 12 13:09:20 crc kubenswrapper[4580]: I0112 13:09:20.981046 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5066d8fa-2cee-4764-a817-b819d3876638-metrics-certs\") pod \"network-metrics-daemon-jw27h\" (UID: \"5066d8fa-2cee-4764-a817-b819d3876638\") " pod="openshift-multus/network-metrics-daemon-jw27h"
Jan 12 13:09:21 crc kubenswrapper[4580]: I0112 13:09:21.099757 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jw27h"
Jan 12 13:09:23 crc kubenswrapper[4580]: I0112 13:09:23.263981 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z6r47"
Jan 12 13:09:23 crc kubenswrapper[4580]: I0112 13:09:23.267272 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-cbltx"
Jan 12 13:09:23 crc kubenswrapper[4580]: I0112 13:09:23.302895 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3a22133-fac4-42ba-9967-974e82a855aa-serving-cert\") pod \"e3a22133-fac4-42ba-9967-974e82a855aa\" (UID: \"e3a22133-fac4-42ba-9967-974e82a855aa\") "
Jan 12 13:09:23 crc kubenswrapper[4580]: I0112 13:09:23.303261 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e3a22133-fac4-42ba-9967-974e82a855aa-client-ca\") pod \"e3a22133-fac4-42ba-9967-974e82a855aa\" (UID: \"e3a22133-fac4-42ba-9967-974e82a855aa\") "
Jan 12 13:09:23 crc kubenswrapper[4580]: I0112 13:09:23.305067 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73c37e67-6b89-4830-8723-f6716badcaa4-config\") pod \"73c37e67-6b89-4830-8723-f6716badcaa4\" (UID: \"73c37e67-6b89-4830-8723-f6716badcaa4\") "
Jan 12 13:09:23 crc kubenswrapper[4580]: I0112 13:09:23.305223 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3a22133-fac4-42ba-9967-974e82a855aa-client-ca" (OuterVolumeSpecName: "client-ca") pod "e3a22133-fac4-42ba-9967-974e82a855aa" (UID: "e3a22133-fac4-42ba-9967-974e82a855aa"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:09:23 crc kubenswrapper[4580]: I0112 13:09:23.305464 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/73c37e67-6b89-4830-8723-f6716badcaa4-serving-cert\") pod \"73c37e67-6b89-4830-8723-f6716badcaa4\" (UID: \"73c37e67-6b89-4830-8723-f6716badcaa4\") "
Jan 12 13:09:23 crc kubenswrapper[4580]: I0112 13:09:23.305506 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/73c37e67-6b89-4830-8723-f6716badcaa4-proxy-ca-bundles\") pod \"73c37e67-6b89-4830-8723-f6716badcaa4\" (UID: \"73c37e67-6b89-4830-8723-f6716badcaa4\") "
Jan 12 13:09:23 crc kubenswrapper[4580]: I0112 13:09:23.305536 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cp8nw\" (UniqueName: \"kubernetes.io/projected/e3a22133-fac4-42ba-9967-974e82a855aa-kube-api-access-cp8nw\") pod \"e3a22133-fac4-42ba-9967-974e82a855aa\" (UID: \"e3a22133-fac4-42ba-9967-974e82a855aa\") "
Jan 12 13:09:23 crc kubenswrapper[4580]: I0112 13:09:23.305558 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxxm9\" (UniqueName: \"kubernetes.io/projected/73c37e67-6b89-4830-8723-f6716badcaa4-kube-api-access-gxxm9\") pod \"73c37e67-6b89-4830-8723-f6716badcaa4\" (UID: \"73c37e67-6b89-4830-8723-f6716badcaa4\") "
Jan 12 13:09:23 crc kubenswrapper[4580]: I0112 13:09:23.305584 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3a22133-fac4-42ba-9967-974e82a855aa-config\") pod \"e3a22133-fac4-42ba-9967-974e82a855aa\" (UID: \"e3a22133-fac4-42ba-9967-974e82a855aa\") "
Jan 12 13:09:23 crc kubenswrapper[4580]: I0112 13:09:23.305627 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/73c37e67-6b89-4830-8723-f6716badcaa4-client-ca\") pod \"73c37e67-6b89-4830-8723-f6716badcaa4\" (UID: \"73c37e67-6b89-4830-8723-f6716badcaa4\") "
Jan 12 13:09:23 crc kubenswrapper[4580]: I0112 13:09:23.305902 4580 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e3a22133-fac4-42ba-9967-974e82a855aa-client-ca\") on node \"crc\" DevicePath \"\""
Jan 12 13:09:23 crc kubenswrapper[4580]: I0112 13:09:23.306426 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73c37e67-6b89-4830-8723-f6716badcaa4-config" (OuterVolumeSpecName: "config") pod "73c37e67-6b89-4830-8723-f6716badcaa4" (UID: "73c37e67-6b89-4830-8723-f6716badcaa4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:09:23 crc kubenswrapper[4580]: I0112 13:09:23.306440 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73c37e67-6b89-4830-8723-f6716badcaa4-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "73c37e67-6b89-4830-8723-f6716badcaa4" (UID: "73c37e67-6b89-4830-8723-f6716badcaa4"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:09:23 crc kubenswrapper[4580]: I0112 13:09:23.306611 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3a22133-fac4-42ba-9967-974e82a855aa-config" (OuterVolumeSpecName: "config") pod "e3a22133-fac4-42ba-9967-974e82a855aa" (UID: "e3a22133-fac4-42ba-9967-974e82a855aa"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:09:23 crc kubenswrapper[4580]: I0112 13:09:23.308476 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73c37e67-6b89-4830-8723-f6716badcaa4-client-ca" (OuterVolumeSpecName: "client-ca") pod "73c37e67-6b89-4830-8723-f6716badcaa4" (UID: "73c37e67-6b89-4830-8723-f6716badcaa4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:09:23 crc kubenswrapper[4580]: I0112 13:09:23.310990 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73c37e67-6b89-4830-8723-f6716badcaa4-kube-api-access-gxxm9" (OuterVolumeSpecName: "kube-api-access-gxxm9") pod "73c37e67-6b89-4830-8723-f6716badcaa4" (UID: "73c37e67-6b89-4830-8723-f6716badcaa4"). InnerVolumeSpecName "kube-api-access-gxxm9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 13:09:23 crc kubenswrapper[4580]: I0112 13:09:23.311783 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3a22133-fac4-42ba-9967-974e82a855aa-kube-api-access-cp8nw" (OuterVolumeSpecName: "kube-api-access-cp8nw") pod "e3a22133-fac4-42ba-9967-974e82a855aa" (UID: "e3a22133-fac4-42ba-9967-974e82a855aa"). InnerVolumeSpecName "kube-api-access-cp8nw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 13:09:23 crc kubenswrapper[4580]: I0112 13:09:23.313875 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3a22133-fac4-42ba-9967-974e82a855aa-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e3a22133-fac4-42ba-9967-974e82a855aa" (UID: "e3a22133-fac4-42ba-9967-974e82a855aa"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:09:23 crc kubenswrapper[4580]: I0112 13:09:23.314931 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73c37e67-6b89-4830-8723-f6716badcaa4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "73c37e67-6b89-4830-8723-f6716badcaa4" (UID: "73c37e67-6b89-4830-8723-f6716badcaa4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:09:23 crc kubenswrapper[4580]: I0112 13:09:23.316172 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9f876bbff-cdcmk"]
Jan 12 13:09:23 crc kubenswrapper[4580]: E0112 13:09:23.316503 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73c37e67-6b89-4830-8723-f6716badcaa4" containerName="controller-manager"
Jan 12 13:09:23 crc kubenswrapper[4580]: I0112 13:09:23.316530 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="73c37e67-6b89-4830-8723-f6716badcaa4" containerName="controller-manager"
Jan 12 13:09:23 crc kubenswrapper[4580]: E0112 13:09:23.316543 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed9c5a09-3a26-4668-98a4-37b6f8df9aeb" containerName="pruner"
Jan 12 13:09:23 crc kubenswrapper[4580]: I0112 13:09:23.316551 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed9c5a09-3a26-4668-98a4-37b6f8df9aeb" containerName="pruner"
Jan 12 13:09:23 crc kubenswrapper[4580]: E0112 13:09:23.316561 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3a22133-fac4-42ba-9967-974e82a855aa" containerName="route-controller-manager"
Jan 12 13:09:23 crc kubenswrapper[4580]: I0112 13:09:23.316568 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3a22133-fac4-42ba-9967-974e82a855aa" containerName="route-controller-manager"
Jan 12 13:09:23 crc kubenswrapper[4580]: I0112 13:09:23.316685 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed9c5a09-3a26-4668-98a4-37b6f8df9aeb" containerName="pruner"
Jan 12 13:09:23 crc kubenswrapper[4580]: I0112 13:09:23.316696 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3a22133-fac4-42ba-9967-974e82a855aa" containerName="route-controller-manager"
Jan 12 13:09:23 crc kubenswrapper[4580]: I0112 13:09:23.316703 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="73c37e67-6b89-4830-8723-f6716badcaa4" containerName="controller-manager"
Jan 12 13:09:23 crc kubenswrapper[4580]: I0112 13:09:23.318795 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-9f876bbff-cdcmk"
Jan 12 13:09:23 crc kubenswrapper[4580]: I0112 13:09:23.319960 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9f876bbff-cdcmk"]
Jan 12 13:09:23 crc kubenswrapper[4580]: I0112 13:09:23.406863 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d4e9f5cf-4927-4b13-969a-d11472839c48-client-ca\") pod \"route-controller-manager-9f876bbff-cdcmk\" (UID: \"d4e9f5cf-4927-4b13-969a-d11472839c48\") " pod="openshift-route-controller-manager/route-controller-manager-9f876bbff-cdcmk"
Jan 12 13:09:23 crc kubenswrapper[4580]: I0112 13:09:23.407063 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4e9f5cf-4927-4b13-969a-d11472839c48-serving-cert\") pod \"route-controller-manager-9f876bbff-cdcmk\" (UID: \"d4e9f5cf-4927-4b13-969a-d11472839c48\") " pod="openshift-route-controller-manager/route-controller-manager-9f876bbff-cdcmk"
Jan 12 13:09:23 crc kubenswrapper[4580]: I0112 13:09:23.407199 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw8hm\" (UniqueName: \"kubernetes.io/projected/d4e9f5cf-4927-4b13-969a-d11472839c48-kube-api-access-kw8hm\") pod \"route-controller-manager-9f876bbff-cdcmk\" (UID: \"d4e9f5cf-4927-4b13-969a-d11472839c48\") " pod="openshift-route-controller-manager/route-controller-manager-9f876bbff-cdcmk"
Jan 12 13:09:23 crc kubenswrapper[4580]: I0112 13:09:23.407281 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4e9f5cf-4927-4b13-969a-d11472839c48-config\") pod \"route-controller-manager-9f876bbff-cdcmk\" (UID: \"d4e9f5cf-4927-4b13-969a-d11472839c48\") " pod="openshift-route-controller-manager/route-controller-manager-9f876bbff-cdcmk"
Jan 12 13:09:23 crc kubenswrapper[4580]: I0112 13:09:23.407403 4580 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73c37e67-6b89-4830-8723-f6716badcaa4-config\") on node \"crc\" DevicePath \"\""
Jan 12 13:09:23 crc kubenswrapper[4580]: I0112 13:09:23.407474 4580 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/73c37e67-6b89-4830-8723-f6716badcaa4-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 12 13:09:23 crc kubenswrapper[4580]: I0112 13:09:23.407526 4580 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/73c37e67-6b89-4830-8723-f6716badcaa4-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 12 13:09:23 crc kubenswrapper[4580]: I0112 13:09:23.407575 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cp8nw\" (UniqueName: \"kubernetes.io/projected/e3a22133-fac4-42ba-9967-974e82a855aa-kube-api-access-cp8nw\") on node \"crc\" DevicePath \"\""
Jan 12 13:09:23 crc kubenswrapper[4580]: I0112 13:09:23.407622 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxxm9\" (UniqueName: \"kubernetes.io/projected/73c37e67-6b89-4830-8723-f6716badcaa4-kube-api-access-gxxm9\") on node \"crc\" DevicePath \"\""
Jan 12 13:09:23 crc kubenswrapper[4580]: I0112 13:09:23.407669 4580 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3a22133-fac4-42ba-9967-974e82a855aa-config\") on node \"crc\" DevicePath \"\""
Jan 12 13:09:23 crc kubenswrapper[4580]: I0112 13:09:23.407739 4580 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/73c37e67-6b89-4830-8723-f6716badcaa4-client-ca\") on node \"crc\" DevicePath \"\""
Jan 12 13:09:23 crc kubenswrapper[4580]: I0112 13:09:23.407797 4580 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3a22133-fac4-42ba-9967-974e82a855aa-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 12 13:09:23 crc kubenswrapper[4580]: I0112 13:09:23.508551 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d4e9f5cf-4927-4b13-969a-d11472839c48-client-ca\") pod \"route-controller-manager-9f876bbff-cdcmk\" (UID: \"d4e9f5cf-4927-4b13-969a-d11472839c48\") " pod="openshift-route-controller-manager/route-controller-manager-9f876bbff-cdcmk"
Jan 12 13:09:23 crc kubenswrapper[4580]: I0112 13:09:23.508619 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4e9f5cf-4927-4b13-969a-d11472839c48-serving-cert\") pod \"route-controller-manager-9f876bbff-cdcmk\" (UID: \"d4e9f5cf-4927-4b13-969a-d11472839c48\") " pod="openshift-route-controller-manager/route-controller-manager-9f876bbff-cdcmk"
Jan 12 13:09:23 crc kubenswrapper[4580]: I0112 13:09:23.508676 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kw8hm\" (UniqueName: \"kubernetes.io/projected/d4e9f5cf-4927-4b13-969a-d11472839c48-kube-api-access-kw8hm\") pod \"route-controller-manager-9f876bbff-cdcmk\" (UID: \"d4e9f5cf-4927-4b13-969a-d11472839c48\") " pod="openshift-route-controller-manager/route-controller-manager-9f876bbff-cdcmk"
Jan 12 13:09:23 crc kubenswrapper[4580]: I0112 13:09:23.508701 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4e9f5cf-4927-4b13-969a-d11472839c48-config\") pod \"route-controller-manager-9f876bbff-cdcmk\" (UID: \"d4e9f5cf-4927-4b13-969a-d11472839c48\") " pod="openshift-route-controller-manager/route-controller-manager-9f876bbff-cdcmk"
Jan 12 13:09:23 crc kubenswrapper[4580]: I0112 13:09:23.509774 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d4e9f5cf-4927-4b13-969a-d11472839c48-client-ca\") pod \"route-controller-manager-9f876bbff-cdcmk\" (UID: \"d4e9f5cf-4927-4b13-969a-d11472839c48\") " pod="openshift-route-controller-manager/route-controller-manager-9f876bbff-cdcmk"
Jan 12 13:09:23 crc kubenswrapper[4580]: I0112 13:09:23.510062 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4e9f5cf-4927-4b13-969a-d11472839c48-config\") pod \"route-controller-manager-9f876bbff-cdcmk\" (UID: \"d4e9f5cf-4927-4b13-969a-d11472839c48\") " pod="openshift-route-controller-manager/route-controller-manager-9f876bbff-cdcmk"
Jan 12 13:09:23 crc kubenswrapper[4580]: I0112 13:09:23.526648 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4e9f5cf-4927-4b13-969a-d11472839c48-serving-cert\") pod \"route-controller-manager-9f876bbff-cdcmk\" (UID: \"d4e9f5cf-4927-4b13-969a-d11472839c48\") " pod="openshift-route-controller-manager/route-controller-manager-9f876bbff-cdcmk"
Jan 12 13:09:23 crc kubenswrapper[4580]: I0112 13:09:23.529009 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kw8hm\" (UniqueName: \"kubernetes.io/projected/d4e9f5cf-4927-4b13-969a-d11472839c48-kube-api-access-kw8hm\") pod \"route-controller-manager-9f876bbff-cdcmk\" (UID: \"d4e9f5cf-4927-4b13-969a-d11472839c48\") " pod="openshift-route-controller-manager/route-controller-manager-9f876bbff-cdcmk"
Jan 12 13:09:23 crc kubenswrapper[4580]: I0112 13:09:23.548707 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z6r47"
Jan 12 13:09:23 crc kubenswrapper[4580]: I0112 13:09:23.548697 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z6r47" event={"ID":"e3a22133-fac4-42ba-9967-974e82a855aa","Type":"ContainerDied","Data":"847437b89e4f90c8f8448ad4611ff6470e2370a2b52af232bcd8adc533fcbe6c"}
Jan 12 13:09:23 crc kubenswrapper[4580]: I0112 13:09:23.548799 4580 scope.go:117] "RemoveContainer" containerID="c081e543a2775ece57b9cdf5e30f971e0a483c8f628ee1a88daabaabf4c4bc09"
Jan 12 13:09:23 crc kubenswrapper[4580]: I0112 13:09:23.554256 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-cbltx" event={"ID":"73c37e67-6b89-4830-8723-f6716badcaa4","Type":"ContainerDied","Data":"608975d1379407e7ac6ff33943b3e60b89a9f1ecd6f6df40e6e71b2500749d08"}
Jan 12 13:09:23 crc kubenswrapper[4580]: I0112 13:09:23.554329 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-cbltx"
Jan 12 13:09:23 crc kubenswrapper[4580]: I0112 13:09:23.576308 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-z6r47"]
Jan 12 13:09:23 crc kubenswrapper[4580]: I0112 13:09:23.580059 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-z6r47"]
Jan 12 13:09:23 crc kubenswrapper[4580]: I0112 13:09:23.585363 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cbltx"]
Jan 12 13:09:23 crc kubenswrapper[4580]: I0112 13:09:23.588678 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cbltx"]
Jan 12 13:09:23 crc kubenswrapper[4580]: I0112 13:09:23.649686 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-9f876bbff-cdcmk"
Jan 12 13:09:23 crc kubenswrapper[4580]: I0112 13:09:23.826999 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl"
Jan 12 13:09:24 crc kubenswrapper[4580]: I0112 13:09:24.620865 4580 scope.go:117] "RemoveContainer" containerID="2a8dfc4d1c0473219ffec89e724fe4978621452704d470978627e6bd00bc21ee"
Jan 12 13:09:25 crc kubenswrapper[4580]: I0112 13:09:25.040634 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9f876bbff-cdcmk"]
Jan 12 13:09:25 crc kubenswrapper[4580]: W0112 13:09:25.048591 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4e9f5cf_4927_4b13_969a_d11472839c48.slice/crio-a4f262f8c945bc37d83c3467f2c7f0d613871abdbf9542f73d985ed1563b29d3 WatchSource:0}: Error finding container a4f262f8c945bc37d83c3467f2c7f0d613871abdbf9542f73d985ed1563b29d3: Status 404 returned error can't find the container with id a4f262f8c945bc37d83c3467f2c7f0d613871abdbf9542f73d985ed1563b29d3
Jan 12 13:09:25 crc kubenswrapper[4580]: I0112 13:09:25.104514 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-jw27h"]
Jan 12 13:09:25 crc kubenswrapper[4580]: W0112 13:09:25.166628 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5066d8fa_2cee_4764_a817_b819d3876638.slice/crio-f2af0f9777b59718c5fbdf26d7e50821054b9720b35b5476783b6f17500c164b WatchSource:0}: Error finding container f2af0f9777b59718c5fbdf26d7e50821054b9720b35b5476783b6f17500c164b: Status 404 returned error can't find the container with id f2af0f9777b59718c5fbdf26d7e50821054b9720b35b5476783b6f17500c164b
Jan 12 13:09:25 crc kubenswrapper[4580]: I0112 13:09:25.286906 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73c37e67-6b89-4830-8723-f6716badcaa4" path="/var/lib/kubelet/pods/73c37e67-6b89-4830-8723-f6716badcaa4/volumes"
Jan 12 13:09:25 crc kubenswrapper[4580]: I0112 13:09:25.287536 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3a22133-fac4-42ba-9967-974e82a855aa" path="/var/lib/kubelet/pods/e3a22133-fac4-42ba-9967-974e82a855aa/volumes"
Jan 12 13:09:25 crc kubenswrapper[4580]: I0112 13:09:25.574157 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-jw27h" event={"ID":"5066d8fa-2cee-4764-a817-b819d3876638","Type":"ContainerStarted","Data":"9a4bd808bf22ee507cfea23b09084707deb421354597717041ca9e6c02dff702"}
Jan 12 13:09:25 crc kubenswrapper[4580]: I0112 13:09:25.574210 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-jw27h" event={"ID":"5066d8fa-2cee-4764-a817-b819d3876638","Type":"ContainerStarted","Data":"f2af0f9777b59718c5fbdf26d7e50821054b9720b35b5476783b6f17500c164b"}
Jan 12 13:09:25 crc kubenswrapper[4580]: I0112 13:09:25.576523 4580 generic.go:334] "Generic (PLEG): container finished" podID="dcee830f-a8da-4e16-95ca-fdaa8dbd86df" containerID="e24b5a4d83c39e5b87b31f59f2f73fad460ed96d967887c76282cc81b0c3b2ad" exitCode=0
Jan 12 13:09:25 crc kubenswrapper[4580]: I0112 13:09:25.576622 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cckj6" event={"ID":"dcee830f-a8da-4e16-95ca-fdaa8dbd86df","Type":"ContainerDied","Data":"e24b5a4d83c39e5b87b31f59f2f73fad460ed96d967887c76282cc81b0c3b2ad"}
Jan 12 13:09:25 crc kubenswrapper[4580]: I0112 13:09:25.578731 4580 generic.go:334] "Generic (PLEG): container finished" podID="0fb0ae3e-224b-4dba-8e3d-783df7049f05" containerID="35dc66437144d6ef207df57ffee28160c2e9827877ae57c9271d90435a4efff8" exitCode=0
Jan 12 13:09:25 crc kubenswrapper[4580]: I0112 13:09:25.578778 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jbkw4" event={"ID":"0fb0ae3e-224b-4dba-8e3d-783df7049f05","Type":"ContainerDied","Data":"35dc66437144d6ef207df57ffee28160c2e9827877ae57c9271d90435a4efff8"}
Jan 12 13:09:25 crc kubenswrapper[4580]: I0112 13:09:25.588773 4580 generic.go:334] "Generic (PLEG): container finished" podID="baa87674-7b1c-4327-92b5-fe9ebaac18f5" containerID="4abdb250dbe9baf2d1f02c64ca883cfeeea98c4fa0089c142348d9fe5c38ea2d" exitCode=0
Jan 12 13:09:25 crc kubenswrapper[4580]: I0112 13:09:25.588812 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lgw9g" event={"ID":"baa87674-7b1c-4327-92b5-fe9ebaac18f5","Type":"ContainerDied","Data":"4abdb250dbe9baf2d1f02c64ca883cfeeea98c4fa0089c142348d9fe5c38ea2d"}
Jan 12 13:09:25 crc kubenswrapper[4580]: I0112 13:09:25.590969 4580 generic.go:334] "Generic (PLEG): container finished" podID="eb8c503e-0907-40aa-a053-72d38311b08e" containerID="7eee35cf648208fcb5060695b329def2341070d699c51d5ff1fe7cd0d7144498" exitCode=0
Jan 12 13:09:25 crc kubenswrapper[4580]: I0112 13:09:25.591016 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2l4gk" event={"ID":"eb8c503e-0907-40aa-a053-72d38311b08e","Type":"ContainerDied","Data":"7eee35cf648208fcb5060695b329def2341070d699c51d5ff1fe7cd0d7144498"}
Jan 12 13:09:25 crc kubenswrapper[4580]: I0112 13:09:25.596212 4580 generic.go:334] "Generic (PLEG): container finished" podID="0b558a84-c6d4-42fe-b9d7-0dd5d63f3064" containerID="d5cda74e3aee77cfcb6c1c3e7eed2a967faa4db35c79433814ebd7524c03e7b5" exitCode=0
Jan 12 13:09:25 crc kubenswrapper[4580]: I0112 13:09:25.596261 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjqmq" event={"ID":"0b558a84-c6d4-42fe-b9d7-0dd5d63f3064","Type":"ContainerDied","Data":"d5cda74e3aee77cfcb6c1c3e7eed2a967faa4db35c79433814ebd7524c03e7b5"}
Jan 12 13:09:25 crc kubenswrapper[4580]: I0112 13:09:25.605233 4580 generic.go:334] "Generic (PLEG): container finished" podID="adf8dbed-2e00-49c7-90f3-62f85ef5e078" containerID="1a4b35aa22daf45c6354dfa9a5bcd4748cec8e68cd44e7b2415a0d1dc9566e94" exitCode=0
Jan 12 13:09:25 crc kubenswrapper[4580]: I0112 13:09:25.605293 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-drwn8" event={"ID":"adf8dbed-2e00-49c7-90f3-62f85ef5e078","Type":"ContainerDied","Data":"1a4b35aa22daf45c6354dfa9a5bcd4748cec8e68cd44e7b2415a0d1dc9566e94"}
Jan 12 13:09:25 crc kubenswrapper[4580]: I0112 13:09:25.606886 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-9f876bbff-cdcmk" event={"ID":"d4e9f5cf-4927-4b13-969a-d11472839c48","Type":"ContainerStarted","Data":"dd340b09bf7154ec3fd869bd9881f95276bb1e11be139e4f3b1eb1c028c31fcf"}
Jan 12 13:09:25 crc kubenswrapper[4580]: I0112 13:09:25.606921 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-9f876bbff-cdcmk" event={"ID":"d4e9f5cf-4927-4b13-969a-d11472839c48","Type":"ContainerStarted","Data":"a4f262f8c945bc37d83c3467f2c7f0d613871abdbf9542f73d985ed1563b29d3"}
Jan 12 13:09:25 crc kubenswrapper[4580]: I0112 13:09:25.607416 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-9f876bbff-cdcmk"
Jan 12 13:09:25 crc kubenswrapper[4580]: I0112 13:09:25.614051 4580 generic.go:334] "Generic (PLEG): container finished" podID="86490260-47c3-47d2-beca-c61e661882ca" containerID="d189db6514b0838d1a80d278fbac64cbc8379dfd984f6b8e31b9598e680fe7e0" exitCode=0
Jan 12 13:09:25 crc kubenswrapper[4580]: I0112 13:09:25.614154 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-27x9v" event={"ID":"86490260-47c3-47d2-beca-c61e661882ca","Type":"ContainerDied","Data":"d189db6514b0838d1a80d278fbac64cbc8379dfd984f6b8e31b9598e680fe7e0"}
Jan 12 13:09:25 crc kubenswrapper[4580]: I0112 13:09:25.620461 4580 generic.go:334] "Generic (PLEG): container finished" podID="10b26e31-b5d9-491a-863a-1cc0a102eae8" containerID="077825a94a013c7a66ae49697fa90d3d5cdfdeb9df58f5f69fa07b8d0d2e9338" exitCode=0
Jan 12 13:09:25 crc kubenswrapper[4580]: I0112 13:09:25.620495 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r98fk" event={"ID":"10b26e31-b5d9-491a-863a-1cc0a102eae8","Type":"ContainerDied","Data":"077825a94a013c7a66ae49697fa90d3d5cdfdeb9df58f5f69fa07b8d0d2e9338"}
Jan 12 13:09:25 crc kubenswrapper[4580]: I0112 13:09:25.640946 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-9f876bbff-cdcmk"
Jan 12 13:09:25 crc kubenswrapper[4580]: I0112 13:09:25.702814 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-9f876bbff-cdcmk" podStartSLOduration=7.702801094 podStartE2EDuration="7.702801094s" podCreationTimestamp="2026-01-12 13:09:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:09:25.701173496 +0000 UTC m=+164.745392186" watchObservedRunningTime="2026-01-12 13:09:25.702801094 +0000 UTC m=+164.747019784"
Jan 12 13:09:26 crc kubenswrapper[4580]: I0112 13:09:26.629430 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjqmq" event={"ID":"0b558a84-c6d4-42fe-b9d7-0dd5d63f3064","Type":"ContainerStarted","Data":"2fd18632453241117d3ef76f2b90edd1983c324e03502df844c78227e4ae2c07"}
Jan 12 13:09:26 crc kubenswrapper[4580]: I0112 13:09:26.631583 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-drwn8" event={"ID":"adf8dbed-2e00-49c7-90f3-62f85ef5e078","Type":"ContainerStarted","Data":"86cc2fb65e3cf5dd3c293ac63deddd26ddeae3c90feb2c3447be654c2c335f59"}
Jan 12 13:09:26 crc kubenswrapper[4580]: I0112 13:09:26.633779 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jbkw4" event={"ID":"0fb0ae3e-224b-4dba-8e3d-783df7049f05","Type":"ContainerStarted","Data":"3b698a7623bceb5141f2ce989da151b74a925fbec398291710f61b21d7fcc8a9"}
Jan 12 13:09:26 crc kubenswrapper[4580]: I0112 13:09:26.635636 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cckj6" event={"ID":"dcee830f-a8da-4e16-95ca-fdaa8dbd86df","Type":"ContainerStarted","Data":"c243f077f580c33f9a74c5c8be37acc12c218ffce57f6087c1632ae309c6f149"}
Jan 12 13:09:26 crc kubenswrapper[4580]: I0112 13:09:26.637562 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lgw9g" event={"ID":"baa87674-7b1c-4327-92b5-fe9ebaac18f5","Type":"ContainerStarted","Data":"2c0a69d204c86c8166ccd2609f6c3b48cb73bfa6c72f9fbd0c5303efae852831"}
Jan 12 13:09:26 crc kubenswrapper[4580]: I0112 13:09:26.639553 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2l4gk" event={"ID":"eb8c503e-0907-40aa-a053-72d38311b08e","Type":"ContainerStarted","Data":"d1568fc031a9b4fb1a172f5e179c472c630fe35f7ea4b9f5de9760f78ffa00ce"}
Jan 12 13:09:26 crc kubenswrapper[4580]: I0112 13:09:26.641286 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r98fk" event={"ID":"10b26e31-b5d9-491a-863a-1cc0a102eae8","Type":"ContainerStarted","Data":"f4c3673086bf4a7cfae5c40af384963e637179db320a8e40b9bc0a1d303a0fc7"}
Jan 12 13:09:26 crc kubenswrapper[4580]: I0112 13:09:26.642837 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-jw27h" event={"ID":"5066d8fa-2cee-4764-a817-b819d3876638","Type":"ContainerStarted","Data":"e84a8230447ea49850bcab18e0dbc85504190fceeb1bb1b564afc1f8a6f5dbe7"}
Jan 12 13:09:26 crc kubenswrapper[4580]: I0112 13:09:26.644664 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-27x9v" event={"ID":"86490260-47c3-47d2-beca-c61e661882ca","Type":"ContainerStarted","Data":"e15ac3e18763526d9f2f0cb9a4613cd4641cef85563dc907f42c500fae17360d"}
Jan 12 13:09:26 crc kubenswrapper[4580]: I0112 13:09:26.649138 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gjqmq" podStartSLOduration=3.5449452949999998 podStartE2EDuration="27.649127328s" podCreationTimestamp="2026-01-12 13:08:59 +0000 UTC" firstStartedPulling="2026-01-12 13:09:01.99197116 +0000 UTC m=+141.036189849" lastFinishedPulling="2026-01-12 13:09:26.096153192 +0000 UTC m=+165.140371882" observedRunningTime="2026-01-12 13:09:26.648301793 +0000 UTC m=+165.692520482" watchObservedRunningTime="2026-01-12 13:09:26.649127328 +0000 UTC m=+165.693346009"
Jan 12 13:09:26 crc kubenswrapper[4580]: I0112 13:09:26.669995 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-r98fk" podStartSLOduration=4.285099895 podStartE2EDuration="28.669979966s" podCreationTimestamp="2026-01-12 13:08:58 +0000 UTC" firstStartedPulling="2026-01-12 13:09:01.879607143 +0000 UTC m=+140.923825823" lastFinishedPulling="2026-01-12 13:09:26.264487205 +0000 UTC m=+165.308705894" observedRunningTime="2026-01-12 13:09:26.668019415 +0000 UTC m=+165.712238105" watchObservedRunningTime="2026-01-12 13:09:26.669979966 +0000 UTC m=+165.714198657"
Jan 12 13:09:26 crc kubenswrapper[4580]: I0112 13:09:26.685590 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cckj6" podStartSLOduration=2.746674823 podStartE2EDuration="25.685571362s" podCreationTimestamp="2026-01-12 13:09:01 +0000 UTC" firstStartedPulling="2026-01-12 13:09:03.271016985 +0000 UTC m=+142.315235674" lastFinishedPulling="2026-01-12 13:09:26.209913523 +0000 UTC m=+165.254132213" observedRunningTime="2026-01-12 13:09:26.685315312 +0000 UTC m=+165.729534002" watchObservedRunningTime="2026-01-12 13:09:26.685571362 +0000 UTC m=+165.729790052"
Jan 12 13:09:26 crc kubenswrapper[4580]: I0112 13:09:26.720786 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-27x9v" podStartSLOduration=2.809139354 podStartE2EDuration="24.720772228s" podCreationTimestamp="2026-01-12 13:09:02 +0000 UTC" firstStartedPulling="2026-01-12 13:09:04.332419815 +0000 UTC m=+143.376638505" lastFinishedPulling="2026-01-12 13:09:26.244052689 +0000 UTC m=+165.288271379" observedRunningTime="2026-01-12 13:09:26.704433642 +0000 UTC m=+165.748652332" watchObservedRunningTime="2026-01-12 13:09:26.720772228 +0000 UTC m=+165.764990918"
Jan 12 13:09:26 crc
kubenswrapper[4580]: I0112 13:09:26.738367 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2l4gk" podStartSLOduration=3.364790366 podStartE2EDuration="27.738352686s" podCreationTimestamp="2026-01-12 13:08:59 +0000 UTC" firstStartedPulling="2026-01-12 13:09:01.827992512 +0000 UTC m=+140.872211192" lastFinishedPulling="2026-01-12 13:09:26.201554821 +0000 UTC m=+165.245773512" observedRunningTime="2026-01-12 13:09:26.722334704 +0000 UTC m=+165.766553393" watchObservedRunningTime="2026-01-12 13:09:26.738352686 +0000 UTC m=+165.782571377" Jan 12 13:09:26 crc kubenswrapper[4580]: I0112 13:09:26.739199 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-drwn8" podStartSLOduration=3.364216504 podStartE2EDuration="27.7391928s" podCreationTimestamp="2026-01-12 13:08:59 +0000 UTC" firstStartedPulling="2026-01-12 13:09:01.754939488 +0000 UTC m=+140.799158178" lastFinishedPulling="2026-01-12 13:09:26.129915784 +0000 UTC m=+165.174134474" observedRunningTime="2026-01-12 13:09:26.737788911 +0000 UTC m=+165.782007601" watchObservedRunningTime="2026-01-12 13:09:26.7391928 +0000 UTC m=+165.783411480" Jan 12 13:09:26 crc kubenswrapper[4580]: I0112 13:09:26.756148 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lgw9g" podStartSLOduration=2.95571909 podStartE2EDuration="24.756127662s" podCreationTimestamp="2026-01-12 13:09:02 +0000 UTC" firstStartedPulling="2026-01-12 13:09:04.286811482 +0000 UTC m=+143.331030172" lastFinishedPulling="2026-01-12 13:09:26.087220065 +0000 UTC m=+165.131438744" observedRunningTime="2026-01-12 13:09:26.75388369 +0000 UTC m=+165.798102380" watchObservedRunningTime="2026-01-12 13:09:26.756127662 +0000 UTC m=+165.800346351" Jan 12 13:09:26 crc kubenswrapper[4580]: I0112 13:09:26.775838 4580 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-marketplace/redhat-marketplace-jbkw4" podStartSLOduration=2.894014403 podStartE2EDuration="25.775828051s" podCreationTimestamp="2026-01-12 13:09:01 +0000 UTC" firstStartedPulling="2026-01-12 13:09:03.215734206 +0000 UTC m=+142.259952896" lastFinishedPulling="2026-01-12 13:09:26.097547853 +0000 UTC m=+165.141766544" observedRunningTime="2026-01-12 13:09:26.773236608 +0000 UTC m=+165.817455299" watchObservedRunningTime="2026-01-12 13:09:26.775828051 +0000 UTC m=+165.820046741" Jan 12 13:09:26 crc kubenswrapper[4580]: I0112 13:09:26.789970 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-jw27h" podStartSLOduration=147.789950065 podStartE2EDuration="2m27.789950065s" podCreationTimestamp="2026-01-12 13:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:09:26.78817433 +0000 UTC m=+165.832393021" watchObservedRunningTime="2026-01-12 13:09:26.789950065 +0000 UTC m=+165.834168755" Jan 12 13:09:27 crc kubenswrapper[4580]: I0112 13:09:27.657596 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-75467d56bf-jlhpc"] Jan 12 13:09:27 crc kubenswrapper[4580]: I0112 13:09:27.658368 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-75467d56bf-jlhpc" Jan 12 13:09:27 crc kubenswrapper[4580]: I0112 13:09:27.660150 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 12 13:09:27 crc kubenswrapper[4580]: I0112 13:09:27.661618 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 12 13:09:27 crc kubenswrapper[4580]: I0112 13:09:27.661972 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 12 13:09:27 crc kubenswrapper[4580]: I0112 13:09:27.662171 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 12 13:09:27 crc kubenswrapper[4580]: I0112 13:09:27.663210 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c881b720-5fb3-404c-9ccd-9509936f3451-config\") pod \"controller-manager-75467d56bf-jlhpc\" (UID: \"c881b720-5fb3-404c-9ccd-9509936f3451\") " pod="openshift-controller-manager/controller-manager-75467d56bf-jlhpc" Jan 12 13:09:27 crc kubenswrapper[4580]: I0112 13:09:27.663257 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c881b720-5fb3-404c-9ccd-9509936f3451-serving-cert\") pod \"controller-manager-75467d56bf-jlhpc\" (UID: \"c881b720-5fb3-404c-9ccd-9509936f3451\") " pod="openshift-controller-manager/controller-manager-75467d56bf-jlhpc" Jan 12 13:09:27 crc kubenswrapper[4580]: I0112 13:09:27.663394 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfrcp\" (UniqueName: 
\"kubernetes.io/projected/c881b720-5fb3-404c-9ccd-9509936f3451-kube-api-access-qfrcp\") pod \"controller-manager-75467d56bf-jlhpc\" (UID: \"c881b720-5fb3-404c-9ccd-9509936f3451\") " pod="openshift-controller-manager/controller-manager-75467d56bf-jlhpc" Jan 12 13:09:27 crc kubenswrapper[4580]: I0112 13:09:27.663479 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c881b720-5fb3-404c-9ccd-9509936f3451-proxy-ca-bundles\") pod \"controller-manager-75467d56bf-jlhpc\" (UID: \"c881b720-5fb3-404c-9ccd-9509936f3451\") " pod="openshift-controller-manager/controller-manager-75467d56bf-jlhpc" Jan 12 13:09:27 crc kubenswrapper[4580]: I0112 13:09:27.663834 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c881b720-5fb3-404c-9ccd-9509936f3451-client-ca\") pod \"controller-manager-75467d56bf-jlhpc\" (UID: \"c881b720-5fb3-404c-9ccd-9509936f3451\") " pod="openshift-controller-manager/controller-manager-75467d56bf-jlhpc" Jan 12 13:09:27 crc kubenswrapper[4580]: I0112 13:09:27.669902 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 12 13:09:27 crc kubenswrapper[4580]: I0112 13:09:27.670332 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 12 13:09:27 crc kubenswrapper[4580]: I0112 13:09:27.673440 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 12 13:09:27 crc kubenswrapper[4580]: I0112 13:09:27.675486 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-75467d56bf-jlhpc"] Jan 12 13:09:27 crc kubenswrapper[4580]: I0112 13:09:27.765396 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/c881b720-5fb3-404c-9ccd-9509936f3451-config\") pod \"controller-manager-75467d56bf-jlhpc\" (UID: \"c881b720-5fb3-404c-9ccd-9509936f3451\") " pod="openshift-controller-manager/controller-manager-75467d56bf-jlhpc" Jan 12 13:09:27 crc kubenswrapper[4580]: I0112 13:09:27.765508 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c881b720-5fb3-404c-9ccd-9509936f3451-serving-cert\") pod \"controller-manager-75467d56bf-jlhpc\" (UID: \"c881b720-5fb3-404c-9ccd-9509936f3451\") " pod="openshift-controller-manager/controller-manager-75467d56bf-jlhpc" Jan 12 13:09:27 crc kubenswrapper[4580]: I0112 13:09:27.765608 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfrcp\" (UniqueName: \"kubernetes.io/projected/c881b720-5fb3-404c-9ccd-9509936f3451-kube-api-access-qfrcp\") pod \"controller-manager-75467d56bf-jlhpc\" (UID: \"c881b720-5fb3-404c-9ccd-9509936f3451\") " pod="openshift-controller-manager/controller-manager-75467d56bf-jlhpc" Jan 12 13:09:27 crc kubenswrapper[4580]: I0112 13:09:27.765702 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c881b720-5fb3-404c-9ccd-9509936f3451-proxy-ca-bundles\") pod \"controller-manager-75467d56bf-jlhpc\" (UID: \"c881b720-5fb3-404c-9ccd-9509936f3451\") " pod="openshift-controller-manager/controller-manager-75467d56bf-jlhpc" Jan 12 13:09:27 crc kubenswrapper[4580]: I0112 13:09:27.765799 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c881b720-5fb3-404c-9ccd-9509936f3451-client-ca\") pod \"controller-manager-75467d56bf-jlhpc\" (UID: \"c881b720-5fb3-404c-9ccd-9509936f3451\") " pod="openshift-controller-manager/controller-manager-75467d56bf-jlhpc" Jan 12 13:09:27 crc kubenswrapper[4580]: I0112 
13:09:27.766707 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c881b720-5fb3-404c-9ccd-9509936f3451-proxy-ca-bundles\") pod \"controller-manager-75467d56bf-jlhpc\" (UID: \"c881b720-5fb3-404c-9ccd-9509936f3451\") " pod="openshift-controller-manager/controller-manager-75467d56bf-jlhpc" Jan 12 13:09:27 crc kubenswrapper[4580]: I0112 13:09:27.766720 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c881b720-5fb3-404c-9ccd-9509936f3451-client-ca\") pod \"controller-manager-75467d56bf-jlhpc\" (UID: \"c881b720-5fb3-404c-9ccd-9509936f3451\") " pod="openshift-controller-manager/controller-manager-75467d56bf-jlhpc" Jan 12 13:09:27 crc kubenswrapper[4580]: I0112 13:09:27.766824 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c881b720-5fb3-404c-9ccd-9509936f3451-config\") pod \"controller-manager-75467d56bf-jlhpc\" (UID: \"c881b720-5fb3-404c-9ccd-9509936f3451\") " pod="openshift-controller-manager/controller-manager-75467d56bf-jlhpc" Jan 12 13:09:27 crc kubenswrapper[4580]: I0112 13:09:27.781858 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfrcp\" (UniqueName: \"kubernetes.io/projected/c881b720-5fb3-404c-9ccd-9509936f3451-kube-api-access-qfrcp\") pod \"controller-manager-75467d56bf-jlhpc\" (UID: \"c881b720-5fb3-404c-9ccd-9509936f3451\") " pod="openshift-controller-manager/controller-manager-75467d56bf-jlhpc" Jan 12 13:09:27 crc kubenswrapper[4580]: I0112 13:09:27.782577 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c881b720-5fb3-404c-9ccd-9509936f3451-serving-cert\") pod \"controller-manager-75467d56bf-jlhpc\" (UID: \"c881b720-5fb3-404c-9ccd-9509936f3451\") " 
pod="openshift-controller-manager/controller-manager-75467d56bf-jlhpc" Jan 12 13:09:27 crc kubenswrapper[4580]: I0112 13:09:27.971687 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-75467d56bf-jlhpc" Jan 12 13:09:28 crc kubenswrapper[4580]: I0112 13:09:28.399467 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-75467d56bf-jlhpc"] Jan 12 13:09:28 crc kubenswrapper[4580]: I0112 13:09:28.654554 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-75467d56bf-jlhpc" event={"ID":"c881b720-5fb3-404c-9ccd-9509936f3451","Type":"ContainerStarted","Data":"78b9e28f0a17056269cb65b340faa92409751c8ec24ad5defac5cb0fc14b3783"} Jan 12 13:09:28 crc kubenswrapper[4580]: I0112 13:09:28.654601 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-75467d56bf-jlhpc" event={"ID":"c881b720-5fb3-404c-9ccd-9509936f3451","Type":"ContainerStarted","Data":"717fc628d8a9d8afacbebf778a3c373a3fa884af2ad32db158e297b34d8c33bd"} Jan 12 13:09:28 crc kubenswrapper[4580]: I0112 13:09:28.654780 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-75467d56bf-jlhpc" Jan 12 13:09:28 crc kubenswrapper[4580]: I0112 13:09:28.660351 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-75467d56bf-jlhpc" Jan 12 13:09:28 crc kubenswrapper[4580]: I0112 13:09:28.671323 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-75467d56bf-jlhpc" podStartSLOduration=10.671287037 podStartE2EDuration="10.671287037s" podCreationTimestamp="2026-01-12 13:09:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-12 13:09:28.668050576 +0000 UTC m=+167.712269266" watchObservedRunningTime="2026-01-12 13:09:28.671287037 +0000 UTC m=+167.715505726" Jan 12 13:09:29 crc kubenswrapper[4580]: I0112 13:09:29.486318 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2l4gk" Jan 12 13:09:29 crc kubenswrapper[4580]: I0112 13:09:29.486730 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2l4gk" Jan 12 13:09:29 crc kubenswrapper[4580]: I0112 13:09:29.563567 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-r98fk" Jan 12 13:09:29 crc kubenswrapper[4580]: I0112 13:09:29.563901 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-r98fk" Jan 12 13:09:29 crc kubenswrapper[4580]: I0112 13:09:29.601676 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2l4gk" Jan 12 13:09:29 crc kubenswrapper[4580]: I0112 13:09:29.608832 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-r98fk" Jan 12 13:09:29 crc kubenswrapper[4580]: I0112 13:09:29.859844 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-drwn8" Jan 12 13:09:29 crc kubenswrapper[4580]: I0112 13:09:29.859912 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-drwn8" Jan 12 13:09:29 crc kubenswrapper[4580]: I0112 13:09:29.895738 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-drwn8" Jan 12 13:09:30 crc kubenswrapper[4580]: I0112 13:09:30.043065 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/community-operators-gjqmq" Jan 12 13:09:30 crc kubenswrapper[4580]: I0112 13:09:30.043128 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gjqmq" Jan 12 13:09:30 crc kubenswrapper[4580]: I0112 13:09:30.084614 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gjqmq" Jan 12 13:09:31 crc kubenswrapper[4580]: I0112 13:09:31.675842 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jbkw4" Jan 12 13:09:31 crc kubenswrapper[4580]: I0112 13:09:31.676133 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jbkw4" Jan 12 13:09:31 crc kubenswrapper[4580]: I0112 13:09:31.711843 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jbkw4" Jan 12 13:09:31 crc kubenswrapper[4580]: I0112 13:09:31.950842 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cckj6" Jan 12 13:09:31 crc kubenswrapper[4580]: I0112 13:09:31.951488 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cckj6" Jan 12 13:09:31 crc kubenswrapper[4580]: I0112 13:09:31.979345 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cckj6" Jan 12 13:09:32 crc kubenswrapper[4580]: I0112 13:09:32.498006 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-27x9v" Jan 12 13:09:32 crc kubenswrapper[4580]: I0112 13:09:32.498046 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-27x9v" Jan 12 13:09:32 crc kubenswrapper[4580]: 
I0112 13:09:32.527783 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-27x9v" Jan 12 13:09:32 crc kubenswrapper[4580]: I0112 13:09:32.716397 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-27x9v" Jan 12 13:09:32 crc kubenswrapper[4580]: I0112 13:09:32.717122 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jbkw4" Jan 12 13:09:32 crc kubenswrapper[4580]: I0112 13:09:32.718038 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cckj6" Jan 12 13:09:32 crc kubenswrapper[4580]: I0112 13:09:32.975414 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lgw9g" Jan 12 13:09:32 crc kubenswrapper[4580]: I0112 13:09:32.975782 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lgw9g" Jan 12 13:09:33 crc kubenswrapper[4580]: I0112 13:09:33.007450 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lgw9g" Jan 12 13:09:33 crc kubenswrapper[4580]: I0112 13:09:33.718003 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lgw9g" Jan 12 13:09:35 crc kubenswrapper[4580]: I0112 13:09:35.811946 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cckj6"] Jan 12 13:09:35 crc kubenswrapper[4580]: I0112 13:09:35.812501 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cckj6" podUID="dcee830f-a8da-4e16-95ca-fdaa8dbd86df" containerName="registry-server" containerID="cri-o://c243f077f580c33f9a74c5c8be37acc12c218ffce57f6087c1632ae309c6f149" 
gracePeriod=2 Jan 12 13:09:36 crc kubenswrapper[4580]: I0112 13:09:36.014859 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lgw9g"] Jan 12 13:09:36 crc kubenswrapper[4580]: I0112 13:09:36.015081 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lgw9g" podUID="baa87674-7b1c-4327-92b5-fe9ebaac18f5" containerName="registry-server" containerID="cri-o://2c0a69d204c86c8166ccd2609f6c3b48cb73bfa6c72f9fbd0c5303efae852831" gracePeriod=2 Jan 12 13:09:36 crc kubenswrapper[4580]: I0112 13:09:36.207409 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cckj6" Jan 12 13:09:36 crc kubenswrapper[4580]: I0112 13:09:36.377754 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcee830f-a8da-4e16-95ca-fdaa8dbd86df-utilities\") pod \"dcee830f-a8da-4e16-95ca-fdaa8dbd86df\" (UID: \"dcee830f-a8da-4e16-95ca-fdaa8dbd86df\") " Jan 12 13:09:36 crc kubenswrapper[4580]: I0112 13:09:36.377810 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcee830f-a8da-4e16-95ca-fdaa8dbd86df-catalog-content\") pod \"dcee830f-a8da-4e16-95ca-fdaa8dbd86df\" (UID: \"dcee830f-a8da-4e16-95ca-fdaa8dbd86df\") " Jan 12 13:09:36 crc kubenswrapper[4580]: I0112 13:09:36.378116 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6w56\" (UniqueName: \"kubernetes.io/projected/dcee830f-a8da-4e16-95ca-fdaa8dbd86df-kube-api-access-v6w56\") pod \"dcee830f-a8da-4e16-95ca-fdaa8dbd86df\" (UID: \"dcee830f-a8da-4e16-95ca-fdaa8dbd86df\") " Jan 12 13:09:36 crc kubenswrapper[4580]: I0112 13:09:36.380480 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/dcee830f-a8da-4e16-95ca-fdaa8dbd86df-utilities" (OuterVolumeSpecName: "utilities") pod "dcee830f-a8da-4e16-95ca-fdaa8dbd86df" (UID: "dcee830f-a8da-4e16-95ca-fdaa8dbd86df"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 12 13:09:36 crc kubenswrapper[4580]: I0112 13:09:36.385843 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcee830f-a8da-4e16-95ca-fdaa8dbd86df-kube-api-access-v6w56" (OuterVolumeSpecName: "kube-api-access-v6w56") pod "dcee830f-a8da-4e16-95ca-fdaa8dbd86df" (UID: "dcee830f-a8da-4e16-95ca-fdaa8dbd86df"). InnerVolumeSpecName "kube-api-access-v6w56". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:09:36 crc kubenswrapper[4580]: I0112 13:09:36.396082 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lgw9g" Jan 12 13:09:36 crc kubenswrapper[4580]: I0112 13:09:36.400454 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dcee830f-a8da-4e16-95ca-fdaa8dbd86df-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dcee830f-a8da-4e16-95ca-fdaa8dbd86df" (UID: "dcee830f-a8da-4e16-95ca-fdaa8dbd86df"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 12 13:09:36 crc kubenswrapper[4580]: I0112 13:09:36.479839 4580 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcee830f-a8da-4e16-95ca-fdaa8dbd86df-utilities\") on node \"crc\" DevicePath \"\"" Jan 12 13:09:36 crc kubenswrapper[4580]: I0112 13:09:36.479879 4580 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcee830f-a8da-4e16-95ca-fdaa8dbd86df-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 12 13:09:36 crc kubenswrapper[4580]: I0112 13:09:36.479902 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6w56\" (UniqueName: \"kubernetes.io/projected/dcee830f-a8da-4e16-95ca-fdaa8dbd86df-kube-api-access-v6w56\") on node \"crc\" DevicePath \"\"" Jan 12 13:09:36 crc kubenswrapper[4580]: I0112 13:09:36.581503 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/baa87674-7b1c-4327-92b5-fe9ebaac18f5-utilities\") pod \"baa87674-7b1c-4327-92b5-fe9ebaac18f5\" (UID: \"baa87674-7b1c-4327-92b5-fe9ebaac18f5\") " Jan 12 13:09:36 crc kubenswrapper[4580]: I0112 13:09:36.581669 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/baa87674-7b1c-4327-92b5-fe9ebaac18f5-catalog-content\") pod \"baa87674-7b1c-4327-92b5-fe9ebaac18f5\" (UID: \"baa87674-7b1c-4327-92b5-fe9ebaac18f5\") " Jan 12 13:09:36 crc kubenswrapper[4580]: I0112 13:09:36.582276 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/baa87674-7b1c-4327-92b5-fe9ebaac18f5-utilities" (OuterVolumeSpecName: "utilities") pod "baa87674-7b1c-4327-92b5-fe9ebaac18f5" (UID: "baa87674-7b1c-4327-92b5-fe9ebaac18f5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 12 13:09:36 crc kubenswrapper[4580]: I0112 13:09:36.582522 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frjzs\" (UniqueName: \"kubernetes.io/projected/baa87674-7b1c-4327-92b5-fe9ebaac18f5-kube-api-access-frjzs\") pod \"baa87674-7b1c-4327-92b5-fe9ebaac18f5\" (UID: \"baa87674-7b1c-4327-92b5-fe9ebaac18f5\") " Jan 12 13:09:36 crc kubenswrapper[4580]: I0112 13:09:36.583200 4580 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/baa87674-7b1c-4327-92b5-fe9ebaac18f5-utilities\") on node \"crc\" DevicePath \"\"" Jan 12 13:09:36 crc kubenswrapper[4580]: I0112 13:09:36.586724 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/baa87674-7b1c-4327-92b5-fe9ebaac18f5-kube-api-access-frjzs" (OuterVolumeSpecName: "kube-api-access-frjzs") pod "baa87674-7b1c-4327-92b5-fe9ebaac18f5" (UID: "baa87674-7b1c-4327-92b5-fe9ebaac18f5"). InnerVolumeSpecName "kube-api-access-frjzs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:09:36 crc kubenswrapper[4580]: I0112 13:09:36.672955 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/baa87674-7b1c-4327-92b5-fe9ebaac18f5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "baa87674-7b1c-4327-92b5-fe9ebaac18f5" (UID: "baa87674-7b1c-4327-92b5-fe9ebaac18f5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 12 13:09:36 crc kubenswrapper[4580]: I0112 13:09:36.684157 4580 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/baa87674-7b1c-4327-92b5-fe9ebaac18f5-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 12 13:09:36 crc kubenswrapper[4580]: I0112 13:09:36.684192 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frjzs\" (UniqueName: \"kubernetes.io/projected/baa87674-7b1c-4327-92b5-fe9ebaac18f5-kube-api-access-frjzs\") on node \"crc\" DevicePath \"\""
Jan 12 13:09:36 crc kubenswrapper[4580]: I0112 13:09:36.714049 4580 generic.go:334] "Generic (PLEG): container finished" podID="baa87674-7b1c-4327-92b5-fe9ebaac18f5" containerID="2c0a69d204c86c8166ccd2609f6c3b48cb73bfa6c72f9fbd0c5303efae852831" exitCode=0
Jan 12 13:09:36 crc kubenswrapper[4580]: I0112 13:09:36.714318 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lgw9g"
Jan 12 13:09:36 crc kubenswrapper[4580]: I0112 13:09:36.714384 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lgw9g" event={"ID":"baa87674-7b1c-4327-92b5-fe9ebaac18f5","Type":"ContainerDied","Data":"2c0a69d204c86c8166ccd2609f6c3b48cb73bfa6c72f9fbd0c5303efae852831"}
Jan 12 13:09:36 crc kubenswrapper[4580]: I0112 13:09:36.714609 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lgw9g" event={"ID":"baa87674-7b1c-4327-92b5-fe9ebaac18f5","Type":"ContainerDied","Data":"d1306539488c6db07392d542eede6377abc79f8488af0356eb8d74bb73482d73"}
Jan 12 13:09:36 crc kubenswrapper[4580]: I0112 13:09:36.714658 4580 scope.go:117] "RemoveContainer" containerID="2c0a69d204c86c8166ccd2609f6c3b48cb73bfa6c72f9fbd0c5303efae852831"
Jan 12 13:09:36 crc kubenswrapper[4580]: I0112 13:09:36.719378 4580 generic.go:334] "Generic (PLEG): container finished" podID="dcee830f-a8da-4e16-95ca-fdaa8dbd86df" containerID="c243f077f580c33f9a74c5c8be37acc12c218ffce57f6087c1632ae309c6f149" exitCode=0
Jan 12 13:09:36 crc kubenswrapper[4580]: I0112 13:09:36.719459 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cckj6" event={"ID":"dcee830f-a8da-4e16-95ca-fdaa8dbd86df","Type":"ContainerDied","Data":"c243f077f580c33f9a74c5c8be37acc12c218ffce57f6087c1632ae309c6f149"}
Jan 12 13:09:36 crc kubenswrapper[4580]: I0112 13:09:36.719498 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cckj6" event={"ID":"dcee830f-a8da-4e16-95ca-fdaa8dbd86df","Type":"ContainerDied","Data":"ab655ae8487e4dbeea9e4261fced9b7e1290d93985d0bc3f890324209b6e9f8b"}
Jan 12 13:09:36 crc kubenswrapper[4580]: I0112 13:09:36.719519 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cckj6"
Jan 12 13:09:36 crc kubenswrapper[4580]: I0112 13:09:36.738066 4580 scope.go:117] "RemoveContainer" containerID="4abdb250dbe9baf2d1f02c64ca883cfeeea98c4fa0089c142348d9fe5c38ea2d"
Jan 12 13:09:36 crc kubenswrapper[4580]: I0112 13:09:36.752152 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lgw9g"]
Jan 12 13:09:36 crc kubenswrapper[4580]: I0112 13:09:36.759750 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lgw9g"]
Jan 12 13:09:36 crc kubenswrapper[4580]: I0112 13:09:36.763714 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cckj6"]
Jan 12 13:09:36 crc kubenswrapper[4580]: I0112 13:09:36.766664 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cckj6"]
Jan 12 13:09:36 crc kubenswrapper[4580]: I0112 13:09:36.775594 4580 scope.go:117] "RemoveContainer" containerID="1990c7eb0c64da703cbcd59e189ae7a9d7a692312369fc55015715b694e96b3d"
Jan 12 13:09:36 crc kubenswrapper[4580]: I0112 13:09:36.797677 4580 scope.go:117] "RemoveContainer" containerID="2c0a69d204c86c8166ccd2609f6c3b48cb73bfa6c72f9fbd0c5303efae852831"
Jan 12 13:09:36 crc kubenswrapper[4580]: E0112 13:09:36.798142 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c0a69d204c86c8166ccd2609f6c3b48cb73bfa6c72f9fbd0c5303efae852831\": container with ID starting with 2c0a69d204c86c8166ccd2609f6c3b48cb73bfa6c72f9fbd0c5303efae852831 not found: ID does not exist" containerID="2c0a69d204c86c8166ccd2609f6c3b48cb73bfa6c72f9fbd0c5303efae852831"
Jan 12 13:09:36 crc kubenswrapper[4580]: I0112 13:09:36.798187 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c0a69d204c86c8166ccd2609f6c3b48cb73bfa6c72f9fbd0c5303efae852831"} err="failed to get container status \"2c0a69d204c86c8166ccd2609f6c3b48cb73bfa6c72f9fbd0c5303efae852831\": rpc error: code = NotFound desc = could not find container \"2c0a69d204c86c8166ccd2609f6c3b48cb73bfa6c72f9fbd0c5303efae852831\": container with ID starting with 2c0a69d204c86c8166ccd2609f6c3b48cb73bfa6c72f9fbd0c5303efae852831 not found: ID does not exist"
Jan 12 13:09:36 crc kubenswrapper[4580]: I0112 13:09:36.798235 4580 scope.go:117] "RemoveContainer" containerID="4abdb250dbe9baf2d1f02c64ca883cfeeea98c4fa0089c142348d9fe5c38ea2d"
Jan 12 13:09:36 crc kubenswrapper[4580]: E0112 13:09:36.798537 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4abdb250dbe9baf2d1f02c64ca883cfeeea98c4fa0089c142348d9fe5c38ea2d\": container with ID starting with 4abdb250dbe9baf2d1f02c64ca883cfeeea98c4fa0089c142348d9fe5c38ea2d not found: ID does not exist" containerID="4abdb250dbe9baf2d1f02c64ca883cfeeea98c4fa0089c142348d9fe5c38ea2d"
Jan 12 13:09:36 crc kubenswrapper[4580]: I0112 13:09:36.798572 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4abdb250dbe9baf2d1f02c64ca883cfeeea98c4fa0089c142348d9fe5c38ea2d"} err="failed to get container status \"4abdb250dbe9baf2d1f02c64ca883cfeeea98c4fa0089c142348d9fe5c38ea2d\": rpc error: code = NotFound desc = could not find container \"4abdb250dbe9baf2d1f02c64ca883cfeeea98c4fa0089c142348d9fe5c38ea2d\": container with ID starting with 4abdb250dbe9baf2d1f02c64ca883cfeeea98c4fa0089c142348d9fe5c38ea2d not found: ID does not exist"
Jan 12 13:09:36 crc kubenswrapper[4580]: I0112 13:09:36.798602 4580 scope.go:117] "RemoveContainer" containerID="1990c7eb0c64da703cbcd59e189ae7a9d7a692312369fc55015715b694e96b3d"
Jan 12 13:09:36 crc kubenswrapper[4580]: E0112 13:09:36.798954 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1990c7eb0c64da703cbcd59e189ae7a9d7a692312369fc55015715b694e96b3d\": container with ID starting with 1990c7eb0c64da703cbcd59e189ae7a9d7a692312369fc55015715b694e96b3d not found: ID does not exist" containerID="1990c7eb0c64da703cbcd59e189ae7a9d7a692312369fc55015715b694e96b3d"
Jan 12 13:09:36 crc kubenswrapper[4580]: I0112 13:09:36.798981 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1990c7eb0c64da703cbcd59e189ae7a9d7a692312369fc55015715b694e96b3d"} err="failed to get container status \"1990c7eb0c64da703cbcd59e189ae7a9d7a692312369fc55015715b694e96b3d\": rpc error: code = NotFound desc = could not find container \"1990c7eb0c64da703cbcd59e189ae7a9d7a692312369fc55015715b694e96b3d\": container with ID starting with 1990c7eb0c64da703cbcd59e189ae7a9d7a692312369fc55015715b694e96b3d not found: ID does not exist"
Jan 12 13:09:36 crc kubenswrapper[4580]: I0112 13:09:36.798998 4580 scope.go:117] "RemoveContainer" containerID="c243f077f580c33f9a74c5c8be37acc12c218ffce57f6087c1632ae309c6f149"
Jan 12 13:09:36 crc kubenswrapper[4580]: I0112 13:09:36.815514 4580 scope.go:117] "RemoveContainer" containerID="e24b5a4d83c39e5b87b31f59f2f73fad460ed96d967887c76282cc81b0c3b2ad"
Jan 12 13:09:36 crc kubenswrapper[4580]: I0112 13:09:36.829688 4580 scope.go:117] "RemoveContainer" containerID="1bd2572a65747e1ba5d5db182ab0c3aaed985e47a74fef711f0411ff817c4ae5"
Jan 12 13:09:36 crc kubenswrapper[4580]: I0112 13:09:36.847383 4580 scope.go:117] "RemoveContainer" containerID="c243f077f580c33f9a74c5c8be37acc12c218ffce57f6087c1632ae309c6f149"
Jan 12 13:09:36 crc kubenswrapper[4580]: E0112 13:09:36.847838 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c243f077f580c33f9a74c5c8be37acc12c218ffce57f6087c1632ae309c6f149\": container with ID starting with c243f077f580c33f9a74c5c8be37acc12c218ffce57f6087c1632ae309c6f149 not found: ID does not exist" containerID="c243f077f580c33f9a74c5c8be37acc12c218ffce57f6087c1632ae309c6f149"
Jan 12 13:09:36 crc kubenswrapper[4580]: I0112 13:09:36.847881 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c243f077f580c33f9a74c5c8be37acc12c218ffce57f6087c1632ae309c6f149"} err="failed to get container status \"c243f077f580c33f9a74c5c8be37acc12c218ffce57f6087c1632ae309c6f149\": rpc error: code = NotFound desc = could not find container \"c243f077f580c33f9a74c5c8be37acc12c218ffce57f6087c1632ae309c6f149\": container with ID starting with c243f077f580c33f9a74c5c8be37acc12c218ffce57f6087c1632ae309c6f149 not found: ID does not exist"
Jan 12 13:09:36 crc kubenswrapper[4580]: I0112 13:09:36.847911 4580 scope.go:117] "RemoveContainer" containerID="e24b5a4d83c39e5b87b31f59f2f73fad460ed96d967887c76282cc81b0c3b2ad"
Jan 12 13:09:36 crc kubenswrapper[4580]: E0112 13:09:36.848245 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e24b5a4d83c39e5b87b31f59f2f73fad460ed96d967887c76282cc81b0c3b2ad\": container with ID starting with e24b5a4d83c39e5b87b31f59f2f73fad460ed96d967887c76282cc81b0c3b2ad not found: ID does not exist" containerID="e24b5a4d83c39e5b87b31f59f2f73fad460ed96d967887c76282cc81b0c3b2ad"
Jan 12 13:09:36 crc kubenswrapper[4580]: I0112 13:09:36.848283 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e24b5a4d83c39e5b87b31f59f2f73fad460ed96d967887c76282cc81b0c3b2ad"} err="failed to get container status \"e24b5a4d83c39e5b87b31f59f2f73fad460ed96d967887c76282cc81b0c3b2ad\": rpc error: code = NotFound desc = could not find container \"e24b5a4d83c39e5b87b31f59f2f73fad460ed96d967887c76282cc81b0c3b2ad\": container with ID starting with e24b5a4d83c39e5b87b31f59f2f73fad460ed96d967887c76282cc81b0c3b2ad not found: ID does not exist"
Jan 12 13:09:36 crc kubenswrapper[4580]: I0112 13:09:36.848305 4580 scope.go:117] "RemoveContainer" containerID="1bd2572a65747e1ba5d5db182ab0c3aaed985e47a74fef711f0411ff817c4ae5"
Jan 12 13:09:36 crc kubenswrapper[4580]: E0112 13:09:36.848568 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bd2572a65747e1ba5d5db182ab0c3aaed985e47a74fef711f0411ff817c4ae5\": container with ID starting with 1bd2572a65747e1ba5d5db182ab0c3aaed985e47a74fef711f0411ff817c4ae5 not found: ID does not exist" containerID="1bd2572a65747e1ba5d5db182ab0c3aaed985e47a74fef711f0411ff817c4ae5"
Jan 12 13:09:36 crc kubenswrapper[4580]: I0112 13:09:36.848590 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bd2572a65747e1ba5d5db182ab0c3aaed985e47a74fef711f0411ff817c4ae5"} err="failed to get container status \"1bd2572a65747e1ba5d5db182ab0c3aaed985e47a74fef711f0411ff817c4ae5\": rpc error: code = NotFound desc = could not find container \"1bd2572a65747e1ba5d5db182ab0c3aaed985e47a74fef711f0411ff817c4ae5\": container with ID starting with 1bd2572a65747e1ba5d5db182ab0c3aaed985e47a74fef711f0411ff817c4ae5 not found: ID does not exist"
Jan 12 13:09:37 crc kubenswrapper[4580]: I0112 13:09:37.288545 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="baa87674-7b1c-4327-92b5-fe9ebaac18f5" path="/var/lib/kubelet/pods/baa87674-7b1c-4327-92b5-fe9ebaac18f5/volumes"
Jan 12 13:09:37 crc kubenswrapper[4580]: I0112 13:09:37.289192 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcee830f-a8da-4e16-95ca-fdaa8dbd86df" path="/var/lib/kubelet/pods/dcee830f-a8da-4e16-95ca-fdaa8dbd86df/volumes"
Jan 12 13:09:37 crc kubenswrapper[4580]: I0112 13:09:37.403488 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 12 13:09:38 crc kubenswrapper[4580]: I0112 13:09:38.019029 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s8vg5"
Jan 12 13:09:38 crc kubenswrapper[4580]: I0112 13:09:38.134766 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-75467d56bf-jlhpc"]
Jan 12 13:09:38 crc kubenswrapper[4580]: I0112 13:09:38.134953 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-75467d56bf-jlhpc" podUID="c881b720-5fb3-404c-9ccd-9509936f3451" containerName="controller-manager" containerID="cri-o://78b9e28f0a17056269cb65b340faa92409751c8ec24ad5defac5cb0fc14b3783" gracePeriod=30
Jan 12 13:09:38 crc kubenswrapper[4580]: I0112 13:09:38.237693 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9f876bbff-cdcmk"]
Jan 12 13:09:38 crc kubenswrapper[4580]: I0112 13:09:38.238144 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-9f876bbff-cdcmk" podUID="d4e9f5cf-4927-4b13-969a-d11472839c48" containerName="route-controller-manager" containerID="cri-o://dd340b09bf7154ec3fd869bd9881f95276bb1e11be139e4f3b1eb1c028c31fcf" gracePeriod=30
Jan 12 13:09:38 crc kubenswrapper[4580]: I0112 13:09:38.654770 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-9f876bbff-cdcmk"
Jan 12 13:09:38 crc kubenswrapper[4580]: I0112 13:09:38.663915 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-75467d56bf-jlhpc"
Jan 12 13:09:38 crc kubenswrapper[4580]: I0112 13:09:38.736092 4580 generic.go:334] "Generic (PLEG): container finished" podID="c881b720-5fb3-404c-9ccd-9509936f3451" containerID="78b9e28f0a17056269cb65b340faa92409751c8ec24ad5defac5cb0fc14b3783" exitCode=0
Jan 12 13:09:38 crc kubenswrapper[4580]: I0112 13:09:38.736139 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-75467d56bf-jlhpc" event={"ID":"c881b720-5fb3-404c-9ccd-9509936f3451","Type":"ContainerDied","Data":"78b9e28f0a17056269cb65b340faa92409751c8ec24ad5defac5cb0fc14b3783"}
Jan 12 13:09:38 crc kubenswrapper[4580]: I0112 13:09:38.736172 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-75467d56bf-jlhpc"
Jan 12 13:09:38 crc kubenswrapper[4580]: I0112 13:09:38.736199 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-75467d56bf-jlhpc" event={"ID":"c881b720-5fb3-404c-9ccd-9509936f3451","Type":"ContainerDied","Data":"717fc628d8a9d8afacbebf778a3c373a3fa884af2ad32db158e297b34d8c33bd"}
Jan 12 13:09:38 crc kubenswrapper[4580]: I0112 13:09:38.736222 4580 scope.go:117] "RemoveContainer" containerID="78b9e28f0a17056269cb65b340faa92409751c8ec24ad5defac5cb0fc14b3783"
Jan 12 13:09:38 crc kubenswrapper[4580]: I0112 13:09:38.737874 4580 generic.go:334] "Generic (PLEG): container finished" podID="d4e9f5cf-4927-4b13-969a-d11472839c48" containerID="dd340b09bf7154ec3fd869bd9881f95276bb1e11be139e4f3b1eb1c028c31fcf" exitCode=0
Jan 12 13:09:38 crc kubenswrapper[4580]: I0112 13:09:38.737926 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-9f876bbff-cdcmk" event={"ID":"d4e9f5cf-4927-4b13-969a-d11472839c48","Type":"ContainerDied","Data":"dd340b09bf7154ec3fd869bd9881f95276bb1e11be139e4f3b1eb1c028c31fcf"}
Jan 12 13:09:38 crc kubenswrapper[4580]: I0112 13:09:38.737957 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-9f876bbff-cdcmk" event={"ID":"d4e9f5cf-4927-4b13-969a-d11472839c48","Type":"ContainerDied","Data":"a4f262f8c945bc37d83c3467f2c7f0d613871abdbf9542f73d985ed1563b29d3"}
Jan 12 13:09:38 crc kubenswrapper[4580]: I0112 13:09:38.738006 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-9f876bbff-cdcmk"
Jan 12 13:09:38 crc kubenswrapper[4580]: I0112 13:09:38.749188 4580 scope.go:117] "RemoveContainer" containerID="78b9e28f0a17056269cb65b340faa92409751c8ec24ad5defac5cb0fc14b3783"
Jan 12 13:09:38 crc kubenswrapper[4580]: E0112 13:09:38.749601 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78b9e28f0a17056269cb65b340faa92409751c8ec24ad5defac5cb0fc14b3783\": container with ID starting with 78b9e28f0a17056269cb65b340faa92409751c8ec24ad5defac5cb0fc14b3783 not found: ID does not exist" containerID="78b9e28f0a17056269cb65b340faa92409751c8ec24ad5defac5cb0fc14b3783"
Jan 12 13:09:38 crc kubenswrapper[4580]: I0112 13:09:38.749641 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78b9e28f0a17056269cb65b340faa92409751c8ec24ad5defac5cb0fc14b3783"} err="failed to get container status \"78b9e28f0a17056269cb65b340faa92409751c8ec24ad5defac5cb0fc14b3783\": rpc error: code = NotFound desc = could not find container \"78b9e28f0a17056269cb65b340faa92409751c8ec24ad5defac5cb0fc14b3783\": container with ID starting with 78b9e28f0a17056269cb65b340faa92409751c8ec24ad5defac5cb0fc14b3783 not found: ID does not exist"
Jan 12 13:09:38 crc kubenswrapper[4580]: I0112 13:09:38.749669 4580 scope.go:117] "RemoveContainer" containerID="dd340b09bf7154ec3fd869bd9881f95276bb1e11be139e4f3b1eb1c028c31fcf"
Jan 12 13:09:38 crc kubenswrapper[4580]: I0112 13:09:38.764351 4580 scope.go:117] "RemoveContainer" containerID="dd340b09bf7154ec3fd869bd9881f95276bb1e11be139e4f3b1eb1c028c31fcf"
Jan 12 13:09:38 crc kubenswrapper[4580]: E0112 13:09:38.765123 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd340b09bf7154ec3fd869bd9881f95276bb1e11be139e4f3b1eb1c028c31fcf\": container with ID starting with dd340b09bf7154ec3fd869bd9881f95276bb1e11be139e4f3b1eb1c028c31fcf not found: ID does not exist" containerID="dd340b09bf7154ec3fd869bd9881f95276bb1e11be139e4f3b1eb1c028c31fcf"
Jan 12 13:09:38 crc kubenswrapper[4580]: I0112 13:09:38.765156 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd340b09bf7154ec3fd869bd9881f95276bb1e11be139e4f3b1eb1c028c31fcf"} err="failed to get container status \"dd340b09bf7154ec3fd869bd9881f95276bb1e11be139e4f3b1eb1c028c31fcf\": rpc error: code = NotFound desc = could not find container \"dd340b09bf7154ec3fd869bd9881f95276bb1e11be139e4f3b1eb1c028c31fcf\": container with ID starting with dd340b09bf7154ec3fd869bd9881f95276bb1e11be139e4f3b1eb1c028c31fcf not found: ID does not exist"
Jan 12 13:09:38 crc kubenswrapper[4580]: I0112 13:09:38.811021 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfrcp\" (UniqueName: \"kubernetes.io/projected/c881b720-5fb3-404c-9ccd-9509936f3451-kube-api-access-qfrcp\") pod \"c881b720-5fb3-404c-9ccd-9509936f3451\" (UID: \"c881b720-5fb3-404c-9ccd-9509936f3451\") "
Jan 12 13:09:38 crc kubenswrapper[4580]: I0112 13:09:38.811092 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c881b720-5fb3-404c-9ccd-9509936f3451-serving-cert\") pod \"c881b720-5fb3-404c-9ccd-9509936f3451\" (UID: \"c881b720-5fb3-404c-9ccd-9509936f3451\") "
Jan 12 13:09:38 crc kubenswrapper[4580]: I0112 13:09:38.811163 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4e9f5cf-4927-4b13-969a-d11472839c48-config\") pod \"d4e9f5cf-4927-4b13-969a-d11472839c48\" (UID: \"d4e9f5cf-4927-4b13-969a-d11472839c48\") "
Jan 12 13:09:38 crc kubenswrapper[4580]: I0112 13:09:38.811191 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kw8hm\" (UniqueName: \"kubernetes.io/projected/d4e9f5cf-4927-4b13-969a-d11472839c48-kube-api-access-kw8hm\") pod \"d4e9f5cf-4927-4b13-969a-d11472839c48\" (UID: \"d4e9f5cf-4927-4b13-969a-d11472839c48\") "
Jan 12 13:09:38 crc kubenswrapper[4580]: I0112 13:09:38.811284 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c881b720-5fb3-404c-9ccd-9509936f3451-client-ca\") pod \"c881b720-5fb3-404c-9ccd-9509936f3451\" (UID: \"c881b720-5fb3-404c-9ccd-9509936f3451\") "
Jan 12 13:09:38 crc kubenswrapper[4580]: I0112 13:09:38.811346 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c881b720-5fb3-404c-9ccd-9509936f3451-config\") pod \"c881b720-5fb3-404c-9ccd-9509936f3451\" (UID: \"c881b720-5fb3-404c-9ccd-9509936f3451\") "
Jan 12 13:09:38 crc kubenswrapper[4580]: I0112 13:09:38.811422 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c881b720-5fb3-404c-9ccd-9509936f3451-proxy-ca-bundles\") pod \"c881b720-5fb3-404c-9ccd-9509936f3451\" (UID: \"c881b720-5fb3-404c-9ccd-9509936f3451\") "
Jan 12 13:09:38 crc kubenswrapper[4580]: I0112 13:09:38.811464 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d4e9f5cf-4927-4b13-969a-d11472839c48-client-ca\") pod \"d4e9f5cf-4927-4b13-969a-d11472839c48\" (UID: \"d4e9f5cf-4927-4b13-969a-d11472839c48\") "
Jan 12 13:09:38 crc kubenswrapper[4580]: I0112 13:09:38.811498 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4e9f5cf-4927-4b13-969a-d11472839c48-serving-cert\") pod \"d4e9f5cf-4927-4b13-969a-d11472839c48\" (UID: \"d4e9f5cf-4927-4b13-969a-d11472839c48\") "
Jan 12 13:09:38 crc kubenswrapper[4580]: I0112 13:09:38.812475 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c881b720-5fb3-404c-9ccd-9509936f3451-client-ca" (OuterVolumeSpecName: "client-ca") pod "c881b720-5fb3-404c-9ccd-9509936f3451" (UID: "c881b720-5fb3-404c-9ccd-9509936f3451"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:09:38 crc kubenswrapper[4580]: I0112 13:09:38.812608 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c881b720-5fb3-404c-9ccd-9509936f3451-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "c881b720-5fb3-404c-9ccd-9509936f3451" (UID: "c881b720-5fb3-404c-9ccd-9509936f3451"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:09:38 crc kubenswrapper[4580]: I0112 13:09:38.812823 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4e9f5cf-4927-4b13-969a-d11472839c48-client-ca" (OuterVolumeSpecName: "client-ca") pod "d4e9f5cf-4927-4b13-969a-d11472839c48" (UID: "d4e9f5cf-4927-4b13-969a-d11472839c48"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:09:38 crc kubenswrapper[4580]: I0112 13:09:38.812933 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4e9f5cf-4927-4b13-969a-d11472839c48-config" (OuterVolumeSpecName: "config") pod "d4e9f5cf-4927-4b13-969a-d11472839c48" (UID: "d4e9f5cf-4927-4b13-969a-d11472839c48"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:09:38 crc kubenswrapper[4580]: I0112 13:09:38.813036 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c881b720-5fb3-404c-9ccd-9509936f3451-config" (OuterVolumeSpecName: "config") pod "c881b720-5fb3-404c-9ccd-9509936f3451" (UID: "c881b720-5fb3-404c-9ccd-9509936f3451"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:09:38 crc kubenswrapper[4580]: I0112 13:09:38.815455 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c881b720-5fb3-404c-9ccd-9509936f3451-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c881b720-5fb3-404c-9ccd-9509936f3451" (UID: "c881b720-5fb3-404c-9ccd-9509936f3451"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:09:38 crc kubenswrapper[4580]: I0112 13:09:38.817094 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c881b720-5fb3-404c-9ccd-9509936f3451-kube-api-access-qfrcp" (OuterVolumeSpecName: "kube-api-access-qfrcp") pod "c881b720-5fb3-404c-9ccd-9509936f3451" (UID: "c881b720-5fb3-404c-9ccd-9509936f3451"). InnerVolumeSpecName "kube-api-access-qfrcp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 13:09:38 crc kubenswrapper[4580]: I0112 13:09:38.817561 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4e9f5cf-4927-4b13-969a-d11472839c48-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d4e9f5cf-4927-4b13-969a-d11472839c48" (UID: "d4e9f5cf-4927-4b13-969a-d11472839c48"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:09:38 crc kubenswrapper[4580]: I0112 13:09:38.824136 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4e9f5cf-4927-4b13-969a-d11472839c48-kube-api-access-kw8hm" (OuterVolumeSpecName: "kube-api-access-kw8hm") pod "d4e9f5cf-4927-4b13-969a-d11472839c48" (UID: "d4e9f5cf-4927-4b13-969a-d11472839c48"). InnerVolumeSpecName "kube-api-access-kw8hm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 13:09:38 crc kubenswrapper[4580]: I0112 13:09:38.913590 4580 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c881b720-5fb3-404c-9ccd-9509936f3451-client-ca\") on node \"crc\" DevicePath \"\""
Jan 12 13:09:38 crc kubenswrapper[4580]: I0112 13:09:38.913627 4580 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c881b720-5fb3-404c-9ccd-9509936f3451-config\") on node \"crc\" DevicePath \"\""
Jan 12 13:09:38 crc kubenswrapper[4580]: I0112 13:09:38.913637 4580 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c881b720-5fb3-404c-9ccd-9509936f3451-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 12 13:09:38 crc kubenswrapper[4580]: I0112 13:09:38.913651 4580 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d4e9f5cf-4927-4b13-969a-d11472839c48-client-ca\") on node \"crc\" DevicePath \"\""
Jan 12 13:09:38 crc kubenswrapper[4580]: I0112 13:09:38.913659 4580 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4e9f5cf-4927-4b13-969a-d11472839c48-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 12 13:09:38 crc kubenswrapper[4580]: I0112 13:09:38.913669 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qfrcp\" (UniqueName: \"kubernetes.io/projected/c881b720-5fb3-404c-9ccd-9509936f3451-kube-api-access-qfrcp\") on node \"crc\" DevicePath \"\""
Jan 12 13:09:38 crc kubenswrapper[4580]: I0112 13:09:38.913681 4580 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c881b720-5fb3-404c-9ccd-9509936f3451-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 12 13:09:38 crc kubenswrapper[4580]: I0112 13:09:38.913689 4580 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4e9f5cf-4927-4b13-969a-d11472839c48-config\") on node \"crc\" DevicePath \"\""
Jan 12 13:09:38 crc kubenswrapper[4580]: I0112 13:09:38.913698 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kw8hm\" (UniqueName: \"kubernetes.io/projected/d4e9f5cf-4927-4b13-969a-d11472839c48-kube-api-access-kw8hm\") on node \"crc\" DevicePath \"\""
Jan 12 13:09:39 crc kubenswrapper[4580]: I0112 13:09:39.067458 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-75467d56bf-jlhpc"]
Jan 12 13:09:39 crc kubenswrapper[4580]: I0112 13:09:39.070708 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-75467d56bf-jlhpc"]
Jan 12 13:09:39 crc kubenswrapper[4580]: I0112 13:09:39.096297 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9f876bbff-cdcmk"]
Jan 12 13:09:39 crc kubenswrapper[4580]: I0112 13:09:39.099499 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9f876bbff-cdcmk"]
Jan 12 13:09:39 crc kubenswrapper[4580]: I0112 13:09:39.295702 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c881b720-5fb3-404c-9ccd-9509936f3451" path="/var/lib/kubelet/pods/c881b720-5fb3-404c-9ccd-9509936f3451/volumes"
Jan 12 13:09:39 crc kubenswrapper[4580]: I0112 13:09:39.298580 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4e9f5cf-4927-4b13-969a-d11472839c48" path="/var/lib/kubelet/pods/d4e9f5cf-4927-4b13-969a-d11472839c48/volumes"
Jan 12 13:09:39 crc kubenswrapper[4580]: I0112 13:09:39.527492 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2l4gk"
Jan 12 13:09:39 crc kubenswrapper[4580]: I0112 13:09:39.592860 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-r98fk"
Jan 12 13:09:39 crc kubenswrapper[4580]: I0112 13:09:39.665678 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6584d7575c-2brqv"]
Jan 12 13:09:39 crc kubenswrapper[4580]: E0112 13:09:39.666050 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c881b720-5fb3-404c-9ccd-9509936f3451" containerName="controller-manager"
Jan 12 13:09:39 crc kubenswrapper[4580]: I0112 13:09:39.666075 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="c881b720-5fb3-404c-9ccd-9509936f3451" containerName="controller-manager"
Jan 12 13:09:39 crc kubenswrapper[4580]: E0112 13:09:39.666096 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcee830f-a8da-4e16-95ca-fdaa8dbd86df" containerName="registry-server"
Jan 12 13:09:39 crc kubenswrapper[4580]: I0112 13:09:39.666121 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcee830f-a8da-4e16-95ca-fdaa8dbd86df" containerName="registry-server"
Jan 12 13:09:39 crc kubenswrapper[4580]: E0112 13:09:39.666133 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baa87674-7b1c-4327-92b5-fe9ebaac18f5" containerName="registry-server"
Jan 12 13:09:39 crc kubenswrapper[4580]: I0112 13:09:39.666140 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="baa87674-7b1c-4327-92b5-fe9ebaac18f5" containerName="registry-server"
Jan 12 13:09:39 crc kubenswrapper[4580]: E0112 13:09:39.666155 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baa87674-7b1c-4327-92b5-fe9ebaac18f5" containerName="extract-utilities"
Jan 12 13:09:39 crc kubenswrapper[4580]: I0112 13:09:39.666162 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="baa87674-7b1c-4327-92b5-fe9ebaac18f5" containerName="extract-utilities"
Jan 12 13:09:39 crc kubenswrapper[4580]: E0112 13:09:39.666171 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baa87674-7b1c-4327-92b5-fe9ebaac18f5" containerName="extract-content"
Jan 12 13:09:39 crc kubenswrapper[4580]: I0112 13:09:39.666177 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="baa87674-7b1c-4327-92b5-fe9ebaac18f5" containerName="extract-content"
Jan 12 13:09:39 crc kubenswrapper[4580]: E0112 13:09:39.666186 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4e9f5cf-4927-4b13-969a-d11472839c48" containerName="route-controller-manager"
Jan 12 13:09:39 crc kubenswrapper[4580]: I0112 13:09:39.666193 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4e9f5cf-4927-4b13-969a-d11472839c48" containerName="route-controller-manager"
Jan 12 13:09:39 crc kubenswrapper[4580]: E0112 13:09:39.666207 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcee830f-a8da-4e16-95ca-fdaa8dbd86df" containerName="extract-utilities"
Jan 12 13:09:39 crc kubenswrapper[4580]: I0112 13:09:39.666216 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcee830f-a8da-4e16-95ca-fdaa8dbd86df" containerName="extract-utilities"
Jan 12 13:09:39 crc kubenswrapper[4580]: E0112 13:09:39.666228 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcee830f-a8da-4e16-95ca-fdaa8dbd86df" containerName="extract-content"
Jan 12 13:09:39 crc kubenswrapper[4580]: I0112 13:09:39.666241 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcee830f-a8da-4e16-95ca-fdaa8dbd86df" containerName="extract-content"
Jan 12 13:09:39 crc kubenswrapper[4580]: I0112 13:09:39.666386 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="baa87674-7b1c-4327-92b5-fe9ebaac18f5" containerName="registry-server"
Jan 12 13:09:39 crc kubenswrapper[4580]: I0112 13:09:39.666401 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4e9f5cf-4927-4b13-969a-d11472839c48" containerName="route-controller-manager"
Jan 12 13:09:39 crc kubenswrapper[4580]: I0112 13:09:39.666411 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="c881b720-5fb3-404c-9ccd-9509936f3451" containerName="controller-manager"
Jan 12 13:09:39 crc kubenswrapper[4580]: I0112 13:09:39.666426 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcee830f-a8da-4e16-95ca-fdaa8dbd86df" containerName="registry-server"
Jan 12 13:09:39 crc kubenswrapper[4580]: I0112 13:09:39.666997 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6584d7575c-2brqv"
Jan 12 13:09:39 crc kubenswrapper[4580]: I0112 13:09:39.669819 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 12 13:09:39 crc kubenswrapper[4580]: I0112 13:09:39.669850 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 12 13:09:39 crc kubenswrapper[4580]: I0112 13:09:39.669969 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 12 13:09:39 crc kubenswrapper[4580]: I0112 13:09:39.670008 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57f5b6cb79-7wp24"]
Jan 12 13:09:39 crc kubenswrapper[4580]: I0112 13:09:39.670869 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 12 13:09:39 crc kubenswrapper[4580]: I0112 13:09:39.671096 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 12 13:09:39 crc kubenswrapper[4580]: I0112 13:09:39.671525 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57f5b6cb79-7wp24"
Jan 12 13:09:39 crc kubenswrapper[4580]: I0112 13:09:39.673794 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 12 13:09:39 crc kubenswrapper[4580]: I0112 13:09:39.674655 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 12 13:09:39 crc kubenswrapper[4580]: I0112 13:09:39.675447 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 12 13:09:39 crc kubenswrapper[4580]: I0112 13:09:39.676546 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 12 13:09:39 crc kubenswrapper[4580]: I0112 13:09:39.676556 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 12 13:09:39 crc kubenswrapper[4580]: I0112 13:09:39.676598 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 12 13:09:39 crc kubenswrapper[4580]: I0112 13:09:39.677817 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57f5b6cb79-7wp24"]
Jan 12 13:09:39 crc kubenswrapper[4580]: I0112 13:09:39.679800 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 12 13:09:39 crc kubenswrapper[4580]: I0112 13:09:39.681324 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6584d7575c-2brqv"]
Jan 12 13:09:39 crc kubenswrapper[4580]: I0112 13:09:39.685601 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 12 13:09:39 crc kubenswrapper[4580]: I0112 13:09:39.722601 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/70742540-4546-4618-9f2a-d79d4527687b-proxy-ca-bundles\") pod \"controller-manager-6584d7575c-2brqv\" (UID: \"70742540-4546-4618-9f2a-d79d4527687b\") " pod="openshift-controller-manager/controller-manager-6584d7575c-2brqv"
Jan 12 13:09:39 crc kubenswrapper[4580]: I0112 13:09:39.722708 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40f7ea29-3854-4215-b4f2-92132f43f1e5-serving-cert\") pod \"route-controller-manager-57f5b6cb79-7wp24\" (UID: \"40f7ea29-3854-4215-b4f2-92132f43f1e5\") " pod="openshift-route-controller-manager/route-controller-manager-57f5b6cb79-7wp24"
Jan 12 13:09:39 crc kubenswrapper[4580]: I0112 13:09:39.722768 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwj45\" (UniqueName: \"kubernetes.io/projected/40f7ea29-3854-4215-b4f2-92132f43f1e5-kube-api-access-dwj45\") pod \"route-controller-manager-57f5b6cb79-7wp24\" (UID: \"40f7ea29-3854-4215-b4f2-92132f43f1e5\") " pod="openshift-route-controller-manager/route-controller-manager-57f5b6cb79-7wp24"
Jan 12 13:09:39 crc kubenswrapper[4580]: I0112 13:09:39.722820 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70742540-4546-4618-9f2a-d79d4527687b-config\") pod \"controller-manager-6584d7575c-2brqv\" (UID: \"70742540-4546-4618-9f2a-d79d4527687b\") " pod="openshift-controller-manager/controller-manager-6584d7575c-2brqv"
Jan 12 13:09:39 crc kubenswrapper[4580]: I0112 13:09:39.723090 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70742540-4546-4618-9f2a-d79d4527687b-serving-cert\") pod \"controller-manager-6584d7575c-2brqv\" (UID: \"70742540-4546-4618-9f2a-d79d4527687b\") " pod="openshift-controller-manager/controller-manager-6584d7575c-2brqv" Jan 12 13:09:39 crc kubenswrapper[4580]: I0112 13:09:39.723170 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/70742540-4546-4618-9f2a-d79d4527687b-client-ca\") pod \"controller-manager-6584d7575c-2brqv\" (UID: \"70742540-4546-4618-9f2a-d79d4527687b\") " pod="openshift-controller-manager/controller-manager-6584d7575c-2brqv" Jan 12 13:09:39 crc kubenswrapper[4580]: I0112 13:09:39.723312 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpw2n\" (UniqueName: \"kubernetes.io/projected/70742540-4546-4618-9f2a-d79d4527687b-kube-api-access-fpw2n\") pod \"controller-manager-6584d7575c-2brqv\" (UID: \"70742540-4546-4618-9f2a-d79d4527687b\") " pod="openshift-controller-manager/controller-manager-6584d7575c-2brqv" Jan 12 13:09:39 crc kubenswrapper[4580]: I0112 13:09:39.723366 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40f7ea29-3854-4215-b4f2-92132f43f1e5-config\") pod \"route-controller-manager-57f5b6cb79-7wp24\" (UID: \"40f7ea29-3854-4215-b4f2-92132f43f1e5\") " pod="openshift-route-controller-manager/route-controller-manager-57f5b6cb79-7wp24" Jan 12 13:09:39 crc kubenswrapper[4580]: I0112 13:09:39.723529 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/40f7ea29-3854-4215-b4f2-92132f43f1e5-client-ca\") pod \"route-controller-manager-57f5b6cb79-7wp24\" (UID: \"40f7ea29-3854-4215-b4f2-92132f43f1e5\") " 
pod="openshift-route-controller-manager/route-controller-manager-57f5b6cb79-7wp24" Jan 12 13:09:39 crc kubenswrapper[4580]: I0112 13:09:39.824206 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70742540-4546-4618-9f2a-d79d4527687b-serving-cert\") pod \"controller-manager-6584d7575c-2brqv\" (UID: \"70742540-4546-4618-9f2a-d79d4527687b\") " pod="openshift-controller-manager/controller-manager-6584d7575c-2brqv" Jan 12 13:09:39 crc kubenswrapper[4580]: I0112 13:09:39.824269 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/70742540-4546-4618-9f2a-d79d4527687b-client-ca\") pod \"controller-manager-6584d7575c-2brqv\" (UID: \"70742540-4546-4618-9f2a-d79d4527687b\") " pod="openshift-controller-manager/controller-manager-6584d7575c-2brqv" Jan 12 13:09:39 crc kubenswrapper[4580]: I0112 13:09:39.824325 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpw2n\" (UniqueName: \"kubernetes.io/projected/70742540-4546-4618-9f2a-d79d4527687b-kube-api-access-fpw2n\") pod \"controller-manager-6584d7575c-2brqv\" (UID: \"70742540-4546-4618-9f2a-d79d4527687b\") " pod="openshift-controller-manager/controller-manager-6584d7575c-2brqv" Jan 12 13:09:39 crc kubenswrapper[4580]: I0112 13:09:39.824360 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40f7ea29-3854-4215-b4f2-92132f43f1e5-config\") pod \"route-controller-manager-57f5b6cb79-7wp24\" (UID: \"40f7ea29-3854-4215-b4f2-92132f43f1e5\") " pod="openshift-route-controller-manager/route-controller-manager-57f5b6cb79-7wp24" Jan 12 13:09:39 crc kubenswrapper[4580]: I0112 13:09:39.824386 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/40f7ea29-3854-4215-b4f2-92132f43f1e5-client-ca\") pod \"route-controller-manager-57f5b6cb79-7wp24\" (UID: \"40f7ea29-3854-4215-b4f2-92132f43f1e5\") " pod="openshift-route-controller-manager/route-controller-manager-57f5b6cb79-7wp24" Jan 12 13:09:39 crc kubenswrapper[4580]: I0112 13:09:39.824418 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/70742540-4546-4618-9f2a-d79d4527687b-proxy-ca-bundles\") pod \"controller-manager-6584d7575c-2brqv\" (UID: \"70742540-4546-4618-9f2a-d79d4527687b\") " pod="openshift-controller-manager/controller-manager-6584d7575c-2brqv" Jan 12 13:09:39 crc kubenswrapper[4580]: I0112 13:09:39.824452 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40f7ea29-3854-4215-b4f2-92132f43f1e5-serving-cert\") pod \"route-controller-manager-57f5b6cb79-7wp24\" (UID: \"40f7ea29-3854-4215-b4f2-92132f43f1e5\") " pod="openshift-route-controller-manager/route-controller-manager-57f5b6cb79-7wp24" Jan 12 13:09:39 crc kubenswrapper[4580]: I0112 13:09:39.824488 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwj45\" (UniqueName: \"kubernetes.io/projected/40f7ea29-3854-4215-b4f2-92132f43f1e5-kube-api-access-dwj45\") pod \"route-controller-manager-57f5b6cb79-7wp24\" (UID: \"40f7ea29-3854-4215-b4f2-92132f43f1e5\") " pod="openshift-route-controller-manager/route-controller-manager-57f5b6cb79-7wp24" Jan 12 13:09:39 crc kubenswrapper[4580]: I0112 13:09:39.824518 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70742540-4546-4618-9f2a-d79d4527687b-config\") pod \"controller-manager-6584d7575c-2brqv\" (UID: \"70742540-4546-4618-9f2a-d79d4527687b\") " pod="openshift-controller-manager/controller-manager-6584d7575c-2brqv" Jan 12 13:09:39 crc 
kubenswrapper[4580]: I0112 13:09:39.825896 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/70742540-4546-4618-9f2a-d79d4527687b-client-ca\") pod \"controller-manager-6584d7575c-2brqv\" (UID: \"70742540-4546-4618-9f2a-d79d4527687b\") " pod="openshift-controller-manager/controller-manager-6584d7575c-2brqv" Jan 12 13:09:39 crc kubenswrapper[4580]: I0112 13:09:39.826006 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/40f7ea29-3854-4215-b4f2-92132f43f1e5-client-ca\") pod \"route-controller-manager-57f5b6cb79-7wp24\" (UID: \"40f7ea29-3854-4215-b4f2-92132f43f1e5\") " pod="openshift-route-controller-manager/route-controller-manager-57f5b6cb79-7wp24" Jan 12 13:09:39 crc kubenswrapper[4580]: I0112 13:09:39.826299 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40f7ea29-3854-4215-b4f2-92132f43f1e5-config\") pod \"route-controller-manager-57f5b6cb79-7wp24\" (UID: \"40f7ea29-3854-4215-b4f2-92132f43f1e5\") " pod="openshift-route-controller-manager/route-controller-manager-57f5b6cb79-7wp24" Jan 12 13:09:39 crc kubenswrapper[4580]: I0112 13:09:39.826514 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70742540-4546-4618-9f2a-d79d4527687b-config\") pod \"controller-manager-6584d7575c-2brqv\" (UID: \"70742540-4546-4618-9f2a-d79d4527687b\") " pod="openshift-controller-manager/controller-manager-6584d7575c-2brqv" Jan 12 13:09:39 crc kubenswrapper[4580]: I0112 13:09:39.828504 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/70742540-4546-4618-9f2a-d79d4527687b-proxy-ca-bundles\") pod \"controller-manager-6584d7575c-2brqv\" (UID: \"70742540-4546-4618-9f2a-d79d4527687b\") " 
pod="openshift-controller-manager/controller-manager-6584d7575c-2brqv" Jan 12 13:09:39 crc kubenswrapper[4580]: I0112 13:09:39.830142 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40f7ea29-3854-4215-b4f2-92132f43f1e5-serving-cert\") pod \"route-controller-manager-57f5b6cb79-7wp24\" (UID: \"40f7ea29-3854-4215-b4f2-92132f43f1e5\") " pod="openshift-route-controller-manager/route-controller-manager-57f5b6cb79-7wp24" Jan 12 13:09:39 crc kubenswrapper[4580]: I0112 13:09:39.830700 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70742540-4546-4618-9f2a-d79d4527687b-serving-cert\") pod \"controller-manager-6584d7575c-2brqv\" (UID: \"70742540-4546-4618-9f2a-d79d4527687b\") " pod="openshift-controller-manager/controller-manager-6584d7575c-2brqv" Jan 12 13:09:39 crc kubenswrapper[4580]: I0112 13:09:39.839980 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpw2n\" (UniqueName: \"kubernetes.io/projected/70742540-4546-4618-9f2a-d79d4527687b-kube-api-access-fpw2n\") pod \"controller-manager-6584d7575c-2brqv\" (UID: \"70742540-4546-4618-9f2a-d79d4527687b\") " pod="openshift-controller-manager/controller-manager-6584d7575c-2brqv" Jan 12 13:09:39 crc kubenswrapper[4580]: I0112 13:09:39.839998 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwj45\" (UniqueName: \"kubernetes.io/projected/40f7ea29-3854-4215-b4f2-92132f43f1e5-kube-api-access-dwj45\") pod \"route-controller-manager-57f5b6cb79-7wp24\" (UID: \"40f7ea29-3854-4215-b4f2-92132f43f1e5\") " pod="openshift-route-controller-manager/route-controller-manager-57f5b6cb79-7wp24" Jan 12 13:09:39 crc kubenswrapper[4580]: I0112 13:09:39.896478 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-drwn8" Jan 12 13:09:39 crc 
kubenswrapper[4580]: I0112 13:09:39.985753 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6584d7575c-2brqv" Jan 12 13:09:39 crc kubenswrapper[4580]: I0112 13:09:39.993947 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57f5b6cb79-7wp24" Jan 12 13:09:40 crc kubenswrapper[4580]: I0112 13:09:40.082379 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gjqmq" Jan 12 13:09:40 crc kubenswrapper[4580]: I0112 13:09:40.183315 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57f5b6cb79-7wp24"] Jan 12 13:09:40 crc kubenswrapper[4580]: W0112 13:09:40.191483 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40f7ea29_3854_4215_b4f2_92132f43f1e5.slice/crio-0a40818debfeca778e457b2ddbdde926f3cb47b0b0b7bc8d7610a2b11b13e604 WatchSource:0}: Error finding container 0a40818debfeca778e457b2ddbdde926f3cb47b0b0b7bc8d7610a2b11b13e604: Status 404 returned error can't find the container with id 0a40818debfeca778e457b2ddbdde926f3cb47b0b0b7bc8d7610a2b11b13e604 Jan 12 13:09:40 crc kubenswrapper[4580]: I0112 13:09:40.452220 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6584d7575c-2brqv"] Jan 12 13:09:40 crc kubenswrapper[4580]: W0112 13:09:40.460312 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70742540_4546_4618_9f2a_d79d4527687b.slice/crio-c66909536f52c4b1177ef3f589e685f4ed1193c8de8db9474a159e448817d3cc WatchSource:0}: Error finding container c66909536f52c4b1177ef3f589e685f4ed1193c8de8db9474a159e448817d3cc: Status 404 returned error can't find the container 
with id c66909536f52c4b1177ef3f589e685f4ed1193c8de8db9474a159e448817d3cc Jan 12 13:09:40 crc kubenswrapper[4580]: I0112 13:09:40.753526 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6584d7575c-2brqv" event={"ID":"70742540-4546-4618-9f2a-d79d4527687b","Type":"ContainerStarted","Data":"e46077da5a14191b91a00e2e4cc5aac4eca2f77f8cbd18a50d9b34806973e36e"} Jan 12 13:09:40 crc kubenswrapper[4580]: I0112 13:09:40.753768 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6584d7575c-2brqv" event={"ID":"70742540-4546-4618-9f2a-d79d4527687b","Type":"ContainerStarted","Data":"c66909536f52c4b1177ef3f589e685f4ed1193c8de8db9474a159e448817d3cc"} Jan 12 13:09:40 crc kubenswrapper[4580]: I0112 13:09:40.754194 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6584d7575c-2brqv" Jan 12 13:09:40 crc kubenswrapper[4580]: I0112 13:09:40.755255 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57f5b6cb79-7wp24" event={"ID":"40f7ea29-3854-4215-b4f2-92132f43f1e5","Type":"ContainerStarted","Data":"2dead4f47378a3045a110ec4524ca94c616667a685cf935bb612919d7594a677"} Jan 12 13:09:40 crc kubenswrapper[4580]: I0112 13:09:40.755342 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57f5b6cb79-7wp24" event={"ID":"40f7ea29-3854-4215-b4f2-92132f43f1e5","Type":"ContainerStarted","Data":"0a40818debfeca778e457b2ddbdde926f3cb47b0b0b7bc8d7610a2b11b13e604"} Jan 12 13:09:40 crc kubenswrapper[4580]: I0112 13:09:40.755742 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-57f5b6cb79-7wp24" Jan 12 13:09:40 crc kubenswrapper[4580]: I0112 13:09:40.763051 4580 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-57f5b6cb79-7wp24" Jan 12 13:09:40 crc kubenswrapper[4580]: I0112 13:09:40.770117 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6584d7575c-2brqv" Jan 12 13:09:40 crc kubenswrapper[4580]: I0112 13:09:40.790339 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6584d7575c-2brqv" podStartSLOduration=2.7903215169999998 podStartE2EDuration="2.790321517s" podCreationTimestamp="2026-01-12 13:09:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:09:40.789595187 +0000 UTC m=+179.833813876" watchObservedRunningTime="2026-01-12 13:09:40.790321517 +0000 UTC m=+179.834540207" Jan 12 13:09:40 crc kubenswrapper[4580]: I0112 13:09:40.807621 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-57f5b6cb79-7wp24" podStartSLOduration=2.807605933 podStartE2EDuration="2.807605933s" podCreationTimestamp="2026-01-12 13:09:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:09:40.80198111 +0000 UTC m=+179.846199800" watchObservedRunningTime="2026-01-12 13:09:40.807605933 +0000 UTC m=+179.851824623" Jan 12 13:09:42 crc kubenswrapper[4580]: I0112 13:09:42.211018 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gjqmq"] Jan 12 13:09:42 crc kubenswrapper[4580]: I0112 13:09:42.212616 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gjqmq" podUID="0b558a84-c6d4-42fe-b9d7-0dd5d63f3064" containerName="registry-server" 
containerID="cri-o://2fd18632453241117d3ef76f2b90edd1983c324e03502df844c78227e4ae2c07" gracePeriod=2 Jan 12 13:09:42 crc kubenswrapper[4580]: I0112 13:09:42.412736 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-drwn8"] Jan 12 13:09:42 crc kubenswrapper[4580]: I0112 13:09:42.412962 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-drwn8" podUID="adf8dbed-2e00-49c7-90f3-62f85ef5e078" containerName="registry-server" containerID="cri-o://86cc2fb65e3cf5dd3c293ac63deddd26ddeae3c90feb2c3447be654c2c335f59" gracePeriod=2 Jan 12 13:09:42 crc kubenswrapper[4580]: E0112 13:09:42.569786 4580 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podadf8dbed_2e00_49c7_90f3_62f85ef5e078.slice/crio-conmon-86cc2fb65e3cf5dd3c293ac63deddd26ddeae3c90feb2c3447be654c2c335f59.scope\": RecentStats: unable to find data in memory cache]" Jan 12 13:09:42 crc kubenswrapper[4580]: I0112 13:09:42.616444 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gjqmq" Jan 12 13:09:42 crc kubenswrapper[4580]: I0112 13:09:42.760363 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b558a84-c6d4-42fe-b9d7-0dd5d63f3064-utilities\") pod \"0b558a84-c6d4-42fe-b9d7-0dd5d63f3064\" (UID: \"0b558a84-c6d4-42fe-b9d7-0dd5d63f3064\") " Jan 12 13:09:42 crc kubenswrapper[4580]: I0112 13:09:42.760409 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmj47\" (UniqueName: \"kubernetes.io/projected/0b558a84-c6d4-42fe-b9d7-0dd5d63f3064-kube-api-access-fmj47\") pod \"0b558a84-c6d4-42fe-b9d7-0dd5d63f3064\" (UID: \"0b558a84-c6d4-42fe-b9d7-0dd5d63f3064\") " Jan 12 13:09:42 crc kubenswrapper[4580]: I0112 13:09:42.760466 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b558a84-c6d4-42fe-b9d7-0dd5d63f3064-catalog-content\") pod \"0b558a84-c6d4-42fe-b9d7-0dd5d63f3064\" (UID: \"0b558a84-c6d4-42fe-b9d7-0dd5d63f3064\") " Jan 12 13:09:42 crc kubenswrapper[4580]: I0112 13:09:42.761388 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b558a84-c6d4-42fe-b9d7-0dd5d63f3064-utilities" (OuterVolumeSpecName: "utilities") pod "0b558a84-c6d4-42fe-b9d7-0dd5d63f3064" (UID: "0b558a84-c6d4-42fe-b9d7-0dd5d63f3064"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 12 13:09:42 crc kubenswrapper[4580]: I0112 13:09:42.767700 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b558a84-c6d4-42fe-b9d7-0dd5d63f3064-kube-api-access-fmj47" (OuterVolumeSpecName: "kube-api-access-fmj47") pod "0b558a84-c6d4-42fe-b9d7-0dd5d63f3064" (UID: "0b558a84-c6d4-42fe-b9d7-0dd5d63f3064"). InnerVolumeSpecName "kube-api-access-fmj47". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:09:42 crc kubenswrapper[4580]: I0112 13:09:42.774080 4580 generic.go:334] "Generic (PLEG): container finished" podID="0b558a84-c6d4-42fe-b9d7-0dd5d63f3064" containerID="2fd18632453241117d3ef76f2b90edd1983c324e03502df844c78227e4ae2c07" exitCode=0 Jan 12 13:09:42 crc kubenswrapper[4580]: I0112 13:09:42.774172 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gjqmq" Jan 12 13:09:42 crc kubenswrapper[4580]: I0112 13:09:42.774193 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjqmq" event={"ID":"0b558a84-c6d4-42fe-b9d7-0dd5d63f3064","Type":"ContainerDied","Data":"2fd18632453241117d3ef76f2b90edd1983c324e03502df844c78227e4ae2c07"} Jan 12 13:09:42 crc kubenswrapper[4580]: I0112 13:09:42.774242 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjqmq" event={"ID":"0b558a84-c6d4-42fe-b9d7-0dd5d63f3064","Type":"ContainerDied","Data":"92b1be27d43c1372582ea8f85cf5d3c8e343c33f73938e78c2953564b7665836"} Jan 12 13:09:42 crc kubenswrapper[4580]: I0112 13:09:42.774263 4580 scope.go:117] "RemoveContainer" containerID="2fd18632453241117d3ef76f2b90edd1983c324e03502df844c78227e4ae2c07" Jan 12 13:09:42 crc kubenswrapper[4580]: I0112 13:09:42.780272 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-drwn8" event={"ID":"adf8dbed-2e00-49c7-90f3-62f85ef5e078","Type":"ContainerDied","Data":"86cc2fb65e3cf5dd3c293ac63deddd26ddeae3c90feb2c3447be654c2c335f59"} Jan 12 13:09:42 crc kubenswrapper[4580]: I0112 13:09:42.780380 4580 generic.go:334] "Generic (PLEG): container finished" podID="adf8dbed-2e00-49c7-90f3-62f85ef5e078" containerID="86cc2fb65e3cf5dd3c293ac63deddd26ddeae3c90feb2c3447be654c2c335f59" exitCode=0 Jan 12 13:09:42 crc kubenswrapper[4580]: I0112 13:09:42.792737 4580 scope.go:117] 
"RemoveContainer" containerID="d5cda74e3aee77cfcb6c1c3e7eed2a967faa4db35c79433814ebd7524c03e7b5" Jan 12 13:09:42 crc kubenswrapper[4580]: I0112 13:09:42.801856 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b558a84-c6d4-42fe-b9d7-0dd5d63f3064-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0b558a84-c6d4-42fe-b9d7-0dd5d63f3064" (UID: "0b558a84-c6d4-42fe-b9d7-0dd5d63f3064"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 12 13:09:42 crc kubenswrapper[4580]: I0112 13:09:42.814895 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-drwn8" Jan 12 13:09:42 crc kubenswrapper[4580]: I0112 13:09:42.816256 4580 scope.go:117] "RemoveContainer" containerID="113bfd4e3f334eeaf5adeb777c413130a008634ae484d3bd0c3cd4a90b18fb33" Jan 12 13:09:42 crc kubenswrapper[4580]: I0112 13:09:42.829153 4580 scope.go:117] "RemoveContainer" containerID="2fd18632453241117d3ef76f2b90edd1983c324e03502df844c78227e4ae2c07" Jan 12 13:09:42 crc kubenswrapper[4580]: E0112 13:09:42.829560 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fd18632453241117d3ef76f2b90edd1983c324e03502df844c78227e4ae2c07\": container with ID starting with 2fd18632453241117d3ef76f2b90edd1983c324e03502df844c78227e4ae2c07 not found: ID does not exist" containerID="2fd18632453241117d3ef76f2b90edd1983c324e03502df844c78227e4ae2c07" Jan 12 13:09:42 crc kubenswrapper[4580]: I0112 13:09:42.829602 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fd18632453241117d3ef76f2b90edd1983c324e03502df844c78227e4ae2c07"} err="failed to get container status \"2fd18632453241117d3ef76f2b90edd1983c324e03502df844c78227e4ae2c07\": rpc error: code = NotFound desc = could not find container 
\"2fd18632453241117d3ef76f2b90edd1983c324e03502df844c78227e4ae2c07\": container with ID starting with 2fd18632453241117d3ef76f2b90edd1983c324e03502df844c78227e4ae2c07 not found: ID does not exist" Jan 12 13:09:42 crc kubenswrapper[4580]: I0112 13:09:42.829622 4580 scope.go:117] "RemoveContainer" containerID="d5cda74e3aee77cfcb6c1c3e7eed2a967faa4db35c79433814ebd7524c03e7b5" Jan 12 13:09:42 crc kubenswrapper[4580]: E0112 13:09:42.829918 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5cda74e3aee77cfcb6c1c3e7eed2a967faa4db35c79433814ebd7524c03e7b5\": container with ID starting with d5cda74e3aee77cfcb6c1c3e7eed2a967faa4db35c79433814ebd7524c03e7b5 not found: ID does not exist" containerID="d5cda74e3aee77cfcb6c1c3e7eed2a967faa4db35c79433814ebd7524c03e7b5" Jan 12 13:09:42 crc kubenswrapper[4580]: I0112 13:09:42.829949 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5cda74e3aee77cfcb6c1c3e7eed2a967faa4db35c79433814ebd7524c03e7b5"} err="failed to get container status \"d5cda74e3aee77cfcb6c1c3e7eed2a967faa4db35c79433814ebd7524c03e7b5\": rpc error: code = NotFound desc = could not find container \"d5cda74e3aee77cfcb6c1c3e7eed2a967faa4db35c79433814ebd7524c03e7b5\": container with ID starting with d5cda74e3aee77cfcb6c1c3e7eed2a967faa4db35c79433814ebd7524c03e7b5 not found: ID does not exist" Jan 12 13:09:42 crc kubenswrapper[4580]: I0112 13:09:42.829978 4580 scope.go:117] "RemoveContainer" containerID="113bfd4e3f334eeaf5adeb777c413130a008634ae484d3bd0c3cd4a90b18fb33" Jan 12 13:09:42 crc kubenswrapper[4580]: E0112 13:09:42.830289 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"113bfd4e3f334eeaf5adeb777c413130a008634ae484d3bd0c3cd4a90b18fb33\": container with ID starting with 113bfd4e3f334eeaf5adeb777c413130a008634ae484d3bd0c3cd4a90b18fb33 not found: ID does not exist" 
containerID="113bfd4e3f334eeaf5adeb777c413130a008634ae484d3bd0c3cd4a90b18fb33" Jan 12 13:09:42 crc kubenswrapper[4580]: I0112 13:09:42.830343 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"113bfd4e3f334eeaf5adeb777c413130a008634ae484d3bd0c3cd4a90b18fb33"} err="failed to get container status \"113bfd4e3f334eeaf5adeb777c413130a008634ae484d3bd0c3cd4a90b18fb33\": rpc error: code = NotFound desc = could not find container \"113bfd4e3f334eeaf5adeb777c413130a008634ae484d3bd0c3cd4a90b18fb33\": container with ID starting with 113bfd4e3f334eeaf5adeb777c413130a008634ae484d3bd0c3cd4a90b18fb33 not found: ID does not exist" Jan 12 13:09:42 crc kubenswrapper[4580]: I0112 13:09:42.862176 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lft6q\" (UniqueName: \"kubernetes.io/projected/adf8dbed-2e00-49c7-90f3-62f85ef5e078-kube-api-access-lft6q\") pod \"adf8dbed-2e00-49c7-90f3-62f85ef5e078\" (UID: \"adf8dbed-2e00-49c7-90f3-62f85ef5e078\") " Jan 12 13:09:42 crc kubenswrapper[4580]: I0112 13:09:42.862229 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adf8dbed-2e00-49c7-90f3-62f85ef5e078-catalog-content\") pod \"adf8dbed-2e00-49c7-90f3-62f85ef5e078\" (UID: \"adf8dbed-2e00-49c7-90f3-62f85ef5e078\") " Jan 12 13:09:42 crc kubenswrapper[4580]: I0112 13:09:42.863956 4580 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b558a84-c6d4-42fe-b9d7-0dd5d63f3064-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 12 13:09:42 crc kubenswrapper[4580]: I0112 13:09:42.864003 4580 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b558a84-c6d4-42fe-b9d7-0dd5d63f3064-utilities\") on node \"crc\" DevicePath \"\"" Jan 12 13:09:42 crc kubenswrapper[4580]: I0112 13:09:42.864017 4580 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fmj47\" (UniqueName: \"kubernetes.io/projected/0b558a84-c6d4-42fe-b9d7-0dd5d63f3064-kube-api-access-fmj47\") on node \"crc\" DevicePath \"\"" Jan 12 13:09:42 crc kubenswrapper[4580]: I0112 13:09:42.864384 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adf8dbed-2e00-49c7-90f3-62f85ef5e078-kube-api-access-lft6q" (OuterVolumeSpecName: "kube-api-access-lft6q") pod "adf8dbed-2e00-49c7-90f3-62f85ef5e078" (UID: "adf8dbed-2e00-49c7-90f3-62f85ef5e078"). InnerVolumeSpecName "kube-api-access-lft6q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:09:42 crc kubenswrapper[4580]: I0112 13:09:42.904041 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/adf8dbed-2e00-49c7-90f3-62f85ef5e078-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "adf8dbed-2e00-49c7-90f3-62f85ef5e078" (UID: "adf8dbed-2e00-49c7-90f3-62f85ef5e078"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 12 13:09:42 crc kubenswrapper[4580]: I0112 13:09:42.965130 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adf8dbed-2e00-49c7-90f3-62f85ef5e078-utilities\") pod \"adf8dbed-2e00-49c7-90f3-62f85ef5e078\" (UID: \"adf8dbed-2e00-49c7-90f3-62f85ef5e078\") " Jan 12 13:09:42 crc kubenswrapper[4580]: I0112 13:09:42.965609 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lft6q\" (UniqueName: \"kubernetes.io/projected/adf8dbed-2e00-49c7-90f3-62f85ef5e078-kube-api-access-lft6q\") on node \"crc\" DevicePath \"\"" Jan 12 13:09:42 crc kubenswrapper[4580]: I0112 13:09:42.965633 4580 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adf8dbed-2e00-49c7-90f3-62f85ef5e078-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 12 13:09:42 crc kubenswrapper[4580]: I0112 13:09:42.965920 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/adf8dbed-2e00-49c7-90f3-62f85ef5e078-utilities" (OuterVolumeSpecName: "utilities") pod "adf8dbed-2e00-49c7-90f3-62f85ef5e078" (UID: "adf8dbed-2e00-49c7-90f3-62f85ef5e078"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 12 13:09:43 crc kubenswrapper[4580]: I0112 13:09:43.066454 4580 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adf8dbed-2e00-49c7-90f3-62f85ef5e078-utilities\") on node \"crc\" DevicePath \"\"" Jan 12 13:09:43 crc kubenswrapper[4580]: I0112 13:09:43.099299 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gjqmq"] Jan 12 13:09:43 crc kubenswrapper[4580]: I0112 13:09:43.105430 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gjqmq"] Jan 12 13:09:43 crc kubenswrapper[4580]: I0112 13:09:43.289552 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b558a84-c6d4-42fe-b9d7-0dd5d63f3064" path="/var/lib/kubelet/pods/0b558a84-c6d4-42fe-b9d7-0dd5d63f3064/volumes" Jan 12 13:09:43 crc kubenswrapper[4580]: I0112 13:09:43.789807 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-drwn8" event={"ID":"adf8dbed-2e00-49c7-90f3-62f85ef5e078","Type":"ContainerDied","Data":"a8f807914a3538b53b53c02bc7273ac4a4903ab777c732abb418df65895499bf"} Jan 12 13:09:43 crc kubenswrapper[4580]: I0112 13:09:43.789843 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-drwn8" Jan 12 13:09:43 crc kubenswrapper[4580]: I0112 13:09:43.790438 4580 scope.go:117] "RemoveContainer" containerID="86cc2fb65e3cf5dd3c293ac63deddd26ddeae3c90feb2c3447be654c2c335f59" Jan 12 13:09:43 crc kubenswrapper[4580]: I0112 13:09:43.804823 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-drwn8"] Jan 12 13:09:43 crc kubenswrapper[4580]: I0112 13:09:43.807683 4580 scope.go:117] "RemoveContainer" containerID="1a4b35aa22daf45c6354dfa9a5bcd4748cec8e68cd44e7b2415a0d1dc9566e94" Jan 12 13:09:43 crc kubenswrapper[4580]: I0112 13:09:43.810319 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-drwn8"] Jan 12 13:09:43 crc kubenswrapper[4580]: I0112 13:09:43.819780 4580 scope.go:117] "RemoveContainer" containerID="323c587f2b49555ead9c3f16f974e23314a3480a0f6e33b7a8ad71315ae4501d" Jan 12 13:09:45 crc kubenswrapper[4580]: I0112 13:09:45.293370 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adf8dbed-2e00-49c7-90f3-62f85ef5e078" path="/var/lib/kubelet/pods/adf8dbed-2e00-49c7-90f3-62f85ef5e078/volumes" Jan 12 13:09:45 crc kubenswrapper[4580]: I0112 13:09:45.454626 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8sbrm"] Jan 12 13:09:46 crc kubenswrapper[4580]: I0112 13:09:46.554587 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 12 13:09:46 crc kubenswrapper[4580]: E0112 13:09:46.554801 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adf8dbed-2e00-49c7-90f3-62f85ef5e078" containerName="registry-server" Jan 12 13:09:46 crc kubenswrapper[4580]: I0112 13:09:46.554814 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="adf8dbed-2e00-49c7-90f3-62f85ef5e078" containerName="registry-server" Jan 12 13:09:46 crc 
kubenswrapper[4580]: E0112 13:09:46.554822 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adf8dbed-2e00-49c7-90f3-62f85ef5e078" containerName="extract-utilities" Jan 12 13:09:46 crc kubenswrapper[4580]: I0112 13:09:46.554827 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="adf8dbed-2e00-49c7-90f3-62f85ef5e078" containerName="extract-utilities" Jan 12 13:09:46 crc kubenswrapper[4580]: E0112 13:09:46.554835 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b558a84-c6d4-42fe-b9d7-0dd5d63f3064" containerName="extract-utilities" Jan 12 13:09:46 crc kubenswrapper[4580]: I0112 13:09:46.554841 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b558a84-c6d4-42fe-b9d7-0dd5d63f3064" containerName="extract-utilities" Jan 12 13:09:46 crc kubenswrapper[4580]: E0112 13:09:46.554849 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b558a84-c6d4-42fe-b9d7-0dd5d63f3064" containerName="registry-server" Jan 12 13:09:46 crc kubenswrapper[4580]: I0112 13:09:46.554854 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b558a84-c6d4-42fe-b9d7-0dd5d63f3064" containerName="registry-server" Jan 12 13:09:46 crc kubenswrapper[4580]: E0112 13:09:46.554865 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b558a84-c6d4-42fe-b9d7-0dd5d63f3064" containerName="extract-content" Jan 12 13:09:46 crc kubenswrapper[4580]: I0112 13:09:46.554870 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b558a84-c6d4-42fe-b9d7-0dd5d63f3064" containerName="extract-content" Jan 12 13:09:46 crc kubenswrapper[4580]: E0112 13:09:46.554878 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adf8dbed-2e00-49c7-90f3-62f85ef5e078" containerName="extract-content" Jan 12 13:09:46 crc kubenswrapper[4580]: I0112 13:09:46.554884 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="adf8dbed-2e00-49c7-90f3-62f85ef5e078" containerName="extract-content" Jan 12 13:09:46 crc 
kubenswrapper[4580]: I0112 13:09:46.554978 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b558a84-c6d4-42fe-b9d7-0dd5d63f3064" containerName="registry-server" Jan 12 13:09:46 crc kubenswrapper[4580]: I0112 13:09:46.554987 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="adf8dbed-2e00-49c7-90f3-62f85ef5e078" containerName="registry-server" Jan 12 13:09:46 crc kubenswrapper[4580]: I0112 13:09:46.555390 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 12 13:09:46 crc kubenswrapper[4580]: I0112 13:09:46.557035 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 12 13:09:46 crc kubenswrapper[4580]: I0112 13:09:46.557598 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 12 13:09:46 crc kubenswrapper[4580]: I0112 13:09:46.562899 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 12 13:09:46 crc kubenswrapper[4580]: I0112 13:09:46.609503 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/434aa1ee-5650-4a72-a41f-d91449e51d76-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"434aa1ee-5650-4a72-a41f-d91449e51d76\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 12 13:09:46 crc kubenswrapper[4580]: I0112 13:09:46.609550 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/434aa1ee-5650-4a72-a41f-d91449e51d76-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"434aa1ee-5650-4a72-a41f-d91449e51d76\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 12 13:09:46 crc kubenswrapper[4580]: I0112 13:09:46.712190 4580 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/434aa1ee-5650-4a72-a41f-d91449e51d76-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"434aa1ee-5650-4a72-a41f-d91449e51d76\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 12 13:09:46 crc kubenswrapper[4580]: I0112 13:09:46.712277 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/434aa1ee-5650-4a72-a41f-d91449e51d76-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"434aa1ee-5650-4a72-a41f-d91449e51d76\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 12 13:09:46 crc kubenswrapper[4580]: I0112 13:09:46.712335 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/434aa1ee-5650-4a72-a41f-d91449e51d76-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"434aa1ee-5650-4a72-a41f-d91449e51d76\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 12 13:09:46 crc kubenswrapper[4580]: I0112 13:09:46.733784 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/434aa1ee-5650-4a72-a41f-d91449e51d76-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"434aa1ee-5650-4a72-a41f-d91449e51d76\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 12 13:09:46 crc kubenswrapper[4580]: I0112 13:09:46.870505 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 12 13:09:46 crc kubenswrapper[4580]: I0112 13:09:46.950154 4580 patch_prober.go:28] interesting pod/machine-config-daemon-hdz6l container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 12 13:09:46 crc kubenswrapper[4580]: I0112 13:09:46.950659 4580 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 12 13:09:47 crc kubenswrapper[4580]: I0112 13:09:47.262789 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 12 13:09:47 crc kubenswrapper[4580]: I0112 13:09:47.842701 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"434aa1ee-5650-4a72-a41f-d91449e51d76","Type":"ContainerStarted","Data":"aa92b2fd6cbddd5bf179fa27b2417e05fa9455c1d604287abcdfff8dbbe6b835"} Jan 12 13:09:47 crc kubenswrapper[4580]: I0112 13:09:47.843003 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"434aa1ee-5650-4a72-a41f-d91449e51d76","Type":"ContainerStarted","Data":"5bdf7de5068163f954257ffcae18f140c9ff9365d9c027955f85f993441ca036"} Jan 12 13:09:47 crc kubenswrapper[4580]: I0112 13:09:47.854648 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=1.85463255 podStartE2EDuration="1.85463255s" podCreationTimestamp="2026-01-12 13:09:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:09:47.854270432 +0000 UTC m=+186.898489122" watchObservedRunningTime="2026-01-12 13:09:47.85463255 +0000 UTC m=+186.898851240" Jan 12 13:09:48 crc kubenswrapper[4580]: I0112 13:09:48.849052 4580 generic.go:334] "Generic (PLEG): container finished" podID="434aa1ee-5650-4a72-a41f-d91449e51d76" containerID="aa92b2fd6cbddd5bf179fa27b2417e05fa9455c1d604287abcdfff8dbbe6b835" exitCode=0 Jan 12 13:09:48 crc kubenswrapper[4580]: I0112 13:09:48.849122 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"434aa1ee-5650-4a72-a41f-d91449e51d76","Type":"ContainerDied","Data":"aa92b2fd6cbddd5bf179fa27b2417e05fa9455c1d604287abcdfff8dbbe6b835"} Jan 12 13:09:50 crc kubenswrapper[4580]: I0112 13:09:50.127437 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 12 13:09:50 crc kubenswrapper[4580]: I0112 13:09:50.156160 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/434aa1ee-5650-4a72-a41f-d91449e51d76-kube-api-access\") pod \"434aa1ee-5650-4a72-a41f-d91449e51d76\" (UID: \"434aa1ee-5650-4a72-a41f-d91449e51d76\") " Jan 12 13:09:50 crc kubenswrapper[4580]: I0112 13:09:50.156246 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/434aa1ee-5650-4a72-a41f-d91449e51d76-kubelet-dir\") pod \"434aa1ee-5650-4a72-a41f-d91449e51d76\" (UID: \"434aa1ee-5650-4a72-a41f-d91449e51d76\") " Jan 12 13:09:50 crc kubenswrapper[4580]: I0112 13:09:50.156370 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/434aa1ee-5650-4a72-a41f-d91449e51d76-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "434aa1ee-5650-4a72-a41f-d91449e51d76" (UID: 
"434aa1ee-5650-4a72-a41f-d91449e51d76"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 12 13:09:50 crc kubenswrapper[4580]: I0112 13:09:50.161996 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/434aa1ee-5650-4a72-a41f-d91449e51d76-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "434aa1ee-5650-4a72-a41f-d91449e51d76" (UID: "434aa1ee-5650-4a72-a41f-d91449e51d76"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:09:50 crc kubenswrapper[4580]: I0112 13:09:50.257355 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/434aa1ee-5650-4a72-a41f-d91449e51d76-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 12 13:09:50 crc kubenswrapper[4580]: I0112 13:09:50.257389 4580 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/434aa1ee-5650-4a72-a41f-d91449e51d76-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 12 13:09:50 crc kubenswrapper[4580]: I0112 13:09:50.861263 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"434aa1ee-5650-4a72-a41f-d91449e51d76","Type":"ContainerDied","Data":"5bdf7de5068163f954257ffcae18f140c9ff9365d9c027955f85f993441ca036"} Jan 12 13:09:50 crc kubenswrapper[4580]: I0112 13:09:50.861308 4580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5bdf7de5068163f954257ffcae18f140c9ff9365d9c027955f85f993441ca036" Jan 12 13:09:50 crc kubenswrapper[4580]: I0112 13:09:50.861335 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 12 13:09:52 crc kubenswrapper[4580]: I0112 13:09:52.167694 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 12 13:09:52 crc kubenswrapper[4580]: E0112 13:09:52.168398 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="434aa1ee-5650-4a72-a41f-d91449e51d76" containerName="pruner" Jan 12 13:09:52 crc kubenswrapper[4580]: I0112 13:09:52.168413 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="434aa1ee-5650-4a72-a41f-d91449e51d76" containerName="pruner" Jan 12 13:09:52 crc kubenswrapper[4580]: I0112 13:09:52.168571 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="434aa1ee-5650-4a72-a41f-d91449e51d76" containerName="pruner" Jan 12 13:09:52 crc kubenswrapper[4580]: I0112 13:09:52.169118 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 12 13:09:52 crc kubenswrapper[4580]: I0112 13:09:52.172187 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 12 13:09:52 crc kubenswrapper[4580]: I0112 13:09:52.172478 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 12 13:09:52 crc kubenswrapper[4580]: I0112 13:09:52.173952 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 12 13:09:52 crc kubenswrapper[4580]: I0112 13:09:52.283722 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/30474fce-6a17-410e-8b71-7fc76ae2835c-kube-api-access\") pod \"installer-9-crc\" (UID: \"30474fce-6a17-410e-8b71-7fc76ae2835c\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 12 13:09:52 crc kubenswrapper[4580]: I0112 13:09:52.283799 4580 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/30474fce-6a17-410e-8b71-7fc76ae2835c-kubelet-dir\") pod \"installer-9-crc\" (UID: \"30474fce-6a17-410e-8b71-7fc76ae2835c\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 12 13:09:52 crc kubenswrapper[4580]: I0112 13:09:52.283854 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/30474fce-6a17-410e-8b71-7fc76ae2835c-var-lock\") pod \"installer-9-crc\" (UID: \"30474fce-6a17-410e-8b71-7fc76ae2835c\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 12 13:09:52 crc kubenswrapper[4580]: I0112 13:09:52.384708 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/30474fce-6a17-410e-8b71-7fc76ae2835c-kube-api-access\") pod \"installer-9-crc\" (UID: \"30474fce-6a17-410e-8b71-7fc76ae2835c\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 12 13:09:52 crc kubenswrapper[4580]: I0112 13:09:52.384777 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/30474fce-6a17-410e-8b71-7fc76ae2835c-kubelet-dir\") pod \"installer-9-crc\" (UID: \"30474fce-6a17-410e-8b71-7fc76ae2835c\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 12 13:09:52 crc kubenswrapper[4580]: I0112 13:09:52.384815 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/30474fce-6a17-410e-8b71-7fc76ae2835c-var-lock\") pod \"installer-9-crc\" (UID: \"30474fce-6a17-410e-8b71-7fc76ae2835c\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 12 13:09:52 crc kubenswrapper[4580]: I0112 13:09:52.384909 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/30474fce-6a17-410e-8b71-7fc76ae2835c-var-lock\") pod \"installer-9-crc\" (UID: \"30474fce-6a17-410e-8b71-7fc76ae2835c\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 12 13:09:52 crc kubenswrapper[4580]: I0112 13:09:52.385346 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/30474fce-6a17-410e-8b71-7fc76ae2835c-kubelet-dir\") pod \"installer-9-crc\" (UID: \"30474fce-6a17-410e-8b71-7fc76ae2835c\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 12 13:09:52 crc kubenswrapper[4580]: I0112 13:09:52.405264 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/30474fce-6a17-410e-8b71-7fc76ae2835c-kube-api-access\") pod \"installer-9-crc\" (UID: \"30474fce-6a17-410e-8b71-7fc76ae2835c\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 12 13:09:52 crc kubenswrapper[4580]: I0112 13:09:52.490336 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 12 13:09:52 crc kubenswrapper[4580]: I0112 13:09:52.803642 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 12 13:09:52 crc kubenswrapper[4580]: I0112 13:09:52.884722 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"30474fce-6a17-410e-8b71-7fc76ae2835c","Type":"ContainerStarted","Data":"ae0caf23be6ed8470fa648c2238eca3a1b66a5da2587575a9646322965d36f47"} Jan 12 13:09:53 crc kubenswrapper[4580]: I0112 13:09:53.893901 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"30474fce-6a17-410e-8b71-7fc76ae2835c","Type":"ContainerStarted","Data":"ba347f93ac94b4a145c59c1cd7b42dd1438f91eef226679ce0e0a2f5c6bad40f"} Jan 12 13:09:53 crc kubenswrapper[4580]: I0112 13:09:53.906821 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=1.9067978380000001 podStartE2EDuration="1.906797838s" podCreationTimestamp="2026-01-12 13:09:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:09:53.905549421 +0000 UTC m=+192.949768112" watchObservedRunningTime="2026-01-12 13:09:53.906797838 +0000 UTC m=+192.951016528" Jan 12 13:09:58 crc kubenswrapper[4580]: I0112 13:09:58.127965 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6584d7575c-2brqv"] Jan 12 13:09:58 crc kubenswrapper[4580]: I0112 13:09:58.128588 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6584d7575c-2brqv" podUID="70742540-4546-4618-9f2a-d79d4527687b" containerName="controller-manager" 
containerID="cri-o://e46077da5a14191b91a00e2e4cc5aac4eca2f77f8cbd18a50d9b34806973e36e" gracePeriod=30 Jan 12 13:09:58 crc kubenswrapper[4580]: I0112 13:09:58.137695 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57f5b6cb79-7wp24"] Jan 12 13:09:58 crc kubenswrapper[4580]: I0112 13:09:58.137897 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-57f5b6cb79-7wp24" podUID="40f7ea29-3854-4215-b4f2-92132f43f1e5" containerName="route-controller-manager" containerID="cri-o://2dead4f47378a3045a110ec4524ca94c616667a685cf935bb612919d7594a677" gracePeriod=30 Jan 12 13:09:58 crc kubenswrapper[4580]: I0112 13:09:58.629324 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57f5b6cb79-7wp24" Jan 12 13:09:58 crc kubenswrapper[4580]: I0112 13:09:58.659600 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6584d7575c-2brqv" Jan 12 13:09:58 crc kubenswrapper[4580]: I0112 13:09:58.773614 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/40f7ea29-3854-4215-b4f2-92132f43f1e5-client-ca\") pod \"40f7ea29-3854-4215-b4f2-92132f43f1e5\" (UID: \"40f7ea29-3854-4215-b4f2-92132f43f1e5\") " Jan 12 13:09:58 crc kubenswrapper[4580]: I0112 13:09:58.773674 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/70742540-4546-4618-9f2a-d79d4527687b-proxy-ca-bundles\") pod \"70742540-4546-4618-9f2a-d79d4527687b\" (UID: \"70742540-4546-4618-9f2a-d79d4527687b\") " Jan 12 13:09:58 crc kubenswrapper[4580]: I0112 13:09:58.773732 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40f7ea29-3854-4215-b4f2-92132f43f1e5-serving-cert\") pod \"40f7ea29-3854-4215-b4f2-92132f43f1e5\" (UID: \"40f7ea29-3854-4215-b4f2-92132f43f1e5\") " Jan 12 13:09:58 crc kubenswrapper[4580]: I0112 13:09:58.773763 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40f7ea29-3854-4215-b4f2-92132f43f1e5-config\") pod \"40f7ea29-3854-4215-b4f2-92132f43f1e5\" (UID: \"40f7ea29-3854-4215-b4f2-92132f43f1e5\") " Jan 12 13:09:58 crc kubenswrapper[4580]: I0112 13:09:58.773821 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpw2n\" (UniqueName: \"kubernetes.io/projected/70742540-4546-4618-9f2a-d79d4527687b-kube-api-access-fpw2n\") pod \"70742540-4546-4618-9f2a-d79d4527687b\" (UID: \"70742540-4546-4618-9f2a-d79d4527687b\") " Jan 12 13:09:58 crc kubenswrapper[4580]: I0112 13:09:58.773847 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/70742540-4546-4618-9f2a-d79d4527687b-serving-cert\") pod \"70742540-4546-4618-9f2a-d79d4527687b\" (UID: \"70742540-4546-4618-9f2a-d79d4527687b\") " Jan 12 13:09:58 crc kubenswrapper[4580]: I0112 13:09:58.773874 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70742540-4546-4618-9f2a-d79d4527687b-config\") pod \"70742540-4546-4618-9f2a-d79d4527687b\" (UID: \"70742540-4546-4618-9f2a-d79d4527687b\") " Jan 12 13:09:58 crc kubenswrapper[4580]: I0112 13:09:58.773903 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwj45\" (UniqueName: \"kubernetes.io/projected/40f7ea29-3854-4215-b4f2-92132f43f1e5-kube-api-access-dwj45\") pod \"40f7ea29-3854-4215-b4f2-92132f43f1e5\" (UID: \"40f7ea29-3854-4215-b4f2-92132f43f1e5\") " Jan 12 13:09:58 crc kubenswrapper[4580]: I0112 13:09:58.773926 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/70742540-4546-4618-9f2a-d79d4527687b-client-ca\") pod \"70742540-4546-4618-9f2a-d79d4527687b\" (UID: \"70742540-4546-4618-9f2a-d79d4527687b\") " Jan 12 13:09:58 crc kubenswrapper[4580]: I0112 13:09:58.774515 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40f7ea29-3854-4215-b4f2-92132f43f1e5-client-ca" (OuterVolumeSpecName: "client-ca") pod "40f7ea29-3854-4215-b4f2-92132f43f1e5" (UID: "40f7ea29-3854-4215-b4f2-92132f43f1e5"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:09:58 crc kubenswrapper[4580]: I0112 13:09:58.774938 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40f7ea29-3854-4215-b4f2-92132f43f1e5-config" (OuterVolumeSpecName: "config") pod "40f7ea29-3854-4215-b4f2-92132f43f1e5" (UID: "40f7ea29-3854-4215-b4f2-92132f43f1e5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:09:58 crc kubenswrapper[4580]: I0112 13:09:58.775068 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70742540-4546-4618-9f2a-d79d4527687b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "70742540-4546-4618-9f2a-d79d4527687b" (UID: "70742540-4546-4618-9f2a-d79d4527687b"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:09:58 crc kubenswrapper[4580]: I0112 13:09:58.775545 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70742540-4546-4618-9f2a-d79d4527687b-client-ca" (OuterVolumeSpecName: "client-ca") pod "70742540-4546-4618-9f2a-d79d4527687b" (UID: "70742540-4546-4618-9f2a-d79d4527687b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:09:58 crc kubenswrapper[4580]: I0112 13:09:58.775736 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70742540-4546-4618-9f2a-d79d4527687b-config" (OuterVolumeSpecName: "config") pod "70742540-4546-4618-9f2a-d79d4527687b" (UID: "70742540-4546-4618-9f2a-d79d4527687b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:09:58 crc kubenswrapper[4580]: I0112 13:09:58.779592 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40f7ea29-3854-4215-b4f2-92132f43f1e5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "40f7ea29-3854-4215-b4f2-92132f43f1e5" (UID: "40f7ea29-3854-4215-b4f2-92132f43f1e5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:09:58 crc kubenswrapper[4580]: I0112 13:09:58.779667 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40f7ea29-3854-4215-b4f2-92132f43f1e5-kube-api-access-dwj45" (OuterVolumeSpecName: "kube-api-access-dwj45") pod "40f7ea29-3854-4215-b4f2-92132f43f1e5" (UID: "40f7ea29-3854-4215-b4f2-92132f43f1e5"). InnerVolumeSpecName "kube-api-access-dwj45". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:09:58 crc kubenswrapper[4580]: I0112 13:09:58.780024 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70742540-4546-4618-9f2a-d79d4527687b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "70742540-4546-4618-9f2a-d79d4527687b" (UID: "70742540-4546-4618-9f2a-d79d4527687b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:09:58 crc kubenswrapper[4580]: I0112 13:09:58.780271 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70742540-4546-4618-9f2a-d79d4527687b-kube-api-access-fpw2n" (OuterVolumeSpecName: "kube-api-access-fpw2n") pod "70742540-4546-4618-9f2a-d79d4527687b" (UID: "70742540-4546-4618-9f2a-d79d4527687b"). InnerVolumeSpecName "kube-api-access-fpw2n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:09:58 crc kubenswrapper[4580]: I0112 13:09:58.875774 4580 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/40f7ea29-3854-4215-b4f2-92132f43f1e5-client-ca\") on node \"crc\" DevicePath \"\"" Jan 12 13:09:58 crc kubenswrapper[4580]: I0112 13:09:58.875817 4580 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/70742540-4546-4618-9f2a-d79d4527687b-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 12 13:09:58 crc kubenswrapper[4580]: I0112 13:09:58.875829 4580 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40f7ea29-3854-4215-b4f2-92132f43f1e5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 12 13:09:58 crc kubenswrapper[4580]: I0112 13:09:58.875839 4580 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40f7ea29-3854-4215-b4f2-92132f43f1e5-config\") on node \"crc\" DevicePath \"\"" Jan 12 13:09:58 crc kubenswrapper[4580]: I0112 13:09:58.875849 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpw2n\" (UniqueName: \"kubernetes.io/projected/70742540-4546-4618-9f2a-d79d4527687b-kube-api-access-fpw2n\") on node \"crc\" DevicePath \"\"" Jan 12 13:09:58 crc kubenswrapper[4580]: I0112 13:09:58.875859 4580 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70742540-4546-4618-9f2a-d79d4527687b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 12 13:09:58 crc kubenswrapper[4580]: I0112 13:09:58.875901 4580 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70742540-4546-4618-9f2a-d79d4527687b-config\") on node \"crc\" DevicePath \"\"" Jan 12 13:09:58 crc kubenswrapper[4580]: I0112 13:09:58.875910 4580 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-dwj45\" (UniqueName: \"kubernetes.io/projected/40f7ea29-3854-4215-b4f2-92132f43f1e5-kube-api-access-dwj45\") on node \"crc\" DevicePath \"\"" Jan 12 13:09:58 crc kubenswrapper[4580]: I0112 13:09:58.875918 4580 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/70742540-4546-4618-9f2a-d79d4527687b-client-ca\") on node \"crc\" DevicePath \"\"" Jan 12 13:09:58 crc kubenswrapper[4580]: I0112 13:09:58.929518 4580 generic.go:334] "Generic (PLEG): container finished" podID="40f7ea29-3854-4215-b4f2-92132f43f1e5" containerID="2dead4f47378a3045a110ec4524ca94c616667a685cf935bb612919d7594a677" exitCode=0 Jan 12 13:09:58 crc kubenswrapper[4580]: I0112 13:09:58.929580 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57f5b6cb79-7wp24" Jan 12 13:09:58 crc kubenswrapper[4580]: I0112 13:09:58.929578 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57f5b6cb79-7wp24" event={"ID":"40f7ea29-3854-4215-b4f2-92132f43f1e5","Type":"ContainerDied","Data":"2dead4f47378a3045a110ec4524ca94c616667a685cf935bb612919d7594a677"} Jan 12 13:09:58 crc kubenswrapper[4580]: I0112 13:09:58.929650 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57f5b6cb79-7wp24" event={"ID":"40f7ea29-3854-4215-b4f2-92132f43f1e5","Type":"ContainerDied","Data":"0a40818debfeca778e457b2ddbdde926f3cb47b0b0b7bc8d7610a2b11b13e604"} Jan 12 13:09:58 crc kubenswrapper[4580]: I0112 13:09:58.929675 4580 scope.go:117] "RemoveContainer" containerID="2dead4f47378a3045a110ec4524ca94c616667a685cf935bb612919d7594a677" Jan 12 13:09:58 crc kubenswrapper[4580]: I0112 13:09:58.931439 4580 generic.go:334] "Generic (PLEG): container finished" podID="70742540-4546-4618-9f2a-d79d4527687b" 
containerID="e46077da5a14191b91a00e2e4cc5aac4eca2f77f8cbd18a50d9b34806973e36e" exitCode=0 Jan 12 13:09:58 crc kubenswrapper[4580]: I0112 13:09:58.931489 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6584d7575c-2brqv" event={"ID":"70742540-4546-4618-9f2a-d79d4527687b","Type":"ContainerDied","Data":"e46077da5a14191b91a00e2e4cc5aac4eca2f77f8cbd18a50d9b34806973e36e"} Jan 12 13:09:58 crc kubenswrapper[4580]: I0112 13:09:58.931529 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6584d7575c-2brqv" event={"ID":"70742540-4546-4618-9f2a-d79d4527687b","Type":"ContainerDied","Data":"c66909536f52c4b1177ef3f589e685f4ed1193c8de8db9474a159e448817d3cc"} Jan 12 13:09:58 crc kubenswrapper[4580]: I0112 13:09:58.931603 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6584d7575c-2brqv" Jan 12 13:09:58 crc kubenswrapper[4580]: I0112 13:09:58.943442 4580 scope.go:117] "RemoveContainer" containerID="2dead4f47378a3045a110ec4524ca94c616667a685cf935bb612919d7594a677" Jan 12 13:09:58 crc kubenswrapper[4580]: E0112 13:09:58.943737 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2dead4f47378a3045a110ec4524ca94c616667a685cf935bb612919d7594a677\": container with ID starting with 2dead4f47378a3045a110ec4524ca94c616667a685cf935bb612919d7594a677 not found: ID does not exist" containerID="2dead4f47378a3045a110ec4524ca94c616667a685cf935bb612919d7594a677" Jan 12 13:09:58 crc kubenswrapper[4580]: I0112 13:09:58.943786 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2dead4f47378a3045a110ec4524ca94c616667a685cf935bb612919d7594a677"} err="failed to get container status \"2dead4f47378a3045a110ec4524ca94c616667a685cf935bb612919d7594a677\": rpc error: code = NotFound desc = 
could not find container \"2dead4f47378a3045a110ec4524ca94c616667a685cf935bb612919d7594a677\": container with ID starting with 2dead4f47378a3045a110ec4524ca94c616667a685cf935bb612919d7594a677 not found: ID does not exist" Jan 12 13:09:58 crc kubenswrapper[4580]: I0112 13:09:58.943807 4580 scope.go:117] "RemoveContainer" containerID="e46077da5a14191b91a00e2e4cc5aac4eca2f77f8cbd18a50d9b34806973e36e" Jan 12 13:09:58 crc kubenswrapper[4580]: I0112 13:09:58.960528 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6584d7575c-2brqv"] Jan 12 13:09:58 crc kubenswrapper[4580]: I0112 13:09:58.961423 4580 scope.go:117] "RemoveContainer" containerID="e46077da5a14191b91a00e2e4cc5aac4eca2f77f8cbd18a50d9b34806973e36e" Jan 12 13:09:58 crc kubenswrapper[4580]: E0112 13:09:58.962206 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e46077da5a14191b91a00e2e4cc5aac4eca2f77f8cbd18a50d9b34806973e36e\": container with ID starting with e46077da5a14191b91a00e2e4cc5aac4eca2f77f8cbd18a50d9b34806973e36e not found: ID does not exist" containerID="e46077da5a14191b91a00e2e4cc5aac4eca2f77f8cbd18a50d9b34806973e36e" Jan 12 13:09:58 crc kubenswrapper[4580]: I0112 13:09:58.962248 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e46077da5a14191b91a00e2e4cc5aac4eca2f77f8cbd18a50d9b34806973e36e"} err="failed to get container status \"e46077da5a14191b91a00e2e4cc5aac4eca2f77f8cbd18a50d9b34806973e36e\": rpc error: code = NotFound desc = could not find container \"e46077da5a14191b91a00e2e4cc5aac4eca2f77f8cbd18a50d9b34806973e36e\": container with ID starting with e46077da5a14191b91a00e2e4cc5aac4eca2f77f8cbd18a50d9b34806973e36e not found: ID does not exist" Jan 12 13:09:58 crc kubenswrapper[4580]: I0112 13:09:58.966528 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-controller-manager/controller-manager-6584d7575c-2brqv"] Jan 12 13:09:58 crc kubenswrapper[4580]: I0112 13:09:58.980255 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57f5b6cb79-7wp24"] Jan 12 13:09:58 crc kubenswrapper[4580]: I0112 13:09:58.982390 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57f5b6cb79-7wp24"] Jan 12 13:09:59 crc kubenswrapper[4580]: I0112 13:09:59.288816 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40f7ea29-3854-4215-b4f2-92132f43f1e5" path="/var/lib/kubelet/pods/40f7ea29-3854-4215-b4f2-92132f43f1e5/volumes" Jan 12 13:09:59 crc kubenswrapper[4580]: I0112 13:09:59.289415 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70742540-4546-4618-9f2a-d79d4527687b" path="/var/lib/kubelet/pods/70742540-4546-4618-9f2a-d79d4527687b/volumes" Jan 12 13:09:59 crc kubenswrapper[4580]: I0112 13:09:59.677120 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55df48bb8f-ssqtq"] Jan 12 13:09:59 crc kubenswrapper[4580]: E0112 13:09:59.677372 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40f7ea29-3854-4215-b4f2-92132f43f1e5" containerName="route-controller-manager" Jan 12 13:09:59 crc kubenswrapper[4580]: I0112 13:09:59.677390 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="40f7ea29-3854-4215-b4f2-92132f43f1e5" containerName="route-controller-manager" Jan 12 13:09:59 crc kubenswrapper[4580]: E0112 13:09:59.677410 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70742540-4546-4618-9f2a-d79d4527687b" containerName="controller-manager" Jan 12 13:09:59 crc kubenswrapper[4580]: I0112 13:09:59.677417 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="70742540-4546-4618-9f2a-d79d4527687b" containerName="controller-manager" Jan 
12 13:09:59 crc kubenswrapper[4580]: I0112 13:09:59.677524 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="40f7ea29-3854-4215-b4f2-92132f43f1e5" containerName="route-controller-manager" Jan 12 13:09:59 crc kubenswrapper[4580]: I0112 13:09:59.677544 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="70742540-4546-4618-9f2a-d79d4527687b" containerName="controller-manager" Jan 12 13:09:59 crc kubenswrapper[4580]: I0112 13:09:59.677957 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55df48bb8f-ssqtq" Jan 12 13:09:59 crc kubenswrapper[4580]: I0112 13:09:59.679588 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7b5c7ddb96-qg9fk"] Jan 12 13:09:59 crc kubenswrapper[4580]: I0112 13:09:59.680095 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 12 13:09:59 crc kubenswrapper[4580]: I0112 13:09:59.680190 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 12 13:09:59 crc kubenswrapper[4580]: I0112 13:09:59.680279 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 12 13:09:59 crc kubenswrapper[4580]: I0112 13:09:59.680417 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7b5c7ddb96-qg9fk" Jan 12 13:09:59 crc kubenswrapper[4580]: I0112 13:09:59.680825 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 12 13:09:59 crc kubenswrapper[4580]: I0112 13:09:59.681532 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 12 13:09:59 crc kubenswrapper[4580]: I0112 13:09:59.682156 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 12 13:09:59 crc kubenswrapper[4580]: I0112 13:09:59.682478 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 12 13:09:59 crc kubenswrapper[4580]: I0112 13:09:59.682735 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 12 13:09:59 crc kubenswrapper[4580]: I0112 13:09:59.682766 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 12 13:09:59 crc kubenswrapper[4580]: I0112 13:09:59.682813 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 12 13:09:59 crc kubenswrapper[4580]: I0112 13:09:59.682911 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 12 13:09:59 crc kubenswrapper[4580]: I0112 13:09:59.683257 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 12 13:09:59 crc kubenswrapper[4580]: I0112 13:09:59.683506 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c8ff9cca-4800-44a7-b868-5baba239d07d-config\") pod \"route-controller-manager-55df48bb8f-ssqtq\" (UID: \"c8ff9cca-4800-44a7-b868-5baba239d07d\") " pod="openshift-route-controller-manager/route-controller-manager-55df48bb8f-ssqtq" Jan 12 13:09:59 crc kubenswrapper[4580]: I0112 13:09:59.683581 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/577c604a-8fc7-4ea6-93cc-64930507830e-serving-cert\") pod \"controller-manager-7b5c7ddb96-qg9fk\" (UID: \"577c604a-8fc7-4ea6-93cc-64930507830e\") " pod="openshift-controller-manager/controller-manager-7b5c7ddb96-qg9fk" Jan 12 13:09:59 crc kubenswrapper[4580]: I0112 13:09:59.683628 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/577c604a-8fc7-4ea6-93cc-64930507830e-client-ca\") pod \"controller-manager-7b5c7ddb96-qg9fk\" (UID: \"577c604a-8fc7-4ea6-93cc-64930507830e\") " pod="openshift-controller-manager/controller-manager-7b5c7ddb96-qg9fk" Jan 12 13:09:59 crc kubenswrapper[4580]: I0112 13:09:59.683652 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c8ff9cca-4800-44a7-b868-5baba239d07d-client-ca\") pod \"route-controller-manager-55df48bb8f-ssqtq\" (UID: \"c8ff9cca-4800-44a7-b868-5baba239d07d\") " pod="openshift-route-controller-manager/route-controller-manager-55df48bb8f-ssqtq" Jan 12 13:09:59 crc kubenswrapper[4580]: I0112 13:09:59.683673 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tct5s\" (UniqueName: \"kubernetes.io/projected/577c604a-8fc7-4ea6-93cc-64930507830e-kube-api-access-tct5s\") pod \"controller-manager-7b5c7ddb96-qg9fk\" (UID: \"577c604a-8fc7-4ea6-93cc-64930507830e\") " 
pod="openshift-controller-manager/controller-manager-7b5c7ddb96-qg9fk" Jan 12 13:09:59 crc kubenswrapper[4580]: I0112 13:09:59.683703 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/577c604a-8fc7-4ea6-93cc-64930507830e-config\") pod \"controller-manager-7b5c7ddb96-qg9fk\" (UID: \"577c604a-8fc7-4ea6-93cc-64930507830e\") " pod="openshift-controller-manager/controller-manager-7b5c7ddb96-qg9fk" Jan 12 13:09:59 crc kubenswrapper[4580]: I0112 13:09:59.683720 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8ff9cca-4800-44a7-b868-5baba239d07d-serving-cert\") pod \"route-controller-manager-55df48bb8f-ssqtq\" (UID: \"c8ff9cca-4800-44a7-b868-5baba239d07d\") " pod="openshift-route-controller-manager/route-controller-manager-55df48bb8f-ssqtq" Jan 12 13:09:59 crc kubenswrapper[4580]: I0112 13:09:59.683752 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/577c604a-8fc7-4ea6-93cc-64930507830e-proxy-ca-bundles\") pod \"controller-manager-7b5c7ddb96-qg9fk\" (UID: \"577c604a-8fc7-4ea6-93cc-64930507830e\") " pod="openshift-controller-manager/controller-manager-7b5c7ddb96-qg9fk" Jan 12 13:09:59 crc kubenswrapper[4580]: I0112 13:09:59.683777 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2s82k\" (UniqueName: \"kubernetes.io/projected/c8ff9cca-4800-44a7-b868-5baba239d07d-kube-api-access-2s82k\") pod \"route-controller-manager-55df48bb8f-ssqtq\" (UID: \"c8ff9cca-4800-44a7-b868-5baba239d07d\") " pod="openshift-route-controller-manager/route-controller-manager-55df48bb8f-ssqtq" Jan 12 13:09:59 crc kubenswrapper[4580]: I0112 13:09:59.686858 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager/controller-manager-7b5c7ddb96-qg9fk"] Jan 12 13:09:59 crc kubenswrapper[4580]: I0112 13:09:59.690255 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 12 13:09:59 crc kubenswrapper[4580]: I0112 13:09:59.692001 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55df48bb8f-ssqtq"] Jan 12 13:09:59 crc kubenswrapper[4580]: I0112 13:09:59.784918 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2s82k\" (UniqueName: \"kubernetes.io/projected/c8ff9cca-4800-44a7-b868-5baba239d07d-kube-api-access-2s82k\") pod \"route-controller-manager-55df48bb8f-ssqtq\" (UID: \"c8ff9cca-4800-44a7-b868-5baba239d07d\") " pod="openshift-route-controller-manager/route-controller-manager-55df48bb8f-ssqtq" Jan 12 13:09:59 crc kubenswrapper[4580]: I0112 13:09:59.784984 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8ff9cca-4800-44a7-b868-5baba239d07d-config\") pod \"route-controller-manager-55df48bb8f-ssqtq\" (UID: \"c8ff9cca-4800-44a7-b868-5baba239d07d\") " pod="openshift-route-controller-manager/route-controller-manager-55df48bb8f-ssqtq" Jan 12 13:09:59 crc kubenswrapper[4580]: I0112 13:09:59.785063 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/577c604a-8fc7-4ea6-93cc-64930507830e-serving-cert\") pod \"controller-manager-7b5c7ddb96-qg9fk\" (UID: \"577c604a-8fc7-4ea6-93cc-64930507830e\") " pod="openshift-controller-manager/controller-manager-7b5c7ddb96-qg9fk" Jan 12 13:09:59 crc kubenswrapper[4580]: I0112 13:09:59.785142 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/577c604a-8fc7-4ea6-93cc-64930507830e-client-ca\") pod \"controller-manager-7b5c7ddb96-qg9fk\" (UID: \"577c604a-8fc7-4ea6-93cc-64930507830e\") " pod="openshift-controller-manager/controller-manager-7b5c7ddb96-qg9fk" Jan 12 13:09:59 crc kubenswrapper[4580]: I0112 13:09:59.785171 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c8ff9cca-4800-44a7-b868-5baba239d07d-client-ca\") pod \"route-controller-manager-55df48bb8f-ssqtq\" (UID: \"c8ff9cca-4800-44a7-b868-5baba239d07d\") " pod="openshift-route-controller-manager/route-controller-manager-55df48bb8f-ssqtq" Jan 12 13:09:59 crc kubenswrapper[4580]: I0112 13:09:59.785202 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tct5s\" (UniqueName: \"kubernetes.io/projected/577c604a-8fc7-4ea6-93cc-64930507830e-kube-api-access-tct5s\") pod \"controller-manager-7b5c7ddb96-qg9fk\" (UID: \"577c604a-8fc7-4ea6-93cc-64930507830e\") " pod="openshift-controller-manager/controller-manager-7b5c7ddb96-qg9fk" Jan 12 13:09:59 crc kubenswrapper[4580]: I0112 13:09:59.785244 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/577c604a-8fc7-4ea6-93cc-64930507830e-config\") pod \"controller-manager-7b5c7ddb96-qg9fk\" (UID: \"577c604a-8fc7-4ea6-93cc-64930507830e\") " pod="openshift-controller-manager/controller-manager-7b5c7ddb96-qg9fk" Jan 12 13:09:59 crc kubenswrapper[4580]: I0112 13:09:59.785264 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8ff9cca-4800-44a7-b868-5baba239d07d-serving-cert\") pod \"route-controller-manager-55df48bb8f-ssqtq\" (UID: \"c8ff9cca-4800-44a7-b868-5baba239d07d\") " pod="openshift-route-controller-manager/route-controller-manager-55df48bb8f-ssqtq" Jan 12 13:09:59 crc kubenswrapper[4580]: I0112 
13:09:59.785283 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/577c604a-8fc7-4ea6-93cc-64930507830e-proxy-ca-bundles\") pod \"controller-manager-7b5c7ddb96-qg9fk\" (UID: \"577c604a-8fc7-4ea6-93cc-64930507830e\") " pod="openshift-controller-manager/controller-manager-7b5c7ddb96-qg9fk" Jan 12 13:09:59 crc kubenswrapper[4580]: I0112 13:09:59.786350 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/577c604a-8fc7-4ea6-93cc-64930507830e-client-ca\") pod \"controller-manager-7b5c7ddb96-qg9fk\" (UID: \"577c604a-8fc7-4ea6-93cc-64930507830e\") " pod="openshift-controller-manager/controller-manager-7b5c7ddb96-qg9fk" Jan 12 13:09:59 crc kubenswrapper[4580]: I0112 13:09:59.786540 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8ff9cca-4800-44a7-b868-5baba239d07d-config\") pod \"route-controller-manager-55df48bb8f-ssqtq\" (UID: \"c8ff9cca-4800-44a7-b868-5baba239d07d\") " pod="openshift-route-controller-manager/route-controller-manager-55df48bb8f-ssqtq" Jan 12 13:09:59 crc kubenswrapper[4580]: I0112 13:09:59.786903 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c8ff9cca-4800-44a7-b868-5baba239d07d-client-ca\") pod \"route-controller-manager-55df48bb8f-ssqtq\" (UID: \"c8ff9cca-4800-44a7-b868-5baba239d07d\") " pod="openshift-route-controller-manager/route-controller-manager-55df48bb8f-ssqtq" Jan 12 13:09:59 crc kubenswrapper[4580]: I0112 13:09:59.787358 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/577c604a-8fc7-4ea6-93cc-64930507830e-config\") pod \"controller-manager-7b5c7ddb96-qg9fk\" (UID: \"577c604a-8fc7-4ea6-93cc-64930507830e\") " 
pod="openshift-controller-manager/controller-manager-7b5c7ddb96-qg9fk" Jan 12 13:09:59 crc kubenswrapper[4580]: I0112 13:09:59.788595 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/577c604a-8fc7-4ea6-93cc-64930507830e-proxy-ca-bundles\") pod \"controller-manager-7b5c7ddb96-qg9fk\" (UID: \"577c604a-8fc7-4ea6-93cc-64930507830e\") " pod="openshift-controller-manager/controller-manager-7b5c7ddb96-qg9fk" Jan 12 13:09:59 crc kubenswrapper[4580]: I0112 13:09:59.790396 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8ff9cca-4800-44a7-b868-5baba239d07d-serving-cert\") pod \"route-controller-manager-55df48bb8f-ssqtq\" (UID: \"c8ff9cca-4800-44a7-b868-5baba239d07d\") " pod="openshift-route-controller-manager/route-controller-manager-55df48bb8f-ssqtq" Jan 12 13:09:59 crc kubenswrapper[4580]: I0112 13:09:59.790893 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/577c604a-8fc7-4ea6-93cc-64930507830e-serving-cert\") pod \"controller-manager-7b5c7ddb96-qg9fk\" (UID: \"577c604a-8fc7-4ea6-93cc-64930507830e\") " pod="openshift-controller-manager/controller-manager-7b5c7ddb96-qg9fk" Jan 12 13:09:59 crc kubenswrapper[4580]: I0112 13:09:59.800170 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2s82k\" (UniqueName: \"kubernetes.io/projected/c8ff9cca-4800-44a7-b868-5baba239d07d-kube-api-access-2s82k\") pod \"route-controller-manager-55df48bb8f-ssqtq\" (UID: \"c8ff9cca-4800-44a7-b868-5baba239d07d\") " pod="openshift-route-controller-manager/route-controller-manager-55df48bb8f-ssqtq" Jan 12 13:09:59 crc kubenswrapper[4580]: I0112 13:09:59.800416 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tct5s\" (UniqueName: 
\"kubernetes.io/projected/577c604a-8fc7-4ea6-93cc-64930507830e-kube-api-access-tct5s\") pod \"controller-manager-7b5c7ddb96-qg9fk\" (UID: \"577c604a-8fc7-4ea6-93cc-64930507830e\") " pod="openshift-controller-manager/controller-manager-7b5c7ddb96-qg9fk" Jan 12 13:09:59 crc kubenswrapper[4580]: I0112 13:09:59.993662 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55df48bb8f-ssqtq" Jan 12 13:09:59 crc kubenswrapper[4580]: I0112 13:09:59.999073 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7b5c7ddb96-qg9fk" Jan 12 13:10:00 crc kubenswrapper[4580]: I0112 13:10:00.380838 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7b5c7ddb96-qg9fk"] Jan 12 13:10:00 crc kubenswrapper[4580]: W0112 13:10:00.387186 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod577c604a_8fc7_4ea6_93cc_64930507830e.slice/crio-1de09c229e4107aa0284b7b9843c6b59bca8afd102422a8e76caf5ac57c1071e WatchSource:0}: Error finding container 1de09c229e4107aa0284b7b9843c6b59bca8afd102422a8e76caf5ac57c1071e: Status 404 returned error can't find the container with id 1de09c229e4107aa0284b7b9843c6b59bca8afd102422a8e76caf5ac57c1071e Jan 12 13:10:00 crc kubenswrapper[4580]: I0112 13:10:00.417509 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55df48bb8f-ssqtq"] Jan 12 13:10:00 crc kubenswrapper[4580]: W0112 13:10:00.418822 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8ff9cca_4800_44a7_b868_5baba239d07d.slice/crio-298a908c1774506acd73bfed4fe8e03cf15f3ab3af88aeac79cc035146896fb2 WatchSource:0}: Error finding container 
298a908c1774506acd73bfed4fe8e03cf15f3ab3af88aeac79cc035146896fb2: Status 404 returned error can't find the container with id 298a908c1774506acd73bfed4fe8e03cf15f3ab3af88aeac79cc035146896fb2 Jan 12 13:10:00 crc kubenswrapper[4580]: I0112 13:10:00.948626 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55df48bb8f-ssqtq" event={"ID":"c8ff9cca-4800-44a7-b868-5baba239d07d","Type":"ContainerStarted","Data":"beeba3509cfe3a50e46f7d782592d71d6e6e1a870173f2cdae6a1ae39b0926c3"} Jan 12 13:10:00 crc kubenswrapper[4580]: I0112 13:10:00.948688 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55df48bb8f-ssqtq" event={"ID":"c8ff9cca-4800-44a7-b868-5baba239d07d","Type":"ContainerStarted","Data":"298a908c1774506acd73bfed4fe8e03cf15f3ab3af88aeac79cc035146896fb2"} Jan 12 13:10:00 crc kubenswrapper[4580]: I0112 13:10:00.949030 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-55df48bb8f-ssqtq" Jan 12 13:10:00 crc kubenswrapper[4580]: I0112 13:10:00.950073 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b5c7ddb96-qg9fk" event={"ID":"577c604a-8fc7-4ea6-93cc-64930507830e","Type":"ContainerStarted","Data":"40780dd54c13bb3bdfb10b304bf99a77dfc9ef18f2fe969e316023088af327d7"} Jan 12 13:10:00 crc kubenswrapper[4580]: I0112 13:10:00.950195 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b5c7ddb96-qg9fk" event={"ID":"577c604a-8fc7-4ea6-93cc-64930507830e","Type":"ContainerStarted","Data":"1de09c229e4107aa0284b7b9843c6b59bca8afd102422a8e76caf5ac57c1071e"} Jan 12 13:10:00 crc kubenswrapper[4580]: I0112 13:10:00.950258 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7b5c7ddb96-qg9fk" Jan 
12 13:10:00 crc kubenswrapper[4580]: I0112 13:10:00.953265 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-55df48bb8f-ssqtq" Jan 12 13:10:00 crc kubenswrapper[4580]: I0112 13:10:00.954177 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7b5c7ddb96-qg9fk" Jan 12 13:10:00 crc kubenswrapper[4580]: I0112 13:10:00.965190 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-55df48bb8f-ssqtq" podStartSLOduration=2.965172399 podStartE2EDuration="2.965172399s" podCreationTimestamp="2026-01-12 13:09:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:10:00.96287951 +0000 UTC m=+200.007098201" watchObservedRunningTime="2026-01-12 13:10:00.965172399 +0000 UTC m=+200.009391090" Jan 12 13:10:00 crc kubenswrapper[4580]: I0112 13:10:00.977031 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7b5c7ddb96-qg9fk" podStartSLOduration=2.977015119 podStartE2EDuration="2.977015119s" podCreationTimestamp="2026-01-12 13:09:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:10:00.975465185 +0000 UTC m=+200.019683885" watchObservedRunningTime="2026-01-12 13:10:00.977015119 +0000 UTC m=+200.021233809" Jan 12 13:10:10 crc kubenswrapper[4580]: I0112 13:10:10.481671 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm" podUID="7db5f72b-6a3e-4a3d-96bd-3e10756b605c" containerName="oauth-openshift" containerID="cri-o://dec8eaf6aa627a54d35c903dadc9e0377962efe122776e533fa5ed3060061ed5" 
gracePeriod=15 Jan 12 13:10:10 crc kubenswrapper[4580]: I0112 13:10:10.892094 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm" Jan 12 13:10:10 crc kubenswrapper[4580]: I0112 13:10:10.916466 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-55c7c67b6b-jdh2j"] Jan 12 13:10:10 crc kubenswrapper[4580]: E0112 13:10:10.916705 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7db5f72b-6a3e-4a3d-96bd-3e10756b605c" containerName="oauth-openshift" Jan 12 13:10:10 crc kubenswrapper[4580]: I0112 13:10:10.916724 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="7db5f72b-6a3e-4a3d-96bd-3e10756b605c" containerName="oauth-openshift" Jan 12 13:10:10 crc kubenswrapper[4580]: I0112 13:10:10.916844 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="7db5f72b-6a3e-4a3d-96bd-3e10756b605c" containerName="oauth-openshift" Jan 12 13:10:10 crc kubenswrapper[4580]: I0112 13:10:10.917289 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-55c7c67b6b-jdh2j" Jan 12 13:10:10 crc kubenswrapper[4580]: I0112 13:10:10.929683 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-55c7c67b6b-jdh2j"] Jan 12 13:10:10 crc kubenswrapper[4580]: I0112 13:10:10.936694 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-audit-dir\") pod \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " Jan 12 13:10:10 crc kubenswrapper[4580]: I0112 13:10:10.936758 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-system-session\") pod \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " Jan 12 13:10:10 crc kubenswrapper[4580]: I0112 13:10:10.936805 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-system-serving-cert\") pod \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " Jan 12 13:10:10 crc kubenswrapper[4580]: I0112 13:10:10.936870 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-audit-policies\") pod \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " Jan 12 13:10:10 crc kubenswrapper[4580]: I0112 13:10:10.936897 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-user-template-provider-selection\") pod \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " Jan 12 13:10:10 crc kubenswrapper[4580]: I0112 13:10:10.936922 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-user-template-error\") pod \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " Jan 12 13:10:10 crc kubenswrapper[4580]: I0112 13:10:10.936954 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-msqlv\" (UniqueName: \"kubernetes.io/projected/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-kube-api-access-msqlv\") pod \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " Jan 12 13:10:10 crc kubenswrapper[4580]: I0112 13:10:10.936970 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-system-service-ca\") pod \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " Jan 12 13:10:10 crc kubenswrapper[4580]: I0112 13:10:10.936997 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-user-template-login\") pod \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " Jan 12 13:10:10 crc kubenswrapper[4580]: I0112 13:10:10.937019 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-user-idp-0-file-data\") pod \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " Jan 12 13:10:10 crc kubenswrapper[4580]: I0112 13:10:10.937047 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-system-cliconfig\") pod \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " Jan 12 13:10:10 crc kubenswrapper[4580]: I0112 13:10:10.937084 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-system-ocp-branding-template\") pod \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " Jan 12 13:10:10 crc kubenswrapper[4580]: I0112 13:10:10.937132 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-system-trusted-ca-bundle\") pod \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " Jan 12 13:10:10 crc kubenswrapper[4580]: I0112 13:10:10.937167 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-system-router-certs\") pod \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\" (UID: \"7db5f72b-6a3e-4a3d-96bd-3e10756b605c\") " Jan 12 13:10:10 crc kubenswrapper[4580]: I0112 13:10:10.937464 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-audit-dir" (OuterVolumeSpecName: 
"audit-dir") pod "7db5f72b-6a3e-4a3d-96bd-3e10756b605c" (UID: "7db5f72b-6a3e-4a3d-96bd-3e10756b605c"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 12 13:10:10 crc kubenswrapper[4580]: I0112 13:10:10.937517 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0403440f-a4b6-4ba4-9ada-70b782ba7bc7-v4-0-config-user-template-login\") pod \"oauth-openshift-55c7c67b6b-jdh2j\" (UID: \"0403440f-a4b6-4ba4-9ada-70b782ba7bc7\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-jdh2j" Jan 12 13:10:10 crc kubenswrapper[4580]: I0112 13:10:10.937570 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xbsf\" (UniqueName: \"kubernetes.io/projected/0403440f-a4b6-4ba4-9ada-70b782ba7bc7-kube-api-access-6xbsf\") pod \"oauth-openshift-55c7c67b6b-jdh2j\" (UID: \"0403440f-a4b6-4ba4-9ada-70b782ba7bc7\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-jdh2j" Jan 12 13:10:10 crc kubenswrapper[4580]: I0112 13:10:10.937605 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0403440f-a4b6-4ba4-9ada-70b782ba7bc7-v4-0-config-system-serving-cert\") pod \"oauth-openshift-55c7c67b6b-jdh2j\" (UID: \"0403440f-a4b6-4ba4-9ada-70b782ba7bc7\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-jdh2j" Jan 12 13:10:10 crc kubenswrapper[4580]: I0112 13:10:10.937640 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0403440f-a4b6-4ba4-9ada-70b782ba7bc7-v4-0-config-system-session\") pod \"oauth-openshift-55c7c67b6b-jdh2j\" (UID: \"0403440f-a4b6-4ba4-9ada-70b782ba7bc7\") " 
pod="openshift-authentication/oauth-openshift-55c7c67b6b-jdh2j" Jan 12 13:10:10 crc kubenswrapper[4580]: I0112 13:10:10.937660 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0403440f-a4b6-4ba4-9ada-70b782ba7bc7-v4-0-config-system-cliconfig\") pod \"oauth-openshift-55c7c67b6b-jdh2j\" (UID: \"0403440f-a4b6-4ba4-9ada-70b782ba7bc7\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-jdh2j" Jan 12 13:10:10 crc kubenswrapper[4580]: I0112 13:10:10.937709 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0403440f-a4b6-4ba4-9ada-70b782ba7bc7-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-55c7c67b6b-jdh2j\" (UID: \"0403440f-a4b6-4ba4-9ada-70b782ba7bc7\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-jdh2j" Jan 12 13:10:10 crc kubenswrapper[4580]: I0112 13:10:10.937781 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0403440f-a4b6-4ba4-9ada-70b782ba7bc7-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-55c7c67b6b-jdh2j\" (UID: \"0403440f-a4b6-4ba4-9ada-70b782ba7bc7\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-jdh2j" Jan 12 13:10:10 crc kubenswrapper[4580]: I0112 13:10:10.937804 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0403440f-a4b6-4ba4-9ada-70b782ba7bc7-v4-0-config-user-template-error\") pod \"oauth-openshift-55c7c67b6b-jdh2j\" (UID: \"0403440f-a4b6-4ba4-9ada-70b782ba7bc7\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-jdh2j" Jan 12 13:10:10 crc kubenswrapper[4580]: I0112 
13:10:10.937820 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0403440f-a4b6-4ba4-9ada-70b782ba7bc7-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-55c7c67b6b-jdh2j\" (UID: \"0403440f-a4b6-4ba4-9ada-70b782ba7bc7\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-jdh2j" Jan 12 13:10:10 crc kubenswrapper[4580]: I0112 13:10:10.937859 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0403440f-a4b6-4ba4-9ada-70b782ba7bc7-v4-0-config-system-router-certs\") pod \"oauth-openshift-55c7c67b6b-jdh2j\" (UID: \"0403440f-a4b6-4ba4-9ada-70b782ba7bc7\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-jdh2j" Jan 12 13:10:10 crc kubenswrapper[4580]: I0112 13:10:10.937885 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0403440f-a4b6-4ba4-9ada-70b782ba7bc7-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-55c7c67b6b-jdh2j\" (UID: \"0403440f-a4b6-4ba4-9ada-70b782ba7bc7\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-jdh2j" Jan 12 13:10:10 crc kubenswrapper[4580]: I0112 13:10:10.937939 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0403440f-a4b6-4ba4-9ada-70b782ba7bc7-v4-0-config-system-service-ca\") pod \"oauth-openshift-55c7c67b6b-jdh2j\" (UID: \"0403440f-a4b6-4ba4-9ada-70b782ba7bc7\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-jdh2j" Jan 12 13:10:10 crc kubenswrapper[4580]: I0112 13:10:10.938016 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" 
(UniqueName: \"kubernetes.io/configmap/0403440f-a4b6-4ba4-9ada-70b782ba7bc7-audit-policies\") pod \"oauth-openshift-55c7c67b6b-jdh2j\" (UID: \"0403440f-a4b6-4ba4-9ada-70b782ba7bc7\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-jdh2j" Jan 12 13:10:10 crc kubenswrapper[4580]: I0112 13:10:10.938047 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0403440f-a4b6-4ba4-9ada-70b782ba7bc7-audit-dir\") pod \"oauth-openshift-55c7c67b6b-jdh2j\" (UID: \"0403440f-a4b6-4ba4-9ada-70b782ba7bc7\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-jdh2j" Jan 12 13:10:10 crc kubenswrapper[4580]: I0112 13:10:10.938124 4580 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 12 13:10:10 crc kubenswrapper[4580]: I0112 13:10:10.938348 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "7db5f72b-6a3e-4a3d-96bd-3e10756b605c" (UID: "7db5f72b-6a3e-4a3d-96bd-3e10756b605c"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:10:10 crc kubenswrapper[4580]: I0112 13:10:10.939731 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "7db5f72b-6a3e-4a3d-96bd-3e10756b605c" (UID: "7db5f72b-6a3e-4a3d-96bd-3e10756b605c"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:10:10 crc kubenswrapper[4580]: I0112 13:10:10.940311 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "7db5f72b-6a3e-4a3d-96bd-3e10756b605c" (UID: "7db5f72b-6a3e-4a3d-96bd-3e10756b605c"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:10:10 crc kubenswrapper[4580]: I0112 13:10:10.940322 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "7db5f72b-6a3e-4a3d-96bd-3e10756b605c" (UID: "7db5f72b-6a3e-4a3d-96bd-3e10756b605c"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:10:10 crc kubenswrapper[4580]: I0112 13:10:10.952579 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "7db5f72b-6a3e-4a3d-96bd-3e10756b605c" (UID: "7db5f72b-6a3e-4a3d-96bd-3e10756b605c"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:10:10 crc kubenswrapper[4580]: I0112 13:10:10.952878 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "7db5f72b-6a3e-4a3d-96bd-3e10756b605c" (UID: "7db5f72b-6a3e-4a3d-96bd-3e10756b605c"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:10:10 crc kubenswrapper[4580]: I0112 13:10:10.953020 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "7db5f72b-6a3e-4a3d-96bd-3e10756b605c" (UID: "7db5f72b-6a3e-4a3d-96bd-3e10756b605c"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:10:10 crc kubenswrapper[4580]: I0112 13:10:10.953087 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "7db5f72b-6a3e-4a3d-96bd-3e10756b605c" (UID: "7db5f72b-6a3e-4a3d-96bd-3e10756b605c"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:10:10 crc kubenswrapper[4580]: I0112 13:10:10.953560 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-kube-api-access-msqlv" (OuterVolumeSpecName: "kube-api-access-msqlv") pod "7db5f72b-6a3e-4a3d-96bd-3e10756b605c" (UID: "7db5f72b-6a3e-4a3d-96bd-3e10756b605c"). InnerVolumeSpecName "kube-api-access-msqlv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:10:10 crc kubenswrapper[4580]: I0112 13:10:10.955896 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "7db5f72b-6a3e-4a3d-96bd-3e10756b605c" (UID: "7db5f72b-6a3e-4a3d-96bd-3e10756b605c"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:10:10 crc kubenswrapper[4580]: I0112 13:10:10.957040 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "7db5f72b-6a3e-4a3d-96bd-3e10756b605c" (UID: "7db5f72b-6a3e-4a3d-96bd-3e10756b605c"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:10:10 crc kubenswrapper[4580]: I0112 13:10:10.957394 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "7db5f72b-6a3e-4a3d-96bd-3e10756b605c" (UID: "7db5f72b-6a3e-4a3d-96bd-3e10756b605c"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:10:10 crc kubenswrapper[4580]: I0112 13:10:10.959189 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "7db5f72b-6a3e-4a3d-96bd-3e10756b605c" (UID: "7db5f72b-6a3e-4a3d-96bd-3e10756b605c"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:10:11 crc kubenswrapper[4580]: I0112 13:10:11.006006 4580 generic.go:334] "Generic (PLEG): container finished" podID="7db5f72b-6a3e-4a3d-96bd-3e10756b605c" containerID="dec8eaf6aa627a54d35c903dadc9e0377962efe122776e533fa5ed3060061ed5" exitCode=0 Jan 12 13:10:11 crc kubenswrapper[4580]: I0112 13:10:11.006052 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm" event={"ID":"7db5f72b-6a3e-4a3d-96bd-3e10756b605c","Type":"ContainerDied","Data":"dec8eaf6aa627a54d35c903dadc9e0377962efe122776e533fa5ed3060061ed5"} Jan 12 13:10:11 crc kubenswrapper[4580]: I0112 13:10:11.006091 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm" event={"ID":"7db5f72b-6a3e-4a3d-96bd-3e10756b605c","Type":"ContainerDied","Data":"8af755dc9b711216d5e9676aa78b28350b03a68f5eaea8c8136bcc92dfb9880d"} Jan 12 13:10:11 crc kubenswrapper[4580]: I0112 13:10:11.006140 4580 scope.go:117] "RemoveContainer" containerID="dec8eaf6aa627a54d35c903dadc9e0377962efe122776e533fa5ed3060061ed5" Jan 12 13:10:11 crc kubenswrapper[4580]: I0112 13:10:11.006160 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-8sbrm" Jan 12 13:10:11 crc kubenswrapper[4580]: I0112 13:10:11.028284 4580 scope.go:117] "RemoveContainer" containerID="dec8eaf6aa627a54d35c903dadc9e0377962efe122776e533fa5ed3060061ed5" Jan 12 13:10:11 crc kubenswrapper[4580]: E0112 13:10:11.030664 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dec8eaf6aa627a54d35c903dadc9e0377962efe122776e533fa5ed3060061ed5\": container with ID starting with dec8eaf6aa627a54d35c903dadc9e0377962efe122776e533fa5ed3060061ed5 not found: ID does not exist" containerID="dec8eaf6aa627a54d35c903dadc9e0377962efe122776e533fa5ed3060061ed5" Jan 12 13:10:11 crc kubenswrapper[4580]: I0112 13:10:11.030731 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dec8eaf6aa627a54d35c903dadc9e0377962efe122776e533fa5ed3060061ed5"} err="failed to get container status \"dec8eaf6aa627a54d35c903dadc9e0377962efe122776e533fa5ed3060061ed5\": rpc error: code = NotFound desc = could not find container \"dec8eaf6aa627a54d35c903dadc9e0377962efe122776e533fa5ed3060061ed5\": container with ID starting with dec8eaf6aa627a54d35c903dadc9e0377962efe122776e533fa5ed3060061ed5 not found: ID does not exist" Jan 12 13:10:11 crc kubenswrapper[4580]: I0112 13:10:11.039832 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8sbrm"] Jan 12 13:10:11 crc kubenswrapper[4580]: I0112 13:10:11.040183 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0403440f-a4b6-4ba4-9ada-70b782ba7bc7-v4-0-config-user-template-error\") pod \"oauth-openshift-55c7c67b6b-jdh2j\" (UID: \"0403440f-a4b6-4ba4-9ada-70b782ba7bc7\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-jdh2j" Jan 12 13:10:11 crc kubenswrapper[4580]: I0112 
13:10:11.040222 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0403440f-a4b6-4ba4-9ada-70b782ba7bc7-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-55c7c67b6b-jdh2j\" (UID: \"0403440f-a4b6-4ba4-9ada-70b782ba7bc7\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-jdh2j" Jan 12 13:10:11 crc kubenswrapper[4580]: I0112 13:10:11.040257 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0403440f-a4b6-4ba4-9ada-70b782ba7bc7-v4-0-config-system-router-certs\") pod \"oauth-openshift-55c7c67b6b-jdh2j\" (UID: \"0403440f-a4b6-4ba4-9ada-70b782ba7bc7\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-jdh2j" Jan 12 13:10:11 crc kubenswrapper[4580]: I0112 13:10:11.040282 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0403440f-a4b6-4ba4-9ada-70b782ba7bc7-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-55c7c67b6b-jdh2j\" (UID: \"0403440f-a4b6-4ba4-9ada-70b782ba7bc7\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-jdh2j" Jan 12 13:10:11 crc kubenswrapper[4580]: I0112 13:10:11.040319 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0403440f-a4b6-4ba4-9ada-70b782ba7bc7-v4-0-config-system-service-ca\") pod \"oauth-openshift-55c7c67b6b-jdh2j\" (UID: \"0403440f-a4b6-4ba4-9ada-70b782ba7bc7\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-jdh2j" Jan 12 13:10:11 crc kubenswrapper[4580]: I0112 13:10:11.040364 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0403440f-a4b6-4ba4-9ada-70b782ba7bc7-audit-policies\") pod 
\"oauth-openshift-55c7c67b6b-jdh2j\" (UID: \"0403440f-a4b6-4ba4-9ada-70b782ba7bc7\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-jdh2j" Jan 12 13:10:11 crc kubenswrapper[4580]: I0112 13:10:11.040393 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0403440f-a4b6-4ba4-9ada-70b782ba7bc7-audit-dir\") pod \"oauth-openshift-55c7c67b6b-jdh2j\" (UID: \"0403440f-a4b6-4ba4-9ada-70b782ba7bc7\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-jdh2j" Jan 12 13:10:11 crc kubenswrapper[4580]: I0112 13:10:11.040442 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0403440f-a4b6-4ba4-9ada-70b782ba7bc7-v4-0-config-user-template-login\") pod \"oauth-openshift-55c7c67b6b-jdh2j\" (UID: \"0403440f-a4b6-4ba4-9ada-70b782ba7bc7\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-jdh2j" Jan 12 13:10:11 crc kubenswrapper[4580]: I0112 13:10:11.040468 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xbsf\" (UniqueName: \"kubernetes.io/projected/0403440f-a4b6-4ba4-9ada-70b782ba7bc7-kube-api-access-6xbsf\") pod \"oauth-openshift-55c7c67b6b-jdh2j\" (UID: \"0403440f-a4b6-4ba4-9ada-70b782ba7bc7\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-jdh2j" Jan 12 13:10:11 crc kubenswrapper[4580]: I0112 13:10:11.040492 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0403440f-a4b6-4ba4-9ada-70b782ba7bc7-v4-0-config-system-serving-cert\") pod \"oauth-openshift-55c7c67b6b-jdh2j\" (UID: \"0403440f-a4b6-4ba4-9ada-70b782ba7bc7\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-jdh2j" Jan 12 13:10:11 crc kubenswrapper[4580]: I0112 13:10:11.040518 4580 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0403440f-a4b6-4ba4-9ada-70b782ba7bc7-v4-0-config-system-session\") pod \"oauth-openshift-55c7c67b6b-jdh2j\" (UID: \"0403440f-a4b6-4ba4-9ada-70b782ba7bc7\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-jdh2j" Jan 12 13:10:11 crc kubenswrapper[4580]: I0112 13:10:11.040540 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0403440f-a4b6-4ba4-9ada-70b782ba7bc7-v4-0-config-system-cliconfig\") pod \"oauth-openshift-55c7c67b6b-jdh2j\" (UID: \"0403440f-a4b6-4ba4-9ada-70b782ba7bc7\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-jdh2j" Jan 12 13:10:11 crc kubenswrapper[4580]: I0112 13:10:11.040571 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0403440f-a4b6-4ba4-9ada-70b782ba7bc7-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-55c7c67b6b-jdh2j\" (UID: \"0403440f-a4b6-4ba4-9ada-70b782ba7bc7\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-jdh2j" Jan 12 13:10:11 crc kubenswrapper[4580]: I0112 13:10:11.040616 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0403440f-a4b6-4ba4-9ada-70b782ba7bc7-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-55c7c67b6b-jdh2j\" (UID: \"0403440f-a4b6-4ba4-9ada-70b782ba7bc7\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-jdh2j" Jan 12 13:10:11 crc kubenswrapper[4580]: I0112 13:10:11.040659 4580 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 
12 13:10:11 crc kubenswrapper[4580]: I0112 13:10:11.040675 4580 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 12 13:10:11 crc kubenswrapper[4580]: I0112 13:10:11.040690 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-msqlv\" (UniqueName: \"kubernetes.io/projected/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-kube-api-access-msqlv\") on node \"crc\" DevicePath \"\"" Jan 12 13:10:11 crc kubenswrapper[4580]: I0112 13:10:11.040702 4580 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 12 13:10:11 crc kubenswrapper[4580]: I0112 13:10:11.040712 4580 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 12 13:10:11 crc kubenswrapper[4580]: I0112 13:10:11.040722 4580 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 12 13:10:11 crc kubenswrapper[4580]: I0112 13:10:11.040734 4580 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 12 13:10:11 crc kubenswrapper[4580]: I0112 13:10:11.040747 4580 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Jan 12 13:10:11 crc kubenswrapper[4580]: I0112 13:10:11.040942 4580 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 12 13:10:11 crc kubenswrapper[4580]: I0112 13:10:11.042065 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0403440f-a4b6-4ba4-9ada-70b782ba7bc7-audit-policies\") pod \"oauth-openshift-55c7c67b6b-jdh2j\" (UID: \"0403440f-a4b6-4ba4-9ada-70b782ba7bc7\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-jdh2j"
Jan 12 13:10:11 crc kubenswrapper[4580]: I0112 13:10:11.042072 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0403440f-a4b6-4ba4-9ada-70b782ba7bc7-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-55c7c67b6b-jdh2j\" (UID: \"0403440f-a4b6-4ba4-9ada-70b782ba7bc7\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-jdh2j"
Jan 12 13:10:11 crc kubenswrapper[4580]: I0112 13:10:11.042648 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0403440f-a4b6-4ba4-9ada-70b782ba7bc7-v4-0-config-system-service-ca\") pod \"oauth-openshift-55c7c67b6b-jdh2j\" (UID: \"0403440f-a4b6-4ba4-9ada-70b782ba7bc7\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-jdh2j"
Jan 12 13:10:11 crc kubenswrapper[4580]: I0112 13:10:11.042713 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0403440f-a4b6-4ba4-9ada-70b782ba7bc7-v4-0-config-system-cliconfig\") pod \"oauth-openshift-55c7c67b6b-jdh2j\" (UID: \"0403440f-a4b6-4ba4-9ada-70b782ba7bc7\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-jdh2j"
Jan 12 13:10:11 crc kubenswrapper[4580]: I0112 13:10:11.042887 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0403440f-a4b6-4ba4-9ada-70b782ba7bc7-audit-dir\") pod \"oauth-openshift-55c7c67b6b-jdh2j\" (UID: \"0403440f-a4b6-4ba4-9ada-70b782ba7bc7\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-jdh2j"
Jan 12 13:10:11 crc kubenswrapper[4580]: I0112 13:10:11.042843 4580 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Jan 12 13:10:11 crc kubenswrapper[4580]: I0112 13:10:11.043592 4580 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Jan 12 13:10:11 crc kubenswrapper[4580]: I0112 13:10:11.043618 4580 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 12 13:10:11 crc kubenswrapper[4580]: I0112 13:10:11.043644 4580 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7db5f72b-6a3e-4a3d-96bd-3e10756b605c-audit-policies\") on node \"crc\" DevicePath \"\""
Jan 12 13:10:11 crc kubenswrapper[4580]: I0112 13:10:11.043748 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0403440f-a4b6-4ba4-9ada-70b782ba7bc7-v4-0-config-system-router-certs\") pod \"oauth-openshift-55c7c67b6b-jdh2j\" (UID: \"0403440f-a4b6-4ba4-9ada-70b782ba7bc7\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-jdh2j"
Jan 12 13:10:11 crc kubenswrapper[4580]: I0112 13:10:11.044710 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0403440f-a4b6-4ba4-9ada-70b782ba7bc7-v4-0-config-user-template-error\") pod \"oauth-openshift-55c7c67b6b-jdh2j\" (UID: \"0403440f-a4b6-4ba4-9ada-70b782ba7bc7\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-jdh2j"
Jan 12 13:10:11 crc kubenswrapper[4580]: I0112 13:10:11.044938 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0403440f-a4b6-4ba4-9ada-70b782ba7bc7-v4-0-config-system-serving-cert\") pod \"oauth-openshift-55c7c67b6b-jdh2j\" (UID: \"0403440f-a4b6-4ba4-9ada-70b782ba7bc7\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-jdh2j"
Jan 12 13:10:11 crc kubenswrapper[4580]: I0112 13:10:11.045146 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0403440f-a4b6-4ba4-9ada-70b782ba7bc7-v4-0-config-user-template-login\") pod \"oauth-openshift-55c7c67b6b-jdh2j\" (UID: \"0403440f-a4b6-4ba4-9ada-70b782ba7bc7\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-jdh2j"
Jan 12 13:10:11 crc kubenswrapper[4580]: I0112 13:10:11.045953 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0403440f-a4b6-4ba4-9ada-70b782ba7bc7-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-55c7c67b6b-jdh2j\" (UID: \"0403440f-a4b6-4ba4-9ada-70b782ba7bc7\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-jdh2j"
Jan 12 13:10:11 crc kubenswrapper[4580]: I0112 13:10:11.046239 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0403440f-a4b6-4ba4-9ada-70b782ba7bc7-v4-0-config-system-session\") pod \"oauth-openshift-55c7c67b6b-jdh2j\" (UID: \"0403440f-a4b6-4ba4-9ada-70b782ba7bc7\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-jdh2j"
Jan 12 13:10:11 crc kubenswrapper[4580]: I0112 13:10:11.046298 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0403440f-a4b6-4ba4-9ada-70b782ba7bc7-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-55c7c67b6b-jdh2j\" (UID: \"0403440f-a4b6-4ba4-9ada-70b782ba7bc7\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-jdh2j"
Jan 12 13:10:11 crc kubenswrapper[4580]: I0112 13:10:11.046564 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0403440f-a4b6-4ba4-9ada-70b782ba7bc7-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-55c7c67b6b-jdh2j\" (UID: \"0403440f-a4b6-4ba4-9ada-70b782ba7bc7\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-jdh2j"
Jan 12 13:10:11 crc kubenswrapper[4580]: I0112 13:10:11.048680 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8sbrm"]
Jan 12 13:10:11 crc kubenswrapper[4580]: I0112 13:10:11.059273 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xbsf\" (UniqueName: \"kubernetes.io/projected/0403440f-a4b6-4ba4-9ada-70b782ba7bc7-kube-api-access-6xbsf\") pod \"oauth-openshift-55c7c67b6b-jdh2j\" (UID: \"0403440f-a4b6-4ba4-9ada-70b782ba7bc7\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-jdh2j"
Jan 12 13:10:11 crc kubenswrapper[4580]: I0112 13:10:11.228705 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-55c7c67b6b-jdh2j"
Jan 12 13:10:11 crc kubenswrapper[4580]: I0112 13:10:11.288999 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7db5f72b-6a3e-4a3d-96bd-3e10756b605c" path="/var/lib/kubelet/pods/7db5f72b-6a3e-4a3d-96bd-3e10756b605c/volumes"
Jan 12 13:10:11 crc kubenswrapper[4580]: I0112 13:10:11.597648 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-55c7c67b6b-jdh2j"]
Jan 12 13:10:12 crc kubenswrapper[4580]: I0112 13:10:12.013636 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-55c7c67b6b-jdh2j" event={"ID":"0403440f-a4b6-4ba4-9ada-70b782ba7bc7","Type":"ContainerStarted","Data":"897279543a4cc1cb245480d66db3d84390c068a272cd17c2ca9decc652bba9c9"}
Jan 12 13:10:12 crc kubenswrapper[4580]: I0112 13:10:12.013940 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-55c7c67b6b-jdh2j"
Jan 12 13:10:12 crc kubenswrapper[4580]: I0112 13:10:12.013959 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-55c7c67b6b-jdh2j" event={"ID":"0403440f-a4b6-4ba4-9ada-70b782ba7bc7","Type":"ContainerStarted","Data":"f82053a94169f69b66cbf6fab91c7aa17707baef7eb6d67b3cc3f0508a25cff0"}
Jan 12 13:10:12 crc kubenswrapper[4580]: I0112 13:10:12.035723 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-55c7c67b6b-jdh2j" podStartSLOduration=27.035691568 podStartE2EDuration="27.035691568s" podCreationTimestamp="2026-01-12 13:09:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:10:12.031919688 +0000 UTC m=+211.076138379" watchObservedRunningTime="2026-01-12 13:10:12.035691568 +0000 UTC m=+211.079910258"
Jan 12 13:10:12 crc kubenswrapper[4580]: I0112 13:10:12.091664 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-55c7c67b6b-jdh2j"
Jan 12 13:10:16 crc kubenswrapper[4580]: I0112 13:10:16.949771 4580 patch_prober.go:28] interesting pod/machine-config-daemon-hdz6l container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 12 13:10:16 crc kubenswrapper[4580]: I0112 13:10:16.950186 4580 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 12 13:10:16 crc kubenswrapper[4580]: I0112 13:10:16.950249 4580 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l"
Jan 12 13:10:16 crc kubenswrapper[4580]: I0112 13:10:16.954121 4580 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"60b7e67369583f18d56633483204d326449c0f7456afe4b4fd1e7134eff438cb"} pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 12 13:10:16 crc kubenswrapper[4580]: I0112 13:10:16.954194 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" containerName="machine-config-daemon" containerID="cri-o://60b7e67369583f18d56633483204d326449c0f7456afe4b4fd1e7134eff438cb" gracePeriod=600
Jan 12 13:10:18 crc kubenswrapper[4580]: I0112 13:10:18.047878 4580 generic.go:334] "Generic (PLEG): container finished" podID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" containerID="60b7e67369583f18d56633483204d326449c0f7456afe4b4fd1e7134eff438cb" exitCode=0
Jan 12 13:10:18 crc kubenswrapper[4580]: I0112 13:10:18.047957 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" event={"ID":"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b","Type":"ContainerDied","Data":"60b7e67369583f18d56633483204d326449c0f7456afe4b4fd1e7134eff438cb"}
Jan 12 13:10:18 crc kubenswrapper[4580]: I0112 13:10:18.048459 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" event={"ID":"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b","Type":"ContainerStarted","Data":"1689fbe54ea63924eb5436687ff3624dfc8f05694ffc76352754b1bc5a4e1401"}
Jan 12 13:10:27 crc kubenswrapper[4580]: I0112 13:10:27.734551 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r98fk"]
Jan 12 13:10:27 crc kubenswrapper[4580]: I0112 13:10:27.735384 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-r98fk" podUID="10b26e31-b5d9-491a-863a-1cc0a102eae8" containerName="registry-server" containerID="cri-o://f4c3673086bf4a7cfae5c40af384963e637179db320a8e40b9bc0a1d303a0fc7" gracePeriod=30
Jan 12 13:10:27 crc kubenswrapper[4580]: I0112 13:10:27.739946 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2l4gk"]
Jan 12 13:10:27 crc kubenswrapper[4580]: I0112 13:10:27.740304 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2l4gk" podUID="eb8c503e-0907-40aa-a053-72d38311b08e" containerName="registry-server" containerID="cri-o://d1568fc031a9b4fb1a172f5e179c472c630fe35f7ea4b9f5de9760f78ffa00ce" gracePeriod=30
Jan 12 13:10:27 crc kubenswrapper[4580]: I0112 13:10:27.745992 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hlckg"]
Jan 12 13:10:27 crc kubenswrapper[4580]: I0112 13:10:27.746253 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-hlckg" podUID="170f7f91-2fd3-49a9-a31e-5d5c8ae98cd3" containerName="marketplace-operator" containerID="cri-o://99b0845d1c96ecd36ed14e772d527fa10f416983d083bf1a071dc1f958a41e6a" gracePeriod=30
Jan 12 13:10:27 crc kubenswrapper[4580]: I0112 13:10:27.754690 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jbkw4"]
Jan 12 13:10:27 crc kubenswrapper[4580]: I0112 13:10:27.754954 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jbkw4" podUID="0fb0ae3e-224b-4dba-8e3d-783df7049f05" containerName="registry-server" containerID="cri-o://3b698a7623bceb5141f2ce989da151b74a925fbec398291710f61b21d7fcc8a9" gracePeriod=30
Jan 12 13:10:27 crc kubenswrapper[4580]: I0112 13:10:27.763418 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-27x9v"]
Jan 12 13:10:27 crc kubenswrapper[4580]: I0112 13:10:27.763602 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-27x9v" podUID="86490260-47c3-47d2-beca-c61e661882ca" containerName="registry-server" containerID="cri-o://e15ac3e18763526d9f2f0cb9a4613cd4641cef85563dc907f42c500fae17360d" gracePeriod=30
Jan 12 13:10:27 crc kubenswrapper[4580]: I0112 13:10:27.777958 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-n599t"]
Jan 12 13:10:27 crc kubenswrapper[4580]: I0112 13:10:27.778892 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-n599t"
Jan 12 13:10:27 crc kubenswrapper[4580]: I0112 13:10:27.780977 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-n599t"]
Jan 12 13:10:27 crc kubenswrapper[4580]: I0112 13:10:27.854012 4580 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hlckg container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body=
Jan 12 13:10:27 crc kubenswrapper[4580]: I0112 13:10:27.854137 4580 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-hlckg" podUID="170f7f91-2fd3-49a9-a31e-5d5c8ae98cd3" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused"
Jan 12 13:10:27 crc kubenswrapper[4580]: I0112 13:10:27.856246 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/53e207fa-a98f-4554-8ed8-67ffaa6e5955-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-n599t\" (UID: \"53e207fa-a98f-4554-8ed8-67ffaa6e5955\") " pod="openshift-marketplace/marketplace-operator-79b997595-n599t"
Jan 12 13:10:27 crc kubenswrapper[4580]: I0112 13:10:27.856311 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/53e207fa-a98f-4554-8ed8-67ffaa6e5955-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-n599t\" (UID: \"53e207fa-a98f-4554-8ed8-67ffaa6e5955\") " pod="openshift-marketplace/marketplace-operator-79b997595-n599t"
Jan 12 13:10:27 crc kubenswrapper[4580]: I0112 13:10:27.856360 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bqbj\" (UniqueName: \"kubernetes.io/projected/53e207fa-a98f-4554-8ed8-67ffaa6e5955-kube-api-access-4bqbj\") pod \"marketplace-operator-79b997595-n599t\" (UID: \"53e207fa-a98f-4554-8ed8-67ffaa6e5955\") " pod="openshift-marketplace/marketplace-operator-79b997595-n599t"
Jan 12 13:10:27 crc kubenswrapper[4580]: I0112 13:10:27.957296 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bqbj\" (UniqueName: \"kubernetes.io/projected/53e207fa-a98f-4554-8ed8-67ffaa6e5955-kube-api-access-4bqbj\") pod \"marketplace-operator-79b997595-n599t\" (UID: \"53e207fa-a98f-4554-8ed8-67ffaa6e5955\") " pod="openshift-marketplace/marketplace-operator-79b997595-n599t"
Jan 12 13:10:27 crc kubenswrapper[4580]: I0112 13:10:27.957362 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/53e207fa-a98f-4554-8ed8-67ffaa6e5955-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-n599t\" (UID: \"53e207fa-a98f-4554-8ed8-67ffaa6e5955\") " pod="openshift-marketplace/marketplace-operator-79b997595-n599t"
Jan 12 13:10:27 crc kubenswrapper[4580]: I0112 13:10:27.957391 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/53e207fa-a98f-4554-8ed8-67ffaa6e5955-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-n599t\" (UID: \"53e207fa-a98f-4554-8ed8-67ffaa6e5955\") " pod="openshift-marketplace/marketplace-operator-79b997595-n599t"
Jan 12 13:10:27 crc kubenswrapper[4580]: I0112 13:10:27.958773 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/53e207fa-a98f-4554-8ed8-67ffaa6e5955-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-n599t\" (UID: \"53e207fa-a98f-4554-8ed8-67ffaa6e5955\") " pod="openshift-marketplace/marketplace-operator-79b997595-n599t"
Jan 12 13:10:27 crc kubenswrapper[4580]: I0112 13:10:27.962854 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/53e207fa-a98f-4554-8ed8-67ffaa6e5955-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-n599t\" (UID: \"53e207fa-a98f-4554-8ed8-67ffaa6e5955\") " pod="openshift-marketplace/marketplace-operator-79b997595-n599t"
Jan 12 13:10:27 crc kubenswrapper[4580]: I0112 13:10:27.972395 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bqbj\" (UniqueName: \"kubernetes.io/projected/53e207fa-a98f-4554-8ed8-67ffaa6e5955-kube-api-access-4bqbj\") pod \"marketplace-operator-79b997595-n599t\" (UID: \"53e207fa-a98f-4554-8ed8-67ffaa6e5955\") " pod="openshift-marketplace/marketplace-operator-79b997595-n599t"
Jan 12 13:10:28 crc kubenswrapper[4580]: I0112 13:10:28.112797 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-n599t"
Jan 12 13:10:28 crc kubenswrapper[4580]: I0112 13:10:28.112981 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r98fk" event={"ID":"10b26e31-b5d9-491a-863a-1cc0a102eae8","Type":"ContainerDied","Data":"f4c3673086bf4a7cfae5c40af384963e637179db320a8e40b9bc0a1d303a0fc7"}
Jan 12 13:10:28 crc kubenswrapper[4580]: I0112 13:10:28.112953 4580 generic.go:334] "Generic (PLEG): container finished" podID="10b26e31-b5d9-491a-863a-1cc0a102eae8" containerID="f4c3673086bf4a7cfae5c40af384963e637179db320a8e40b9bc0a1d303a0fc7" exitCode=0
Jan 12 13:10:28 crc kubenswrapper[4580]: I0112 13:10:28.114579 4580 generic.go:334] "Generic (PLEG): container finished" podID="170f7f91-2fd3-49a9-a31e-5d5c8ae98cd3" containerID="99b0845d1c96ecd36ed14e772d527fa10f416983d083bf1a071dc1f958a41e6a" exitCode=0
Jan 12 13:10:28 crc kubenswrapper[4580]: I0112 13:10:28.114625 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hlckg" event={"ID":"170f7f91-2fd3-49a9-a31e-5d5c8ae98cd3","Type":"ContainerDied","Data":"99b0845d1c96ecd36ed14e772d527fa10f416983d083bf1a071dc1f958a41e6a"}
Jan 12 13:10:28 crc kubenswrapper[4580]: I0112 13:10:28.116319 4580 generic.go:334] "Generic (PLEG): container finished" podID="eb8c503e-0907-40aa-a053-72d38311b08e" containerID="d1568fc031a9b4fb1a172f5e179c472c630fe35f7ea4b9f5de9760f78ffa00ce" exitCode=0
Jan 12 13:10:28 crc kubenswrapper[4580]: I0112 13:10:28.116357 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2l4gk" event={"ID":"eb8c503e-0907-40aa-a053-72d38311b08e","Type":"ContainerDied","Data":"d1568fc031a9b4fb1a172f5e179c472c630fe35f7ea4b9f5de9760f78ffa00ce"}
Jan 12 13:10:28 crc kubenswrapper[4580]: I0112 13:10:28.118264 4580 generic.go:334] "Generic (PLEG): container finished" podID="0fb0ae3e-224b-4dba-8e3d-783df7049f05" containerID="3b698a7623bceb5141f2ce989da151b74a925fbec398291710f61b21d7fcc8a9" exitCode=0
Jan 12 13:10:28 crc kubenswrapper[4580]: I0112 13:10:28.118306 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jbkw4" event={"ID":"0fb0ae3e-224b-4dba-8e3d-783df7049f05","Type":"ContainerDied","Data":"3b698a7623bceb5141f2ce989da151b74a925fbec398291710f61b21d7fcc8a9"}
Jan 12 13:10:28 crc kubenswrapper[4580]: I0112 13:10:28.119886 4580 generic.go:334] "Generic (PLEG): container finished" podID="86490260-47c3-47d2-beca-c61e661882ca" containerID="e15ac3e18763526d9f2f0cb9a4613cd4641cef85563dc907f42c500fae17360d" exitCode=0
Jan 12 13:10:28 crc kubenswrapper[4580]: I0112 13:10:28.119911 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-27x9v" event={"ID":"86490260-47c3-47d2-beca-c61e661882ca","Type":"ContainerDied","Data":"e15ac3e18763526d9f2f0cb9a4613cd4641cef85563dc907f42c500fae17360d"}
Jan 12 13:10:28 crc kubenswrapper[4580]: I0112 13:10:28.259624 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2l4gk"
Jan 12 13:10:28 crc kubenswrapper[4580]: I0112 13:10:28.363031 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwx9p\" (UniqueName: \"kubernetes.io/projected/eb8c503e-0907-40aa-a053-72d38311b08e-kube-api-access-mwx9p\") pod \"eb8c503e-0907-40aa-a053-72d38311b08e\" (UID: \"eb8c503e-0907-40aa-a053-72d38311b08e\") "
Jan 12 13:10:28 crc kubenswrapper[4580]: I0112 13:10:28.363090 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb8c503e-0907-40aa-a053-72d38311b08e-utilities\") pod \"eb8c503e-0907-40aa-a053-72d38311b08e\" (UID: \"eb8c503e-0907-40aa-a053-72d38311b08e\") "
Jan 12 13:10:28 crc kubenswrapper[4580]: I0112 13:10:28.363145 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb8c503e-0907-40aa-a053-72d38311b08e-catalog-content\") pod \"eb8c503e-0907-40aa-a053-72d38311b08e\" (UID: \"eb8c503e-0907-40aa-a053-72d38311b08e\") "
Jan 12 13:10:28 crc kubenswrapper[4580]: I0112 13:10:28.363868 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb8c503e-0907-40aa-a053-72d38311b08e-utilities" (OuterVolumeSpecName: "utilities") pod "eb8c503e-0907-40aa-a053-72d38311b08e" (UID: "eb8c503e-0907-40aa-a053-72d38311b08e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 12 13:10:28 crc kubenswrapper[4580]: I0112 13:10:28.376487 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb8c503e-0907-40aa-a053-72d38311b08e-kube-api-access-mwx9p" (OuterVolumeSpecName: "kube-api-access-mwx9p") pod "eb8c503e-0907-40aa-a053-72d38311b08e" (UID: "eb8c503e-0907-40aa-a053-72d38311b08e"). InnerVolumeSpecName "kube-api-access-mwx9p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 13:10:28 crc kubenswrapper[4580]: I0112 13:10:28.413805 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb8c503e-0907-40aa-a053-72d38311b08e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eb8c503e-0907-40aa-a053-72d38311b08e" (UID: "eb8c503e-0907-40aa-a053-72d38311b08e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 12 13:10:28 crc kubenswrapper[4580]: I0112 13:10:28.464739 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwx9p\" (UniqueName: \"kubernetes.io/projected/eb8c503e-0907-40aa-a053-72d38311b08e-kube-api-access-mwx9p\") on node \"crc\" DevicePath \"\""
Jan 12 13:10:28 crc kubenswrapper[4580]: I0112 13:10:28.464772 4580 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb8c503e-0907-40aa-a053-72d38311b08e-utilities\") on node \"crc\" DevicePath \"\""
Jan 12 13:10:28 crc kubenswrapper[4580]: I0112 13:10:28.464782 4580 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb8c503e-0907-40aa-a053-72d38311b08e-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 12 13:10:28 crc kubenswrapper[4580]: I0112 13:10:28.501412 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-27x9v"
Jan 12 13:10:28 crc kubenswrapper[4580]: I0112 13:10:28.539513 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hlckg"
Jan 12 13:10:28 crc kubenswrapper[4580]: I0112 13:10:28.558817 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jbkw4"
Jan 12 13:10:28 crc kubenswrapper[4580]: I0112 13:10:28.560200 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r98fk"
Jan 12 13:10:28 crc kubenswrapper[4580]: I0112 13:10:28.566003 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/170f7f91-2fd3-49a9-a31e-5d5c8ae98cd3-marketplace-operator-metrics\") pod \"170f7f91-2fd3-49a9-a31e-5d5c8ae98cd3\" (UID: \"170f7f91-2fd3-49a9-a31e-5d5c8ae98cd3\") "
Jan 12 13:10:28 crc kubenswrapper[4580]: I0112 13:10:28.566151 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86490260-47c3-47d2-beca-c61e661882ca-utilities\") pod \"86490260-47c3-47d2-beca-c61e661882ca\" (UID: \"86490260-47c3-47d2-beca-c61e661882ca\") "
Jan 12 13:10:28 crc kubenswrapper[4580]: I0112 13:10:28.566192 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/170f7f91-2fd3-49a9-a31e-5d5c8ae98cd3-marketplace-trusted-ca\") pod \"170f7f91-2fd3-49a9-a31e-5d5c8ae98cd3\" (UID: \"170f7f91-2fd3-49a9-a31e-5d5c8ae98cd3\") "
Jan 12 13:10:28 crc kubenswrapper[4580]: I0112 13:10:28.566223 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84lhw\" (UniqueName: \"kubernetes.io/projected/170f7f91-2fd3-49a9-a31e-5d5c8ae98cd3-kube-api-access-84lhw\") pod \"170f7f91-2fd3-49a9-a31e-5d5c8ae98cd3\" (UID: \"170f7f91-2fd3-49a9-a31e-5d5c8ae98cd3\") "
Jan 12 13:10:28 crc kubenswrapper[4580]: I0112 13:10:28.566242 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86490260-47c3-47d2-beca-c61e661882ca-catalog-content\") pod \"86490260-47c3-47d2-beca-c61e661882ca\" (UID: \"86490260-47c3-47d2-beca-c61e661882ca\") "
Jan 12 13:10:28 crc kubenswrapper[4580]: I0112 13:10:28.566260 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45sjh\" (UniqueName: \"kubernetes.io/projected/86490260-47c3-47d2-beca-c61e661882ca-kube-api-access-45sjh\") pod \"86490260-47c3-47d2-beca-c61e661882ca\" (UID: \"86490260-47c3-47d2-beca-c61e661882ca\") "
Jan 12 13:10:28 crc kubenswrapper[4580]: I0112 13:10:28.566848 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86490260-47c3-47d2-beca-c61e661882ca-utilities" (OuterVolumeSpecName: "utilities") pod "86490260-47c3-47d2-beca-c61e661882ca" (UID: "86490260-47c3-47d2-beca-c61e661882ca"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 12 13:10:28 crc kubenswrapper[4580]: I0112 13:10:28.566864 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/170f7f91-2fd3-49a9-a31e-5d5c8ae98cd3-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "170f7f91-2fd3-49a9-a31e-5d5c8ae98cd3" (UID: "170f7f91-2fd3-49a9-a31e-5d5c8ae98cd3"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:10:28 crc kubenswrapper[4580]: I0112 13:10:28.570154 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/170f7f91-2fd3-49a9-a31e-5d5c8ae98cd3-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "170f7f91-2fd3-49a9-a31e-5d5c8ae98cd3" (UID: "170f7f91-2fd3-49a9-a31e-5d5c8ae98cd3"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:10:28 crc kubenswrapper[4580]: I0112 13:10:28.571035 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/170f7f91-2fd3-49a9-a31e-5d5c8ae98cd3-kube-api-access-84lhw" (OuterVolumeSpecName: "kube-api-access-84lhw") pod "170f7f91-2fd3-49a9-a31e-5d5c8ae98cd3" (UID: "170f7f91-2fd3-49a9-a31e-5d5c8ae98cd3"). InnerVolumeSpecName "kube-api-access-84lhw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 13:10:28 crc kubenswrapper[4580]: I0112 13:10:28.573389 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86490260-47c3-47d2-beca-c61e661882ca-kube-api-access-45sjh" (OuterVolumeSpecName: "kube-api-access-45sjh") pod "86490260-47c3-47d2-beca-c61e661882ca" (UID: "86490260-47c3-47d2-beca-c61e661882ca"). InnerVolumeSpecName "kube-api-access-45sjh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 13:10:28 crc kubenswrapper[4580]: I0112 13:10:28.667547 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fb0ae3e-224b-4dba-8e3d-783df7049f05-catalog-content\") pod \"0fb0ae3e-224b-4dba-8e3d-783df7049f05\" (UID: \"0fb0ae3e-224b-4dba-8e3d-783df7049f05\") "
Jan 12 13:10:28 crc kubenswrapper[4580]: I0112 13:10:28.667592 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10b26e31-b5d9-491a-863a-1cc0a102eae8-utilities\") pod \"10b26e31-b5d9-491a-863a-1cc0a102eae8\" (UID: \"10b26e31-b5d9-491a-863a-1cc0a102eae8\") "
Jan 12 13:10:28 crc kubenswrapper[4580]: I0112 13:10:28.667640 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10b26e31-b5d9-491a-863a-1cc0a102eae8-catalog-content\") pod \"10b26e31-b5d9-491a-863a-1cc0a102eae8\" (UID: \"10b26e31-b5d9-491a-863a-1cc0a102eae8\") "
Jan 12 13:10:28 crc kubenswrapper[4580]: I0112 13:10:28.667661 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-klwrq\" (UniqueName: \"kubernetes.io/projected/10b26e31-b5d9-491a-863a-1cc0a102eae8-kube-api-access-klwrq\") pod \"10b26e31-b5d9-491a-863a-1cc0a102eae8\" (UID: \"10b26e31-b5d9-491a-863a-1cc0a102eae8\") "
Jan 12 13:10:28 crc kubenswrapper[4580]: I0112 13:10:28.667800 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5f9qk\" (UniqueName: \"kubernetes.io/projected/0fb0ae3e-224b-4dba-8e3d-783df7049f05-kube-api-access-5f9qk\") pod \"0fb0ae3e-224b-4dba-8e3d-783df7049f05\" (UID: \"0fb0ae3e-224b-4dba-8e3d-783df7049f05\") "
Jan 12 13:10:28 crc kubenswrapper[4580]: I0112 13:10:28.667888 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fb0ae3e-224b-4dba-8e3d-783df7049f05-utilities\") pod \"0fb0ae3e-224b-4dba-8e3d-783df7049f05\" (UID: \"0fb0ae3e-224b-4dba-8e3d-783df7049f05\") "
Jan 12 13:10:28 crc kubenswrapper[4580]: I0112 13:10:28.668201 4580 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86490260-47c3-47d2-beca-c61e661882ca-utilities\") on node \"crc\" DevicePath \"\""
Jan 12 13:10:28 crc kubenswrapper[4580]: I0112 13:10:28.668227 4580 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/170f7f91-2fd3-49a9-a31e-5d5c8ae98cd3-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 12 13:10:28 crc kubenswrapper[4580]: I0112 13:10:28.668238 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84lhw\" (UniqueName: \"kubernetes.io/projected/170f7f91-2fd3-49a9-a31e-5d5c8ae98cd3-kube-api-access-84lhw\") on node \"crc\" DevicePath \"\""
Jan 12 13:10:28 crc kubenswrapper[4580]: I0112 13:10:28.668247 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45sjh\" (UniqueName: \"kubernetes.io/projected/86490260-47c3-47d2-beca-c61e661882ca-kube-api-access-45sjh\") on node \"crc\" DevicePath \"\""
Jan 12 13:10:28 crc kubenswrapper[4580]: I0112 13:10:28.668261 4580 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/170f7f91-2fd3-49a9-a31e-5d5c8ae98cd3-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Jan 12 13:10:28 crc kubenswrapper[4580]: I0112 13:10:28.668872 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fb0ae3e-224b-4dba-8e3d-783df7049f05-utilities" (OuterVolumeSpecName: "utilities") pod "0fb0ae3e-224b-4dba-8e3d-783df7049f05" (UID: "0fb0ae3e-224b-4dba-8e3d-783df7049f05"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 12 13:10:28 crc kubenswrapper[4580]: I0112 13:10:28.670195 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10b26e31-b5d9-491a-863a-1cc0a102eae8-utilities" (OuterVolumeSpecName: "utilities") pod "10b26e31-b5d9-491a-863a-1cc0a102eae8" (UID: "10b26e31-b5d9-491a-863a-1cc0a102eae8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 12 13:10:28 crc kubenswrapper[4580]: I0112 13:10:28.672435 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-n599t"]
Jan 12 13:10:28 crc kubenswrapper[4580]: I0112 13:10:28.672784 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10b26e31-b5d9-491a-863a-1cc0a102eae8-kube-api-access-klwrq" (OuterVolumeSpecName: "kube-api-access-klwrq") pod "10b26e31-b5d9-491a-863a-1cc0a102eae8" (UID: "10b26e31-b5d9-491a-863a-1cc0a102eae8"). InnerVolumeSpecName "kube-api-access-klwrq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 13:10:28 crc kubenswrapper[4580]: I0112 13:10:28.673975 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fb0ae3e-224b-4dba-8e3d-783df7049f05-kube-api-access-5f9qk" (OuterVolumeSpecName: "kube-api-access-5f9qk") pod "0fb0ae3e-224b-4dba-8e3d-783df7049f05" (UID: "0fb0ae3e-224b-4dba-8e3d-783df7049f05"). InnerVolumeSpecName "kube-api-access-5f9qk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 13:10:28 crc kubenswrapper[4580]: W0112 13:10:28.675669 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53e207fa_a98f_4554_8ed8_67ffaa6e5955.slice/crio-06574a367ecda29ac5ecab958a692a238d2af55c297eaceb208e0fd57eaaffcc WatchSource:0}: Error finding container 06574a367ecda29ac5ecab958a692a238d2af55c297eaceb208e0fd57eaaffcc: Status 404 returned error can't find the container with id 06574a367ecda29ac5ecab958a692a238d2af55c297eaceb208e0fd57eaaffcc
Jan 12 13:10:28 crc kubenswrapper[4580]: I0112 13:10:28.694392 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86490260-47c3-47d2-beca-c61e661882ca-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "86490260-47c3-47d2-beca-c61e661882ca" (UID: "86490260-47c3-47d2-beca-c61e661882ca"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 12 13:10:28 crc kubenswrapper[4580]: I0112 13:10:28.698932 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fb0ae3e-224b-4dba-8e3d-783df7049f05-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0fb0ae3e-224b-4dba-8e3d-783df7049f05" (UID: "0fb0ae3e-224b-4dba-8e3d-783df7049f05"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 12 13:10:28 crc kubenswrapper[4580]: I0112 13:10:28.721765 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10b26e31-b5d9-491a-863a-1cc0a102eae8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "10b26e31-b5d9-491a-863a-1cc0a102eae8" (UID: "10b26e31-b5d9-491a-863a-1cc0a102eae8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 12 13:10:28 crc kubenswrapper[4580]: I0112 13:10:28.769428 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5f9qk\" (UniqueName: \"kubernetes.io/projected/0fb0ae3e-224b-4dba-8e3d-783df7049f05-kube-api-access-5f9qk\") on node \"crc\" DevicePath \"\""
Jan 12 13:10:28 crc kubenswrapper[4580]: I0112 13:10:28.769749 4580 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86490260-47c3-47d2-beca-c61e661882ca-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 12 13:10:28 crc kubenswrapper[4580]: I0112 13:10:28.769762 4580 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fb0ae3e-224b-4dba-8e3d-783df7049f05-utilities\") on node \"crc\" DevicePath \"\""
Jan 12 13:10:28 crc kubenswrapper[4580]: I0112 13:10:28.769794 4580 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fb0ae3e-224b-4dba-8e3d-783df7049f05-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 12 13:10:28 crc kubenswrapper[4580]: I0112 13:10:28.769806 4580 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10b26e31-b5d9-491a-863a-1cc0a102eae8-utilities\") on node \"crc\" DevicePath \"\""
Jan 12 13:10:28 crc kubenswrapper[4580]: I0112 13:10:28.769815 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-klwrq\" (UniqueName: 
\"kubernetes.io/projected/10b26e31-b5d9-491a-863a-1cc0a102eae8-kube-api-access-klwrq\") on node \"crc\" DevicePath \"\"" Jan 12 13:10:28 crc kubenswrapper[4580]: I0112 13:10:28.769829 4580 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10b26e31-b5d9-491a-863a-1cc0a102eae8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 12 13:10:29 crc kubenswrapper[4580]: I0112 13:10:29.130307 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hlckg" event={"ID":"170f7f91-2fd3-49a9-a31e-5d5c8ae98cd3","Type":"ContainerDied","Data":"220f5c0d687656afa3a771f386a79ca25a0f129f9c6c1dd92e51847fc60e37ee"} Jan 12 13:10:29 crc kubenswrapper[4580]: I0112 13:10:29.130569 4580 scope.go:117] "RemoveContainer" containerID="99b0845d1c96ecd36ed14e772d527fa10f416983d083bf1a071dc1f958a41e6a" Jan 12 13:10:29 crc kubenswrapper[4580]: I0112 13:10:29.130352 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hlckg" Jan 12 13:10:29 crc kubenswrapper[4580]: I0112 13:10:29.132094 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-n599t" event={"ID":"53e207fa-a98f-4554-8ed8-67ffaa6e5955","Type":"ContainerStarted","Data":"63164ef5a85a042ce74fd007c8f0d7bdf44780eea8a09bfd5538b7135dc60b8d"} Jan 12 13:10:29 crc kubenswrapper[4580]: I0112 13:10:29.132184 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-n599t" event={"ID":"53e207fa-a98f-4554-8ed8-67ffaa6e5955","Type":"ContainerStarted","Data":"06574a367ecda29ac5ecab958a692a238d2af55c297eaceb208e0fd57eaaffcc"} Jan 12 13:10:29 crc kubenswrapper[4580]: I0112 13:10:29.132309 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-n599t" Jan 12 13:10:29 crc kubenswrapper[4580]: I0112 13:10:29.135876 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2l4gk" event={"ID":"eb8c503e-0907-40aa-a053-72d38311b08e","Type":"ContainerDied","Data":"0a62d7903743b02b9a1866ab21f4c9040a949e36aaa01a5b1a0db81e6f7a5b88"} Jan 12 13:10:29 crc kubenswrapper[4580]: I0112 13:10:29.136003 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2l4gk" Jan 12 13:10:29 crc kubenswrapper[4580]: I0112 13:10:29.137823 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-n599t" Jan 12 13:10:29 crc kubenswrapper[4580]: I0112 13:10:29.138351 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jbkw4" event={"ID":"0fb0ae3e-224b-4dba-8e3d-783df7049f05","Type":"ContainerDied","Data":"08717937b198e2faa13c1edf55905e03895d8f396a82ef452d25eeb963b73c12"} Jan 12 13:10:29 crc kubenswrapper[4580]: I0112 13:10:29.138485 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jbkw4" Jan 12 13:10:29 crc kubenswrapper[4580]: I0112 13:10:29.144949 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-27x9v" event={"ID":"86490260-47c3-47d2-beca-c61e661882ca","Type":"ContainerDied","Data":"942357b648f305e36a9a6ca4b04792356ce2b64f099c2f3aa5c2b85e2d3e5fd0"} Jan 12 13:10:29 crc kubenswrapper[4580]: I0112 13:10:29.144975 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-27x9v" Jan 12 13:10:29 crc kubenswrapper[4580]: I0112 13:10:29.149117 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r98fk" event={"ID":"10b26e31-b5d9-491a-863a-1cc0a102eae8","Type":"ContainerDied","Data":"d7a8f303b0c0f45ad972e668d5dbc0110cef01b3b1e5e1fe47afa9658bcfd71a"} Jan 12 13:10:29 crc kubenswrapper[4580]: I0112 13:10:29.149209 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r98fk" Jan 12 13:10:29 crc kubenswrapper[4580]: I0112 13:10:29.157412 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-n599t" podStartSLOduration=2.15739881 podStartE2EDuration="2.15739881s" podCreationTimestamp="2026-01-12 13:10:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:10:29.156355639 +0000 UTC m=+228.200574329" watchObservedRunningTime="2026-01-12 13:10:29.15739881 +0000 UTC m=+228.201617500" Jan 12 13:10:29 crc kubenswrapper[4580]: I0112 13:10:29.163603 4580 scope.go:117] "RemoveContainer" containerID="d1568fc031a9b4fb1a172f5e179c472c630fe35f7ea4b9f5de9760f78ffa00ce" Jan 12 13:10:29 crc kubenswrapper[4580]: I0112 13:10:29.176577 4580 scope.go:117] "RemoveContainer" containerID="7eee35cf648208fcb5060695b329def2341070d699c51d5ff1fe7cd0d7144498" Jan 12 13:10:29 crc kubenswrapper[4580]: I0112 13:10:29.201713 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hlckg"] Jan 12 13:10:29 crc kubenswrapper[4580]: I0112 13:10:29.206897 4580 scope.go:117] "RemoveContainer" containerID="1cf789010390759d1c5a8ec178ab42bbbefddf7b6afd81b30458cd658603797c" Jan 12 13:10:29 crc kubenswrapper[4580]: I0112 13:10:29.227158 4580 scope.go:117] "RemoveContainer" containerID="3b698a7623bceb5141f2ce989da151b74a925fbec398291710f61b21d7fcc8a9" Jan 12 13:10:29 crc kubenswrapper[4580]: I0112 13:10:29.227246 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hlckg"] Jan 12 13:10:29 crc kubenswrapper[4580]: I0112 13:10:29.230512 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-27x9v"] Jan 12 13:10:29 crc kubenswrapper[4580]: I0112 13:10:29.233280 4580 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-27x9v"] Jan 12 13:10:29 crc kubenswrapper[4580]: I0112 13:10:29.238918 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2l4gk"] Jan 12 13:10:29 crc kubenswrapper[4580]: I0112 13:10:29.242253 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2l4gk"] Jan 12 13:10:29 crc kubenswrapper[4580]: I0112 13:10:29.242402 4580 scope.go:117] "RemoveContainer" containerID="35dc66437144d6ef207df57ffee28160c2e9827877ae57c9271d90435a4efff8" Jan 12 13:10:29 crc kubenswrapper[4580]: I0112 13:10:29.247371 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jbkw4"] Jan 12 13:10:29 crc kubenswrapper[4580]: I0112 13:10:29.251565 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jbkw4"] Jan 12 13:10:29 crc kubenswrapper[4580]: I0112 13:10:29.254749 4580 scope.go:117] "RemoveContainer" containerID="0985df62124517b54fa54c1aecf0ed97bb6b2efcf7d5fdcfd0292831afc5e65a" Jan 12 13:10:29 crc kubenswrapper[4580]: I0112 13:10:29.254755 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r98fk"] Jan 12 13:10:29 crc kubenswrapper[4580]: I0112 13:10:29.258194 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-r98fk"] Jan 12 13:10:29 crc kubenswrapper[4580]: I0112 13:10:29.267118 4580 scope.go:117] "RemoveContainer" containerID="e15ac3e18763526d9f2f0cb9a4613cd4641cef85563dc907f42c500fae17360d" Jan 12 13:10:29 crc kubenswrapper[4580]: I0112 13:10:29.277919 4580 scope.go:117] "RemoveContainer" containerID="d189db6514b0838d1a80d278fbac64cbc8379dfd984f6b8e31b9598e680fe7e0" Jan 12 13:10:29 crc kubenswrapper[4580]: I0112 13:10:29.286852 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="0fb0ae3e-224b-4dba-8e3d-783df7049f05" path="/var/lib/kubelet/pods/0fb0ae3e-224b-4dba-8e3d-783df7049f05/volumes" Jan 12 13:10:29 crc kubenswrapper[4580]: I0112 13:10:29.287510 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10b26e31-b5d9-491a-863a-1cc0a102eae8" path="/var/lib/kubelet/pods/10b26e31-b5d9-491a-863a-1cc0a102eae8/volumes" Jan 12 13:10:29 crc kubenswrapper[4580]: I0112 13:10:29.288217 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="170f7f91-2fd3-49a9-a31e-5d5c8ae98cd3" path="/var/lib/kubelet/pods/170f7f91-2fd3-49a9-a31e-5d5c8ae98cd3/volumes" Jan 12 13:10:29 crc kubenswrapper[4580]: I0112 13:10:29.289168 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86490260-47c3-47d2-beca-c61e661882ca" path="/var/lib/kubelet/pods/86490260-47c3-47d2-beca-c61e661882ca/volumes" Jan 12 13:10:29 crc kubenswrapper[4580]: I0112 13:10:29.289672 4580 scope.go:117] "RemoveContainer" containerID="37162311171ec70deba7472701769b25ac30bf8be23909920c28ac7b0cc39579" Jan 12 13:10:29 crc kubenswrapper[4580]: I0112 13:10:29.289877 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb8c503e-0907-40aa-a053-72d38311b08e" path="/var/lib/kubelet/pods/eb8c503e-0907-40aa-a053-72d38311b08e/volumes" Jan 12 13:10:29 crc kubenswrapper[4580]: I0112 13:10:29.305035 4580 scope.go:117] "RemoveContainer" containerID="f4c3673086bf4a7cfae5c40af384963e637179db320a8e40b9bc0a1d303a0fc7" Jan 12 13:10:29 crc kubenswrapper[4580]: I0112 13:10:29.318943 4580 scope.go:117] "RemoveContainer" containerID="077825a94a013c7a66ae49697fa90d3d5cdfdeb9df58f5f69fa07b8d0d2e9338" Jan 12 13:10:29 crc kubenswrapper[4580]: I0112 13:10:29.333554 4580 scope.go:117] "RemoveContainer" containerID="6d9d157ced800fda22147f0e28254351ac60fe58e22a7235cafea546d96842f8" Jan 12 13:10:29 crc kubenswrapper[4580]: I0112 13:10:29.950383 4580 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-marketplace-w5lwr"] Jan 12 13:10:29 crc kubenswrapper[4580]: E0112 13:10:29.950661 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10b26e31-b5d9-491a-863a-1cc0a102eae8" containerName="registry-server" Jan 12 13:10:29 crc kubenswrapper[4580]: I0112 13:10:29.950678 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="10b26e31-b5d9-491a-863a-1cc0a102eae8" containerName="registry-server" Jan 12 13:10:29 crc kubenswrapper[4580]: E0112 13:10:29.950695 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb8c503e-0907-40aa-a053-72d38311b08e" containerName="registry-server" Jan 12 13:10:29 crc kubenswrapper[4580]: I0112 13:10:29.950702 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb8c503e-0907-40aa-a053-72d38311b08e" containerName="registry-server" Jan 12 13:10:29 crc kubenswrapper[4580]: E0112 13:10:29.950714 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10b26e31-b5d9-491a-863a-1cc0a102eae8" containerName="extract-content" Jan 12 13:10:29 crc kubenswrapper[4580]: I0112 13:10:29.950721 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="10b26e31-b5d9-491a-863a-1cc0a102eae8" containerName="extract-content" Jan 12 13:10:29 crc kubenswrapper[4580]: E0112 13:10:29.950727 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86490260-47c3-47d2-beca-c61e661882ca" containerName="registry-server" Jan 12 13:10:29 crc kubenswrapper[4580]: I0112 13:10:29.950733 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="86490260-47c3-47d2-beca-c61e661882ca" containerName="registry-server" Jan 12 13:10:29 crc kubenswrapper[4580]: E0112 13:10:29.950741 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb8c503e-0907-40aa-a053-72d38311b08e" containerName="extract-content" Jan 12 13:10:29 crc kubenswrapper[4580]: I0112 13:10:29.950747 4580 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="eb8c503e-0907-40aa-a053-72d38311b08e" containerName="extract-content" Jan 12 13:10:29 crc kubenswrapper[4580]: E0112 13:10:29.950757 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fb0ae3e-224b-4dba-8e3d-783df7049f05" containerName="extract-content" Jan 12 13:10:29 crc kubenswrapper[4580]: I0112 13:10:29.950763 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fb0ae3e-224b-4dba-8e3d-783df7049f05" containerName="extract-content" Jan 12 13:10:29 crc kubenswrapper[4580]: E0112 13:10:29.950774 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fb0ae3e-224b-4dba-8e3d-783df7049f05" containerName="extract-utilities" Jan 12 13:10:29 crc kubenswrapper[4580]: I0112 13:10:29.950780 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fb0ae3e-224b-4dba-8e3d-783df7049f05" containerName="extract-utilities" Jan 12 13:10:29 crc kubenswrapper[4580]: E0112 13:10:29.950789 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10b26e31-b5d9-491a-863a-1cc0a102eae8" containerName="extract-utilities" Jan 12 13:10:29 crc kubenswrapper[4580]: I0112 13:10:29.950795 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="10b26e31-b5d9-491a-863a-1cc0a102eae8" containerName="extract-utilities" Jan 12 13:10:29 crc kubenswrapper[4580]: E0112 13:10:29.950804 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="170f7f91-2fd3-49a9-a31e-5d5c8ae98cd3" containerName="marketplace-operator" Jan 12 13:10:29 crc kubenswrapper[4580]: I0112 13:10:29.950810 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="170f7f91-2fd3-49a9-a31e-5d5c8ae98cd3" containerName="marketplace-operator" Jan 12 13:10:29 crc kubenswrapper[4580]: E0112 13:10:29.950822 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86490260-47c3-47d2-beca-c61e661882ca" containerName="extract-content" Jan 12 13:10:29 crc kubenswrapper[4580]: I0112 13:10:29.950828 4580 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="86490260-47c3-47d2-beca-c61e661882ca" containerName="extract-content" Jan 12 13:10:29 crc kubenswrapper[4580]: E0112 13:10:29.950836 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb8c503e-0907-40aa-a053-72d38311b08e" containerName="extract-utilities" Jan 12 13:10:29 crc kubenswrapper[4580]: I0112 13:10:29.950842 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb8c503e-0907-40aa-a053-72d38311b08e" containerName="extract-utilities" Jan 12 13:10:29 crc kubenswrapper[4580]: E0112 13:10:29.950851 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86490260-47c3-47d2-beca-c61e661882ca" containerName="extract-utilities" Jan 12 13:10:29 crc kubenswrapper[4580]: I0112 13:10:29.950858 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="86490260-47c3-47d2-beca-c61e661882ca" containerName="extract-utilities" Jan 12 13:10:29 crc kubenswrapper[4580]: E0112 13:10:29.950865 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fb0ae3e-224b-4dba-8e3d-783df7049f05" containerName="registry-server" Jan 12 13:10:29 crc kubenswrapper[4580]: I0112 13:10:29.950871 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fb0ae3e-224b-4dba-8e3d-783df7049f05" containerName="registry-server" Jan 12 13:10:29 crc kubenswrapper[4580]: I0112 13:10:29.950986 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fb0ae3e-224b-4dba-8e3d-783df7049f05" containerName="registry-server" Jan 12 13:10:29 crc kubenswrapper[4580]: I0112 13:10:29.950999 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="10b26e31-b5d9-491a-863a-1cc0a102eae8" containerName="registry-server" Jan 12 13:10:29 crc kubenswrapper[4580]: I0112 13:10:29.951010 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="170f7f91-2fd3-49a9-a31e-5d5c8ae98cd3" containerName="marketplace-operator" Jan 12 13:10:29 crc kubenswrapper[4580]: I0112 13:10:29.951018 4580 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="86490260-47c3-47d2-beca-c61e661882ca" containerName="registry-server" Jan 12 13:10:29 crc kubenswrapper[4580]: I0112 13:10:29.951027 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb8c503e-0907-40aa-a053-72d38311b08e" containerName="registry-server" Jan 12 13:10:29 crc kubenswrapper[4580]: I0112 13:10:29.951933 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w5lwr" Jan 12 13:10:29 crc kubenswrapper[4580]: I0112 13:10:29.956319 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 12 13:10:29 crc kubenswrapper[4580]: I0112 13:10:29.976160 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w5lwr"] Jan 12 13:10:29 crc kubenswrapper[4580]: I0112 13:10:29.988797 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5677e888-c379-4713-bcf6-e2e31288a0b6-catalog-content\") pod \"redhat-marketplace-w5lwr\" (UID: \"5677e888-c379-4713-bcf6-e2e31288a0b6\") " pod="openshift-marketplace/redhat-marketplace-w5lwr" Jan 12 13:10:29 crc kubenswrapper[4580]: I0112 13:10:29.988853 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw9mv\" (UniqueName: \"kubernetes.io/projected/5677e888-c379-4713-bcf6-e2e31288a0b6-kube-api-access-kw9mv\") pod \"redhat-marketplace-w5lwr\" (UID: \"5677e888-c379-4713-bcf6-e2e31288a0b6\") " pod="openshift-marketplace/redhat-marketplace-w5lwr" Jan 12 13:10:29 crc kubenswrapper[4580]: I0112 13:10:29.988925 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5677e888-c379-4713-bcf6-e2e31288a0b6-utilities\") pod 
\"redhat-marketplace-w5lwr\" (UID: \"5677e888-c379-4713-bcf6-e2e31288a0b6\") " pod="openshift-marketplace/redhat-marketplace-w5lwr" Jan 12 13:10:30 crc kubenswrapper[4580]: I0112 13:10:30.090005 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kw9mv\" (UniqueName: \"kubernetes.io/projected/5677e888-c379-4713-bcf6-e2e31288a0b6-kube-api-access-kw9mv\") pod \"redhat-marketplace-w5lwr\" (UID: \"5677e888-c379-4713-bcf6-e2e31288a0b6\") " pod="openshift-marketplace/redhat-marketplace-w5lwr" Jan 12 13:10:30 crc kubenswrapper[4580]: I0112 13:10:30.090299 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5677e888-c379-4713-bcf6-e2e31288a0b6-utilities\") pod \"redhat-marketplace-w5lwr\" (UID: \"5677e888-c379-4713-bcf6-e2e31288a0b6\") " pod="openshift-marketplace/redhat-marketplace-w5lwr" Jan 12 13:10:30 crc kubenswrapper[4580]: I0112 13:10:30.090388 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5677e888-c379-4713-bcf6-e2e31288a0b6-catalog-content\") pod \"redhat-marketplace-w5lwr\" (UID: \"5677e888-c379-4713-bcf6-e2e31288a0b6\") " pod="openshift-marketplace/redhat-marketplace-w5lwr" Jan 12 13:10:30 crc kubenswrapper[4580]: I0112 13:10:30.090715 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5677e888-c379-4713-bcf6-e2e31288a0b6-utilities\") pod \"redhat-marketplace-w5lwr\" (UID: \"5677e888-c379-4713-bcf6-e2e31288a0b6\") " pod="openshift-marketplace/redhat-marketplace-w5lwr" Jan 12 13:10:30 crc kubenswrapper[4580]: I0112 13:10:30.091453 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5677e888-c379-4713-bcf6-e2e31288a0b6-catalog-content\") pod \"redhat-marketplace-w5lwr\" (UID: 
\"5677e888-c379-4713-bcf6-e2e31288a0b6\") " pod="openshift-marketplace/redhat-marketplace-w5lwr" Jan 12 13:10:30 crc kubenswrapper[4580]: I0112 13:10:30.106413 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kw9mv\" (UniqueName: \"kubernetes.io/projected/5677e888-c379-4713-bcf6-e2e31288a0b6-kube-api-access-kw9mv\") pod \"redhat-marketplace-w5lwr\" (UID: \"5677e888-c379-4713-bcf6-e2e31288a0b6\") " pod="openshift-marketplace/redhat-marketplace-w5lwr" Jan 12 13:10:30 crc kubenswrapper[4580]: I0112 13:10:30.151124 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ts2q4"] Jan 12 13:10:30 crc kubenswrapper[4580]: I0112 13:10:30.153774 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ts2q4" Jan 12 13:10:30 crc kubenswrapper[4580]: I0112 13:10:30.155561 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 12 13:10:30 crc kubenswrapper[4580]: I0112 13:10:30.162213 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ts2q4"] Jan 12 13:10:30 crc kubenswrapper[4580]: I0112 13:10:30.191424 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ae20335-c7d3-46ef-84e6-129bc0550ab4-utilities\") pod \"redhat-operators-ts2q4\" (UID: \"2ae20335-c7d3-46ef-84e6-129bc0550ab4\") " pod="openshift-marketplace/redhat-operators-ts2q4" Jan 12 13:10:30 crc kubenswrapper[4580]: I0112 13:10:30.191542 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ae20335-c7d3-46ef-84e6-129bc0550ab4-catalog-content\") pod \"redhat-operators-ts2q4\" (UID: \"2ae20335-c7d3-46ef-84e6-129bc0550ab4\") " 
pod="openshift-marketplace/redhat-operators-ts2q4" Jan 12 13:10:30 crc kubenswrapper[4580]: I0112 13:10:30.191588 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75fcd\" (UniqueName: \"kubernetes.io/projected/2ae20335-c7d3-46ef-84e6-129bc0550ab4-kube-api-access-75fcd\") pod \"redhat-operators-ts2q4\" (UID: \"2ae20335-c7d3-46ef-84e6-129bc0550ab4\") " pod="openshift-marketplace/redhat-operators-ts2q4" Jan 12 13:10:30 crc kubenswrapper[4580]: I0112 13:10:30.278550 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w5lwr" Jan 12 13:10:30 crc kubenswrapper[4580]: I0112 13:10:30.293427 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ae20335-c7d3-46ef-84e6-129bc0550ab4-catalog-content\") pod \"redhat-operators-ts2q4\" (UID: \"2ae20335-c7d3-46ef-84e6-129bc0550ab4\") " pod="openshift-marketplace/redhat-operators-ts2q4" Jan 12 13:10:30 crc kubenswrapper[4580]: I0112 13:10:30.293543 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75fcd\" (UniqueName: \"kubernetes.io/projected/2ae20335-c7d3-46ef-84e6-129bc0550ab4-kube-api-access-75fcd\") pod \"redhat-operators-ts2q4\" (UID: \"2ae20335-c7d3-46ef-84e6-129bc0550ab4\") " pod="openshift-marketplace/redhat-operators-ts2q4" Jan 12 13:10:30 crc kubenswrapper[4580]: I0112 13:10:30.293639 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ae20335-c7d3-46ef-84e6-129bc0550ab4-utilities\") pod \"redhat-operators-ts2q4\" (UID: \"2ae20335-c7d3-46ef-84e6-129bc0550ab4\") " pod="openshift-marketplace/redhat-operators-ts2q4" Jan 12 13:10:30 crc kubenswrapper[4580]: I0112 13:10:30.294248 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/2ae20335-c7d3-46ef-84e6-129bc0550ab4-utilities\") pod \"redhat-operators-ts2q4\" (UID: \"2ae20335-c7d3-46ef-84e6-129bc0550ab4\") " pod="openshift-marketplace/redhat-operators-ts2q4" Jan 12 13:10:30 crc kubenswrapper[4580]: I0112 13:10:30.294725 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ae20335-c7d3-46ef-84e6-129bc0550ab4-catalog-content\") pod \"redhat-operators-ts2q4\" (UID: \"2ae20335-c7d3-46ef-84e6-129bc0550ab4\") " pod="openshift-marketplace/redhat-operators-ts2q4" Jan 12 13:10:30 crc kubenswrapper[4580]: I0112 13:10:30.310866 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75fcd\" (UniqueName: \"kubernetes.io/projected/2ae20335-c7d3-46ef-84e6-129bc0550ab4-kube-api-access-75fcd\") pod \"redhat-operators-ts2q4\" (UID: \"2ae20335-c7d3-46ef-84e6-129bc0550ab4\") " pod="openshift-marketplace/redhat-operators-ts2q4" Jan 12 13:10:30 crc kubenswrapper[4580]: I0112 13:10:30.468309 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ts2q4" Jan 12 13:10:30 crc kubenswrapper[4580]: I0112 13:10:30.669883 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w5lwr"] Jan 12 13:10:30 crc kubenswrapper[4580]: I0112 13:10:30.838886 4580 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 12 13:10:30 crc kubenswrapper[4580]: I0112 13:10:30.839596 4580 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 12 13:10:30 crc kubenswrapper[4580]: I0112 13:10:30.839778 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 12 13:10:30 crc kubenswrapper[4580]: I0112 13:10:30.839887 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://05c5ad3ad752dde0d33f89e89540f22790aa2905185c704d407fe605655c8e28" gracePeriod=15 Jan 12 13:10:30 crc kubenswrapper[4580]: I0112 13:10:30.839920 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://9eeac0b697ceba82e51d043f12dcf4c6f0028990416b1ee40c5181232d962192" gracePeriod=15 Jan 12 13:10:30 crc kubenswrapper[4580]: I0112 13:10:30.839946 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://80ca0769a1431fd4c134322feb11db7e54dd85e8f6b18a0ea43da48fe9b05005" gracePeriod=15 Jan 12 13:10:30 crc kubenswrapper[4580]: I0112 13:10:30.840027 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://d3c620e4b41d6183e427d9b95acc0e6e20f24998d210c706d93d0e8b08def41b" gracePeriod=15 Jan 12 13:10:30 crc kubenswrapper[4580]: I0112 13:10:30.840174 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://e2262814ad3b77a7aecef6dc39226a540c7d7839576606e11c4765c858e81834" gracePeriod=15 Jan 12 13:10:30 crc 
kubenswrapper[4580]: I0112 13:10:30.841084 4580 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 12 13:10:30 crc kubenswrapper[4580]: E0112 13:10:30.841351 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 12 13:10:30 crc kubenswrapper[4580]: I0112 13:10:30.841381 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 12 13:10:30 crc kubenswrapper[4580]: E0112 13:10:30.841391 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 12 13:10:30 crc kubenswrapper[4580]: I0112 13:10:30.841397 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 12 13:10:30 crc kubenswrapper[4580]: E0112 13:10:30.841406 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 12 13:10:30 crc kubenswrapper[4580]: I0112 13:10:30.841412 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 12 13:10:30 crc kubenswrapper[4580]: E0112 13:10:30.841420 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 12 13:10:30 crc kubenswrapper[4580]: I0112 13:10:30.841425 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 12 13:10:30 crc kubenswrapper[4580]: E0112 13:10:30.841432 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 12 13:10:30 
crc kubenswrapper[4580]: I0112 13:10:30.841437 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 12 13:10:30 crc kubenswrapper[4580]: E0112 13:10:30.841445 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 12 13:10:30 crc kubenswrapper[4580]: I0112 13:10:30.841451 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 12 13:10:30 crc kubenswrapper[4580]: E0112 13:10:30.841460 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 12 13:10:30 crc kubenswrapper[4580]: I0112 13:10:30.841465 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 12 13:10:30 crc kubenswrapper[4580]: I0112 13:10:30.841547 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 12 13:10:30 crc kubenswrapper[4580]: I0112 13:10:30.841557 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 12 13:10:30 crc kubenswrapper[4580]: I0112 13:10:30.841565 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 12 13:10:30 crc kubenswrapper[4580]: I0112 13:10:30.841574 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 12 13:10:30 crc kubenswrapper[4580]: I0112 13:10:30.841583 4580 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 12 13:10:30 crc kubenswrapper[4580]: I0112 13:10:30.841757 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 12 13:10:30 crc kubenswrapper[4580]: I0112 13:10:30.877197 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ts2q4"] Jan 12 13:10:30 crc kubenswrapper[4580]: I0112 13:10:30.883890 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 12 13:10:30 crc kubenswrapper[4580]: W0112 13:10:30.884415 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ae20335_c7d3_46ef_84e6_129bc0550ab4.slice/crio-e1f214bba078821d9d28f76f3740cf6e7087c389799eabb2f2ca193c4f4f3c4f WatchSource:0}: Error finding container e1f214bba078821d9d28f76f3740cf6e7087c389799eabb2f2ca193c4f4f3c4f: Status 404 returned error can't find the container with id e1f214bba078821d9d28f76f3740cf6e7087c389799eabb2f2ca193c4f4f3c4f Jan 12 13:10:30 crc kubenswrapper[4580]: E0112 13:10:30.887910 4580 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 192.168.25.161:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-operators-ts2q4.1889fdd7898ddcb0 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-operators-ts2q4,UID:2ae20335-c7d3-46ef-84e6-129bc0550ab4,APIVersion:v1,ResourceVersion:29874,FieldPath:spec.initContainers{extract-utilities},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-12 13:10:30.887201968 +0000 UTC m=+229.931420658,LastTimestamp:2026-01-12 13:10:30.887201968 +0000 UTC m=+229.931420658,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 12 13:10:30 crc kubenswrapper[4580]: I0112 13:10:30.904496 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 12 13:10:30 crc kubenswrapper[4580]: I0112 13:10:30.904779 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 12 13:10:30 crc kubenswrapper[4580]: I0112 13:10:30.904823 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 12 13:10:30 crc kubenswrapper[4580]: I0112 13:10:30.904892 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 12 13:10:30 crc kubenswrapper[4580]: I0112 13:10:30.904916 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 12 13:10:30 crc kubenswrapper[4580]: I0112 13:10:30.904951 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 12 13:10:30 crc kubenswrapper[4580]: I0112 13:10:30.904987 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 12 13:10:30 crc kubenswrapper[4580]: I0112 13:10:30.905024 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 12 13:10:31 crc kubenswrapper[4580]: I0112 13:10:31.006838 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 12 13:10:31 crc kubenswrapper[4580]: I0112 13:10:31.006909 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 12 13:10:31 crc kubenswrapper[4580]: I0112 13:10:31.006959 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 12 13:10:31 crc kubenswrapper[4580]: I0112 13:10:31.006978 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 12 13:10:31 crc kubenswrapper[4580]: I0112 13:10:31.006999 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 12 13:10:31 crc kubenswrapper[4580]: I0112 13:10:31.007043 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 12 13:10:31 crc kubenswrapper[4580]: I0112 13:10:31.007090 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 12 13:10:31 crc kubenswrapper[4580]: I0112 13:10:31.007132 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 12 13:10:31 crc kubenswrapper[4580]: I0112 13:10:31.007234 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 12 13:10:31 crc kubenswrapper[4580]: I0112 13:10:31.007290 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 12 13:10:31 crc kubenswrapper[4580]: I0112 13:10:31.007315 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 12 13:10:31 crc kubenswrapper[4580]: I0112 13:10:31.007355 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 12 13:10:31 crc kubenswrapper[4580]: I0112 13:10:31.007378 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 12 13:10:31 crc kubenswrapper[4580]: I0112 13:10:31.007401 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 12 13:10:31 crc kubenswrapper[4580]: I0112 13:10:31.007447 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 12 13:10:31 crc kubenswrapper[4580]: I0112 13:10:31.007471 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 12 13:10:31 crc kubenswrapper[4580]: I0112 13:10:31.174274 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 12 13:10:31 crc kubenswrapper[4580]: I0112 13:10:31.175277 4580 generic.go:334] "Generic (PLEG): container finished" podID="5677e888-c379-4713-bcf6-e2e31288a0b6" containerID="cbd68f4f28e159ed5b93729fecdd968704b980bbae596c9b0888e42871bafa18" exitCode=0 Jan 12 13:10:31 crc kubenswrapper[4580]: I0112 13:10:31.175357 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5lwr" event={"ID":"5677e888-c379-4713-bcf6-e2e31288a0b6","Type":"ContainerDied","Data":"cbd68f4f28e159ed5b93729fecdd968704b980bbae596c9b0888e42871bafa18"} Jan 12 13:10:31 crc kubenswrapper[4580]: I0112 13:10:31.175397 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5lwr" event={"ID":"5677e888-c379-4713-bcf6-e2e31288a0b6","Type":"ContainerStarted","Data":"1c08fa464b26d4154c7f53e68b7a46c7526dd1c28ff4cdf61bc8e106d66c2c18"} Jan 12 13:10:31 crc kubenswrapper[4580]: I0112 13:10:31.176011 4580 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:31 crc kubenswrapper[4580]: I0112 13:10:31.176428 4580 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:31 crc 
kubenswrapper[4580]: I0112 13:10:31.176974 4580 status_manager.go:851] "Failed to get status for pod" podUID="5677e888-c379-4713-bcf6-e2e31288a0b6" pod="openshift-marketplace/redhat-marketplace-w5lwr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w5lwr\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:31 crc kubenswrapper[4580]: I0112 13:10:31.177355 4580 generic.go:334] "Generic (PLEG): container finished" podID="30474fce-6a17-410e-8b71-7fc76ae2835c" containerID="ba347f93ac94b4a145c59c1cd7b42dd1438f91eef226679ce0e0a2f5c6bad40f" exitCode=0 Jan 12 13:10:31 crc kubenswrapper[4580]: I0112 13:10:31.177430 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"30474fce-6a17-410e-8b71-7fc76ae2835c","Type":"ContainerDied","Data":"ba347f93ac94b4a145c59c1cd7b42dd1438f91eef226679ce0e0a2f5c6bad40f"} Jan 12 13:10:31 crc kubenswrapper[4580]: I0112 13:10:31.177938 4580 status_manager.go:851] "Failed to get status for pod" podUID="5677e888-c379-4713-bcf6-e2e31288a0b6" pod="openshift-marketplace/redhat-marketplace-w5lwr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w5lwr\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:31 crc kubenswrapper[4580]: I0112 13:10:31.178178 4580 status_manager.go:851] "Failed to get status for pod" podUID="30474fce-6a17-410e-8b71-7fc76ae2835c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:31 crc kubenswrapper[4580]: I0112 13:10:31.178488 4580 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:31 crc kubenswrapper[4580]: I0112 13:10:31.179092 4580 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:31 crc kubenswrapper[4580]: I0112 13:10:31.180395 4580 generic.go:334] "Generic (PLEG): container finished" podID="2ae20335-c7d3-46ef-84e6-129bc0550ab4" containerID="e93ac859d75f99195b4add9863da653eaa63701acba8623e5a7c760c2ed50815" exitCode=0 Jan 12 13:10:31 crc kubenswrapper[4580]: I0112 13:10:31.180448 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ts2q4" event={"ID":"2ae20335-c7d3-46ef-84e6-129bc0550ab4","Type":"ContainerDied","Data":"e93ac859d75f99195b4add9863da653eaa63701acba8623e5a7c760c2ed50815"} Jan 12 13:10:31 crc kubenswrapper[4580]: I0112 13:10:31.180528 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ts2q4" event={"ID":"2ae20335-c7d3-46ef-84e6-129bc0550ab4","Type":"ContainerStarted","Data":"e1f214bba078821d9d28f76f3740cf6e7087c389799eabb2f2ca193c4f4f3c4f"} Jan 12 13:10:31 crc kubenswrapper[4580]: I0112 13:10:31.181241 4580 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:31 crc kubenswrapper[4580]: I0112 13:10:31.181513 4580 status_manager.go:851] "Failed to get status for 
pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:31 crc kubenswrapper[4580]: I0112 13:10:31.181815 4580 status_manager.go:851] "Failed to get status for pod" podUID="30474fce-6a17-410e-8b71-7fc76ae2835c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:31 crc kubenswrapper[4580]: I0112 13:10:31.182048 4580 status_manager.go:851] "Failed to get status for pod" podUID="5677e888-c379-4713-bcf6-e2e31288a0b6" pod="openshift-marketplace/redhat-marketplace-w5lwr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w5lwr\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:31 crc kubenswrapper[4580]: I0112 13:10:31.182300 4580 status_manager.go:851] "Failed to get status for pod" podUID="2ae20335-c7d3-46ef-84e6-129bc0550ab4" pod="openshift-marketplace/redhat-operators-ts2q4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ts2q4\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:31 crc kubenswrapper[4580]: I0112 13:10:31.183313 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 12 13:10:31 crc kubenswrapper[4580]: I0112 13:10:31.184858 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 12 13:10:31 crc kubenswrapper[4580]: I0112 13:10:31.186185 4580 
generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="05c5ad3ad752dde0d33f89e89540f22790aa2905185c704d407fe605655c8e28" exitCode=0 Jan 12 13:10:31 crc kubenswrapper[4580]: I0112 13:10:31.186207 4580 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9eeac0b697ceba82e51d043f12dcf4c6f0028990416b1ee40c5181232d962192" exitCode=0 Jan 12 13:10:31 crc kubenswrapper[4580]: I0112 13:10:31.186216 4580 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="80ca0769a1431fd4c134322feb11db7e54dd85e8f6b18a0ea43da48fe9b05005" exitCode=0 Jan 12 13:10:31 crc kubenswrapper[4580]: I0112 13:10:31.186226 4580 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d3c620e4b41d6183e427d9b95acc0e6e20f24998d210c706d93d0e8b08def41b" exitCode=2 Jan 12 13:10:31 crc kubenswrapper[4580]: I0112 13:10:31.186263 4580 scope.go:117] "RemoveContainer" containerID="e0c7ac25add51f8a9be790b9d47bc39155d83c4da0f3b241897d1395686feb68" Jan 12 13:10:31 crc kubenswrapper[4580]: W0112 13:10:31.200022 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-de84252d5d0f6843e8bd0b96cc9eee3c454fdd7d01eb3506505bcf6febe64183 WatchSource:0}: Error finding container de84252d5d0f6843e8bd0b96cc9eee3c454fdd7d01eb3506505bcf6febe64183: Status 404 returned error can't find the container with id de84252d5d0f6843e8bd0b96cc9eee3c454fdd7d01eb3506505bcf6febe64183 Jan 12 13:10:31 crc kubenswrapper[4580]: I0112 13:10:31.284194 4580 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 
192.168.25.161:6443: connect: connection refused" Jan 12 13:10:31 crc kubenswrapper[4580]: I0112 13:10:31.284704 4580 status_manager.go:851] "Failed to get status for pod" podUID="30474fce-6a17-410e-8b71-7fc76ae2835c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:31 crc kubenswrapper[4580]: I0112 13:10:31.285033 4580 status_manager.go:851] "Failed to get status for pod" podUID="5677e888-c379-4713-bcf6-e2e31288a0b6" pod="openshift-marketplace/redhat-marketplace-w5lwr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w5lwr\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:31 crc kubenswrapper[4580]: I0112 13:10:31.285407 4580 status_manager.go:851] "Failed to get status for pod" podUID="2ae20335-c7d3-46ef-84e6-129bc0550ab4" pod="openshift-marketplace/redhat-operators-ts2q4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ts2q4\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:31 crc kubenswrapper[4580]: I0112 13:10:31.285755 4580 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:32 crc kubenswrapper[4580]: I0112 13:10:32.191930 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5lwr" event={"ID":"5677e888-c379-4713-bcf6-e2e31288a0b6","Type":"ContainerStarted","Data":"a0530cd457d3633d7048fcd5b0993fcb017e61300ab583f1df459e22a23511f4"} Jan 12 13:10:32 crc 
kubenswrapper[4580]: I0112 13:10:32.192553 4580 status_manager.go:851] "Failed to get status for pod" podUID="30474fce-6a17-410e-8b71-7fc76ae2835c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:32 crc kubenswrapper[4580]: I0112 13:10:32.192856 4580 status_manager.go:851] "Failed to get status for pod" podUID="5677e888-c379-4713-bcf6-e2e31288a0b6" pod="openshift-marketplace/redhat-marketplace-w5lwr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w5lwr\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:32 crc kubenswrapper[4580]: I0112 13:10:32.193205 4580 status_manager.go:851] "Failed to get status for pod" podUID="2ae20335-c7d3-46ef-84e6-129bc0550ab4" pod="openshift-marketplace/redhat-operators-ts2q4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ts2q4\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:32 crc kubenswrapper[4580]: I0112 13:10:32.193387 4580 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:32 crc kubenswrapper[4580]: I0112 13:10:32.193615 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"68cc4778f75e5b065cf4ab585320998dc2550993379185138ae04773a7017099"} Jan 12 13:10:32 crc kubenswrapper[4580]: I0112 13:10:32.193657 4580 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"de84252d5d0f6843e8bd0b96cc9eee3c454fdd7d01eb3506505bcf6febe64183"} Jan 12 13:10:32 crc kubenswrapper[4580]: I0112 13:10:32.194139 4580 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:32 crc kubenswrapper[4580]: I0112 13:10:32.194324 4580 status_manager.go:851] "Failed to get status for pod" podUID="5677e888-c379-4713-bcf6-e2e31288a0b6" pod="openshift-marketplace/redhat-marketplace-w5lwr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w5lwr\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:32 crc kubenswrapper[4580]: I0112 13:10:32.194515 4580 status_manager.go:851] "Failed to get status for pod" podUID="30474fce-6a17-410e-8b71-7fc76ae2835c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:32 crc kubenswrapper[4580]: I0112 13:10:32.194689 4580 status_manager.go:851] "Failed to get status for pod" podUID="2ae20335-c7d3-46ef-84e6-129bc0550ab4" pod="openshift-marketplace/redhat-operators-ts2q4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ts2q4\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:32 crc kubenswrapper[4580]: I0112 13:10:32.196340 4580 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 12 13:10:32 crc kubenswrapper[4580]: I0112 13:10:32.535300 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 12 13:10:32 crc kubenswrapper[4580]: I0112 13:10:32.536321 4580 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:32 crc kubenswrapper[4580]: I0112 13:10:32.536697 4580 status_manager.go:851] "Failed to get status for pod" podUID="30474fce-6a17-410e-8b71-7fc76ae2835c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:32 crc kubenswrapper[4580]: I0112 13:10:32.537017 4580 status_manager.go:851] "Failed to get status for pod" podUID="5677e888-c379-4713-bcf6-e2e31288a0b6" pod="openshift-marketplace/redhat-marketplace-w5lwr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w5lwr\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:32 crc kubenswrapper[4580]: I0112 13:10:32.537344 4580 status_manager.go:851] "Failed to get status for pod" podUID="2ae20335-c7d3-46ef-84e6-129bc0550ab4" pod="openshift-marketplace/redhat-operators-ts2q4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ts2q4\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:32 crc kubenswrapper[4580]: I0112 13:10:32.625774 4580 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/30474fce-6a17-410e-8b71-7fc76ae2835c-kube-api-access\") pod \"30474fce-6a17-410e-8b71-7fc76ae2835c\" (UID: \"30474fce-6a17-410e-8b71-7fc76ae2835c\") " Jan 12 13:10:32 crc kubenswrapper[4580]: I0112 13:10:32.625820 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/30474fce-6a17-410e-8b71-7fc76ae2835c-var-lock\") pod \"30474fce-6a17-410e-8b71-7fc76ae2835c\" (UID: \"30474fce-6a17-410e-8b71-7fc76ae2835c\") " Jan 12 13:10:32 crc kubenswrapper[4580]: I0112 13:10:32.625836 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/30474fce-6a17-410e-8b71-7fc76ae2835c-kubelet-dir\") pod \"30474fce-6a17-410e-8b71-7fc76ae2835c\" (UID: \"30474fce-6a17-410e-8b71-7fc76ae2835c\") " Jan 12 13:10:32 crc kubenswrapper[4580]: I0112 13:10:32.625893 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30474fce-6a17-410e-8b71-7fc76ae2835c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "30474fce-6a17-410e-8b71-7fc76ae2835c" (UID: "30474fce-6a17-410e-8b71-7fc76ae2835c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 12 13:10:32 crc kubenswrapper[4580]: I0112 13:10:32.625911 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30474fce-6a17-410e-8b71-7fc76ae2835c-var-lock" (OuterVolumeSpecName: "var-lock") pod "30474fce-6a17-410e-8b71-7fc76ae2835c" (UID: "30474fce-6a17-410e-8b71-7fc76ae2835c"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 12 13:10:32 crc kubenswrapper[4580]: I0112 13:10:32.626222 4580 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/30474fce-6a17-410e-8b71-7fc76ae2835c-var-lock\") on node \"crc\" DevicePath \"\"" Jan 12 13:10:32 crc kubenswrapper[4580]: I0112 13:10:32.626245 4580 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/30474fce-6a17-410e-8b71-7fc76ae2835c-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 12 13:10:32 crc kubenswrapper[4580]: I0112 13:10:32.630695 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30474fce-6a17-410e-8b71-7fc76ae2835c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "30474fce-6a17-410e-8b71-7fc76ae2835c" (UID: "30474fce-6a17-410e-8b71-7fc76ae2835c"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:10:32 crc kubenswrapper[4580]: I0112 13:10:32.727822 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/30474fce-6a17-410e-8b71-7fc76ae2835c-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 12 13:10:33 crc kubenswrapper[4580]: I0112 13:10:33.208795 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 12 13:10:33 crc kubenswrapper[4580]: I0112 13:10:33.210640 4580 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e2262814ad3b77a7aecef6dc39226a540c7d7839576606e11c4765c858e81834" exitCode=0 Jan 12 13:10:33 crc kubenswrapper[4580]: I0112 13:10:33.210690 4580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b566313717176796b8edbe68d8ff3cfe27d0209c33daaff44adb6838581bc826" 
Jan 12 13:10:33 crc kubenswrapper[4580]: I0112 13:10:33.212358 4580 generic.go:334] "Generic (PLEG): container finished" podID="5677e888-c379-4713-bcf6-e2e31288a0b6" containerID="a0530cd457d3633d7048fcd5b0993fcb017e61300ab583f1df459e22a23511f4" exitCode=0 Jan 12 13:10:33 crc kubenswrapper[4580]: I0112 13:10:33.212465 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5lwr" event={"ID":"5677e888-c379-4713-bcf6-e2e31288a0b6","Type":"ContainerDied","Data":"a0530cd457d3633d7048fcd5b0993fcb017e61300ab583f1df459e22a23511f4"} Jan 12 13:10:33 crc kubenswrapper[4580]: I0112 13:10:33.212932 4580 status_manager.go:851] "Failed to get status for pod" podUID="5677e888-c379-4713-bcf6-e2e31288a0b6" pod="openshift-marketplace/redhat-marketplace-w5lwr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w5lwr\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:33 crc kubenswrapper[4580]: I0112 13:10:33.213335 4580 status_manager.go:851] "Failed to get status for pod" podUID="30474fce-6a17-410e-8b71-7fc76ae2835c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:33 crc kubenswrapper[4580]: I0112 13:10:33.213862 4580 status_manager.go:851] "Failed to get status for pod" podUID="2ae20335-c7d3-46ef-84e6-129bc0550ab4" pod="openshift-marketplace/redhat-operators-ts2q4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ts2q4\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:33 crc kubenswrapper[4580]: I0112 13:10:33.214210 4580 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:33 crc kubenswrapper[4580]: I0112 13:10:33.214640 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"30474fce-6a17-410e-8b71-7fc76ae2835c","Type":"ContainerDied","Data":"ae0caf23be6ed8470fa648c2238eca3a1b66a5da2587575a9646322965d36f47"} Jan 12 13:10:33 crc kubenswrapper[4580]: I0112 13:10:33.214665 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 12 13:10:33 crc kubenswrapper[4580]: I0112 13:10:33.214672 4580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae0caf23be6ed8470fa648c2238eca3a1b66a5da2587575a9646322965d36f47" Jan 12 13:10:33 crc kubenswrapper[4580]: I0112 13:10:33.217668 4580 generic.go:334] "Generic (PLEG): container finished" podID="2ae20335-c7d3-46ef-84e6-129bc0550ab4" containerID="92f139d91cc16412941e42ef5c0f66fae1a378de1d2efe8bb08d85c693d83b4b" exitCode=0 Jan 12 13:10:33 crc kubenswrapper[4580]: I0112 13:10:33.217971 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ts2q4" event={"ID":"2ae20335-c7d3-46ef-84e6-129bc0550ab4","Type":"ContainerDied","Data":"92f139d91cc16412941e42ef5c0f66fae1a378de1d2efe8bb08d85c693d83b4b"} Jan 12 13:10:33 crc kubenswrapper[4580]: I0112 13:10:33.218601 4580 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:33 crc kubenswrapper[4580]: I0112 13:10:33.218818 4580 status_manager.go:851] "Failed to 
get status for pod" podUID="30474fce-6a17-410e-8b71-7fc76ae2835c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:33 crc kubenswrapper[4580]: I0112 13:10:33.219012 4580 status_manager.go:851] "Failed to get status for pod" podUID="5677e888-c379-4713-bcf6-e2e31288a0b6" pod="openshift-marketplace/redhat-marketplace-w5lwr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w5lwr\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:33 crc kubenswrapper[4580]: I0112 13:10:33.219228 4580 status_manager.go:851] "Failed to get status for pod" podUID="2ae20335-c7d3-46ef-84e6-129bc0550ab4" pod="openshift-marketplace/redhat-operators-ts2q4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ts2q4\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:33 crc kubenswrapper[4580]: I0112 13:10:33.260804 4580 status_manager.go:851] "Failed to get status for pod" podUID="2ae20335-c7d3-46ef-84e6-129bc0550ab4" pod="openshift-marketplace/redhat-operators-ts2q4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ts2q4\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:33 crc kubenswrapper[4580]: I0112 13:10:33.261083 4580 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:33 crc kubenswrapper[4580]: I0112 13:10:33.261517 4580 status_manager.go:851] "Failed to get status 
for pod" podUID="30474fce-6a17-410e-8b71-7fc76ae2835c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:33 crc kubenswrapper[4580]: I0112 13:10:33.261803 4580 status_manager.go:851] "Failed to get status for pod" podUID="5677e888-c379-4713-bcf6-e2e31288a0b6" pod="openshift-marketplace/redhat-marketplace-w5lwr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w5lwr\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:33 crc kubenswrapper[4580]: I0112 13:10:33.263684 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 12 13:10:33 crc kubenswrapper[4580]: I0112 13:10:33.264299 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 12 13:10:33 crc kubenswrapper[4580]: I0112 13:10:33.264721 4580 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:33 crc kubenswrapper[4580]: I0112 13:10:33.265013 4580 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:33 crc kubenswrapper[4580]: I0112 13:10:33.265902 4580 status_manager.go:851] "Failed to get status for pod" podUID="30474fce-6a17-410e-8b71-7fc76ae2835c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:33 crc kubenswrapper[4580]: I0112 13:10:33.266178 4580 status_manager.go:851] "Failed to get status for pod" podUID="5677e888-c379-4713-bcf6-e2e31288a0b6" pod="openshift-marketplace/redhat-marketplace-w5lwr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w5lwr\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:33 crc kubenswrapper[4580]: I0112 13:10:33.266413 4580 status_manager.go:851] "Failed to get status for pod" podUID="2ae20335-c7d3-46ef-84e6-129bc0550ab4" pod="openshift-marketplace/redhat-operators-ts2q4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ts2q4\": 
dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:33 crc kubenswrapper[4580]: I0112 13:10:33.336643 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 12 13:10:33 crc kubenswrapper[4580]: I0112 13:10:33.336722 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 12 13:10:33 crc kubenswrapper[4580]: I0112 13:10:33.336798 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 12 13:10:33 crc kubenswrapper[4580]: I0112 13:10:33.337608 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 12 13:10:33 crc kubenswrapper[4580]: I0112 13:10:33.337660 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 12 13:10:33 crc kubenswrapper[4580]: I0112 13:10:33.337683 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 12 13:10:33 crc kubenswrapper[4580]: I0112 13:10:33.438484 4580 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 12 13:10:33 crc kubenswrapper[4580]: I0112 13:10:33.438513 4580 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 12 13:10:33 crc kubenswrapper[4580]: I0112 13:10:33.438524 4580 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 12 13:10:34 crc kubenswrapper[4580]: I0112 13:10:34.226413 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5lwr" event={"ID":"5677e888-c379-4713-bcf6-e2e31288a0b6","Type":"ContainerStarted","Data":"5ed365bb666226f81ea1de04bc46cbae2fe5f83cf9298f5eedea8a2ac5be7a9c"} Jan 12 13:10:34 crc kubenswrapper[4580]: I0112 13:10:34.226517 4580 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 
13:10:34 crc kubenswrapper[4580]: I0112 13:10:34.227703 4580 status_manager.go:851] "Failed to get status for pod" podUID="30474fce-6a17-410e-8b71-7fc76ae2835c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:34 crc kubenswrapper[4580]: I0112 13:10:34.227917 4580 status_manager.go:851] "Failed to get status for pod" podUID="5677e888-c379-4713-bcf6-e2e31288a0b6" pod="openshift-marketplace/redhat-marketplace-w5lwr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w5lwr\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:34 crc kubenswrapper[4580]: I0112 13:10:34.228225 4580 status_manager.go:851] "Failed to get status for pod" podUID="2ae20335-c7d3-46ef-84e6-129bc0550ab4" pod="openshift-marketplace/redhat-operators-ts2q4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ts2q4\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:34 crc kubenswrapper[4580]: I0112 13:10:34.228650 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 12 13:10:34 crc kubenswrapper[4580]: I0112 13:10:34.229008 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ts2q4" event={"ID":"2ae20335-c7d3-46ef-84e6-129bc0550ab4","Type":"ContainerStarted","Data":"3b146035e08049f7a7826ac40849b198b6562667713958874c87d348b050ff61"} Jan 12 13:10:34 crc kubenswrapper[4580]: I0112 13:10:34.229457 4580 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:34 crc kubenswrapper[4580]: I0112 13:10:34.229854 4580 status_manager.go:851] "Failed to get status for pod" podUID="30474fce-6a17-410e-8b71-7fc76ae2835c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:34 crc kubenswrapper[4580]: I0112 13:10:34.230166 4580 status_manager.go:851] "Failed to get status for pod" podUID="5677e888-c379-4713-bcf6-e2e31288a0b6" pod="openshift-marketplace/redhat-marketplace-w5lwr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w5lwr\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:34 crc kubenswrapper[4580]: I0112 13:10:34.230480 4580 status_manager.go:851] "Failed to get status for pod" podUID="2ae20335-c7d3-46ef-84e6-129bc0550ab4" pod="openshift-marketplace/redhat-operators-ts2q4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ts2q4\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:34 crc kubenswrapper[4580]: 
I0112 13:10:34.230709 4580 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:34 crc kubenswrapper[4580]: I0112 13:10:34.231453 4580 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:34 crc kubenswrapper[4580]: I0112 13:10:34.231687 4580 status_manager.go:851] "Failed to get status for pod" podUID="30474fce-6a17-410e-8b71-7fc76ae2835c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:34 crc kubenswrapper[4580]: I0112 13:10:34.231934 4580 status_manager.go:851] "Failed to get status for pod" podUID="5677e888-c379-4713-bcf6-e2e31288a0b6" pod="openshift-marketplace/redhat-marketplace-w5lwr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w5lwr\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:34 crc kubenswrapper[4580]: I0112 13:10:34.232212 4580 status_manager.go:851] "Failed to get status for pod" podUID="2ae20335-c7d3-46ef-84e6-129bc0550ab4" pod="openshift-marketplace/redhat-operators-ts2q4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ts2q4\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:34 crc kubenswrapper[4580]: I0112 
13:10:34.232458 4580 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:34 crc kubenswrapper[4580]: I0112 13:10:34.239599 4580 status_manager.go:851] "Failed to get status for pod" podUID="30474fce-6a17-410e-8b71-7fc76ae2835c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:34 crc kubenswrapper[4580]: I0112 13:10:34.239846 4580 status_manager.go:851] "Failed to get status for pod" podUID="5677e888-c379-4713-bcf6-e2e31288a0b6" pod="openshift-marketplace/redhat-marketplace-w5lwr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w5lwr\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:34 crc kubenswrapper[4580]: I0112 13:10:34.240080 4580 status_manager.go:851] "Failed to get status for pod" podUID="2ae20335-c7d3-46ef-84e6-129bc0550ab4" pod="openshift-marketplace/redhat-operators-ts2q4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ts2q4\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:34 crc kubenswrapper[4580]: I0112 13:10:34.240322 4580 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:34 crc 
kubenswrapper[4580]: I0112 13:10:34.240584 4580 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:34 crc kubenswrapper[4580]: E0112 13:10:34.316144 4580 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 192.168.25.161:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-operators-ts2q4.1889fdd7898ddcb0 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-operators-ts2q4,UID:2ae20335-c7d3-46ef-84e6-129bc0550ab4,APIVersion:v1,ResourceVersion:29874,FieldPath:spec.initContainers{extract-utilities},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-12 13:10:30.887201968 +0000 UTC m=+229.931420658,LastTimestamp:2026-01-12 13:10:30.887201968 +0000 UTC m=+229.931420658,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 12 13:10:35 crc kubenswrapper[4580]: I0112 13:10:35.287220 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 12 13:10:35 crc kubenswrapper[4580]: E0112 13:10:35.750946 4580 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:35 crc kubenswrapper[4580]: E0112 13:10:35.751444 4580 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:35 crc kubenswrapper[4580]: E0112 13:10:35.752081 4580 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:35 crc kubenswrapper[4580]: E0112 13:10:35.752402 4580 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:35 crc kubenswrapper[4580]: E0112 13:10:35.752689 4580 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:35 crc kubenswrapper[4580]: I0112 13:10:35.752730 4580 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 12 13:10:35 crc kubenswrapper[4580]: E0112 13:10:35.753001 4580 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.25.161:6443: connect: connection refused" interval="200ms" Jan 12 13:10:35 crc kubenswrapper[4580]: E0112 13:10:35.953753 4580 controller.go:145] "Failed to ensure lease 
exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.25.161:6443: connect: connection refused" interval="400ms" Jan 12 13:10:36 crc kubenswrapper[4580]: E0112 13:10:36.355044 4580 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 192.168.25.161:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl" volumeName="registry-storage" Jan 12 13:10:36 crc kubenswrapper[4580]: E0112 13:10:36.355245 4580 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.25.161:6443: connect: connection refused" interval="800ms" Jan 12 13:10:37 crc kubenswrapper[4580]: E0112 13:10:37.156715 4580 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.25.161:6443: connect: connection refused" interval="1.6s" Jan 12 13:10:38 crc kubenswrapper[4580]: E0112 13:10:38.757959 4580 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.25.161:6443: connect: connection refused" interval="3.2s" Jan 12 13:10:40 crc kubenswrapper[4580]: I0112 13:10:40.279382 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-w5lwr" Jan 12 13:10:40 crc kubenswrapper[4580]: I0112 
13:10:40.280556 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-w5lwr" Jan 12 13:10:40 crc kubenswrapper[4580]: I0112 13:10:40.312893 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-w5lwr" Jan 12 13:10:40 crc kubenswrapper[4580]: I0112 13:10:40.313340 4580 status_manager.go:851] "Failed to get status for pod" podUID="30474fce-6a17-410e-8b71-7fc76ae2835c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:40 crc kubenswrapper[4580]: I0112 13:10:40.313667 4580 status_manager.go:851] "Failed to get status for pod" podUID="5677e888-c379-4713-bcf6-e2e31288a0b6" pod="openshift-marketplace/redhat-marketplace-w5lwr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w5lwr\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:40 crc kubenswrapper[4580]: I0112 13:10:40.314249 4580 status_manager.go:851] "Failed to get status for pod" podUID="2ae20335-c7d3-46ef-84e6-129bc0550ab4" pod="openshift-marketplace/redhat-operators-ts2q4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ts2q4\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:40 crc kubenswrapper[4580]: I0112 13:10:40.314486 4580 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:40 crc kubenswrapper[4580]: I0112 13:10:40.468897 4580 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ts2q4" Jan 12 13:10:40 crc kubenswrapper[4580]: I0112 13:10:40.468962 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ts2q4" Jan 12 13:10:40 crc kubenswrapper[4580]: I0112 13:10:40.516519 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ts2q4" Jan 12 13:10:40 crc kubenswrapper[4580]: I0112 13:10:40.516959 4580 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:40 crc kubenswrapper[4580]: I0112 13:10:40.517325 4580 status_manager.go:851] "Failed to get status for pod" podUID="5677e888-c379-4713-bcf6-e2e31288a0b6" pod="openshift-marketplace/redhat-marketplace-w5lwr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w5lwr\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:40 crc kubenswrapper[4580]: I0112 13:10:40.517745 4580 status_manager.go:851] "Failed to get status for pod" podUID="30474fce-6a17-410e-8b71-7fc76ae2835c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:40 crc kubenswrapper[4580]: I0112 13:10:40.518034 4580 status_manager.go:851] "Failed to get status for pod" podUID="2ae20335-c7d3-46ef-84e6-129bc0550ab4" pod="openshift-marketplace/redhat-operators-ts2q4" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ts2q4\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:41 crc kubenswrapper[4580]: I0112 13:10:41.286091 4580 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:41 crc kubenswrapper[4580]: I0112 13:10:41.286761 4580 status_manager.go:851] "Failed to get status for pod" podUID="30474fce-6a17-410e-8b71-7fc76ae2835c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:41 crc kubenswrapper[4580]: I0112 13:10:41.287010 4580 status_manager.go:851] "Failed to get status for pod" podUID="5677e888-c379-4713-bcf6-e2e31288a0b6" pod="openshift-marketplace/redhat-marketplace-w5lwr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w5lwr\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:41 crc kubenswrapper[4580]: I0112 13:10:41.287218 4580 status_manager.go:851] "Failed to get status for pod" podUID="2ae20335-c7d3-46ef-84e6-129bc0550ab4" pod="openshift-marketplace/redhat-operators-ts2q4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ts2q4\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:41 crc kubenswrapper[4580]: I0112 13:10:41.309419 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-w5lwr" Jan 12 13:10:41 crc kubenswrapper[4580]: I0112 
13:10:41.309780 4580 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:41 crc kubenswrapper[4580]: I0112 13:10:41.310136 4580 status_manager.go:851] "Failed to get status for pod" podUID="30474fce-6a17-410e-8b71-7fc76ae2835c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:41 crc kubenswrapper[4580]: I0112 13:10:41.310309 4580 status_manager.go:851] "Failed to get status for pod" podUID="5677e888-c379-4713-bcf6-e2e31288a0b6" pod="openshift-marketplace/redhat-marketplace-w5lwr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w5lwr\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:41 crc kubenswrapper[4580]: I0112 13:10:41.310461 4580 status_manager.go:851] "Failed to get status for pod" podUID="2ae20335-c7d3-46ef-84e6-129bc0550ab4" pod="openshift-marketplace/redhat-operators-ts2q4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ts2q4\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:41 crc kubenswrapper[4580]: I0112 13:10:41.310493 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ts2q4" Jan 12 13:10:41 crc kubenswrapper[4580]: I0112 13:10:41.310694 4580 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:41 crc kubenswrapper[4580]: I0112 13:10:41.310854 4580 status_manager.go:851] "Failed to get status for pod" podUID="30474fce-6a17-410e-8b71-7fc76ae2835c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:41 crc kubenswrapper[4580]: I0112 13:10:41.311027 4580 status_manager.go:851] "Failed to get status for pod" podUID="5677e888-c379-4713-bcf6-e2e31288a0b6" pod="openshift-marketplace/redhat-marketplace-w5lwr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w5lwr\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:41 crc kubenswrapper[4580]: I0112 13:10:41.311674 4580 status_manager.go:851] "Failed to get status for pod" podUID="2ae20335-c7d3-46ef-84e6-129bc0550ab4" pod="openshift-marketplace/redhat-operators-ts2q4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ts2q4\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:41 crc kubenswrapper[4580]: E0112 13:10:41.958878 4580 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.25.161:6443: connect: connection refused" interval="6.4s" Jan 12 13:10:43 crc kubenswrapper[4580]: I0112 13:10:43.281385 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 12 13:10:43 crc kubenswrapper[4580]: I0112 13:10:43.282673 4580 status_manager.go:851] "Failed to get status for pod" podUID="2ae20335-c7d3-46ef-84e6-129bc0550ab4" pod="openshift-marketplace/redhat-operators-ts2q4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ts2q4\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:43 crc kubenswrapper[4580]: I0112 13:10:43.283113 4580 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:43 crc kubenswrapper[4580]: I0112 13:10:43.284526 4580 status_manager.go:851] "Failed to get status for pod" podUID="30474fce-6a17-410e-8b71-7fc76ae2835c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:43 crc kubenswrapper[4580]: I0112 13:10:43.284829 4580 status_manager.go:851] "Failed to get status for pod" podUID="5677e888-c379-4713-bcf6-e2e31288a0b6" pod="openshift-marketplace/redhat-marketplace-w5lwr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w5lwr\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:43 crc kubenswrapper[4580]: I0112 13:10:43.292340 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 12 13:10:43 crc kubenswrapper[4580]: I0112 13:10:43.292396 4580 
generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="bc8b55ba464a72a72e6361e6847c4e8c8b27f317e8eba5d95923fbaf62589880" exitCode=1 Jan 12 13:10:43 crc kubenswrapper[4580]: I0112 13:10:43.292491 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"bc8b55ba464a72a72e6361e6847c4e8c8b27f317e8eba5d95923fbaf62589880"} Jan 12 13:10:43 crc kubenswrapper[4580]: I0112 13:10:43.292926 4580 scope.go:117] "RemoveContainer" containerID="bc8b55ba464a72a72e6361e6847c4e8c8b27f317e8eba5d95923fbaf62589880" Jan 12 13:10:43 crc kubenswrapper[4580]: I0112 13:10:43.293192 4580 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c9730289-8e50-4a9a-b474-db6c268d5a30" Jan 12 13:10:43 crc kubenswrapper[4580]: I0112 13:10:43.293232 4580 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c9730289-8e50-4a9a-b474-db6c268d5a30" Jan 12 13:10:43 crc kubenswrapper[4580]: I0112 13:10:43.293171 4580 status_manager.go:851] "Failed to get status for pod" podUID="30474fce-6a17-410e-8b71-7fc76ae2835c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:43 crc kubenswrapper[4580]: I0112 13:10:43.293605 4580 status_manager.go:851] "Failed to get status for pod" podUID="5677e888-c379-4713-bcf6-e2e31288a0b6" pod="openshift-marketplace/redhat-marketplace-w5lwr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w5lwr\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:43 crc kubenswrapper[4580]: E0112 13:10:43.293652 4580 mirror_client.go:138] "Failed deleting a 
mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.25.161:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 12 13:10:43 crc kubenswrapper[4580]: I0112 13:10:43.293808 4580 status_manager.go:851] "Failed to get status for pod" podUID="2ae20335-c7d3-46ef-84e6-129bc0550ab4" pod="openshift-marketplace/redhat-operators-ts2q4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ts2q4\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:43 crc kubenswrapper[4580]: I0112 13:10:43.294011 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 12 13:10:43 crc kubenswrapper[4580]: I0112 13:10:43.294063 4580 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:43 crc kubenswrapper[4580]: I0112 13:10:43.294405 4580 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:43 crc kubenswrapper[4580]: W0112 13:10:43.316180 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-c1c67a42a934faf9e0bd26ae5eea269a791bc2b26e361e42da16211741e02bc5 WatchSource:0}: Error finding 
container c1c67a42a934faf9e0bd26ae5eea269a791bc2b26e361e42da16211741e02bc5: Status 404 returned error can't find the container with id c1c67a42a934faf9e0bd26ae5eea269a791bc2b26e361e42da16211741e02bc5 Jan 12 13:10:43 crc kubenswrapper[4580]: I0112 13:10:43.343978 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 12 13:10:44 crc kubenswrapper[4580]: I0112 13:10:44.301157 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 12 13:10:44 crc kubenswrapper[4580]: I0112 13:10:44.301239 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2a20a71bdc4b249a40093eae9c0a0be8f0927548611e2c886cc64e78a7979c89"} Jan 12 13:10:44 crc kubenswrapper[4580]: I0112 13:10:44.302131 4580 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:44 crc kubenswrapper[4580]: I0112 13:10:44.302368 4580 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:44 crc kubenswrapper[4580]: I0112 13:10:44.302482 4580 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" 
containerID="a98ae6934e2bc7bd22b189cbc09f37ce8791d8673821fc1958b2b0117faf127c" exitCode=0 Jan 12 13:10:44 crc kubenswrapper[4580]: I0112 13:10:44.302517 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"a98ae6934e2bc7bd22b189cbc09f37ce8791d8673821fc1958b2b0117faf127c"} Jan 12 13:10:44 crc kubenswrapper[4580]: I0112 13:10:44.302563 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c1c67a42a934faf9e0bd26ae5eea269a791bc2b26e361e42da16211741e02bc5"} Jan 12 13:10:44 crc kubenswrapper[4580]: I0112 13:10:44.302613 4580 status_manager.go:851] "Failed to get status for pod" podUID="30474fce-6a17-410e-8b71-7fc76ae2835c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:44 crc kubenswrapper[4580]: I0112 13:10:44.302761 4580 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c9730289-8e50-4a9a-b474-db6c268d5a30" Jan 12 13:10:44 crc kubenswrapper[4580]: I0112 13:10:44.302778 4580 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c9730289-8e50-4a9a-b474-db6c268d5a30" Jan 12 13:10:44 crc kubenswrapper[4580]: E0112 13:10:44.303018 4580 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.25.161:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 12 13:10:44 crc kubenswrapper[4580]: I0112 13:10:44.303010 4580 status_manager.go:851] "Failed to get status for pod" 
podUID="5677e888-c379-4713-bcf6-e2e31288a0b6" pod="openshift-marketplace/redhat-marketplace-w5lwr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w5lwr\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:44 crc kubenswrapper[4580]: I0112 13:10:44.303525 4580 status_manager.go:851] "Failed to get status for pod" podUID="2ae20335-c7d3-46ef-84e6-129bc0550ab4" pod="openshift-marketplace/redhat-operators-ts2q4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ts2q4\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:44 crc kubenswrapper[4580]: I0112 13:10:44.303834 4580 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:44 crc kubenswrapper[4580]: I0112 13:10:44.306137 4580 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:44 crc kubenswrapper[4580]: I0112 13:10:44.307661 4580 status_manager.go:851] "Failed to get status for pod" podUID="30474fce-6a17-410e-8b71-7fc76ae2835c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:44 crc kubenswrapper[4580]: I0112 13:10:44.307924 4580 status_manager.go:851] "Failed to 
get status for pod" podUID="5677e888-c379-4713-bcf6-e2e31288a0b6" pod="openshift-marketplace/redhat-marketplace-w5lwr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w5lwr\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:44 crc kubenswrapper[4580]: I0112 13:10:44.308212 4580 status_manager.go:851] "Failed to get status for pod" podUID="2ae20335-c7d3-46ef-84e6-129bc0550ab4" pod="openshift-marketplace/redhat-operators-ts2q4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ts2q4\": dial tcp 192.168.25.161:6443: connect: connection refused" Jan 12 13:10:44 crc kubenswrapper[4580]: E0112 13:10:44.317554 4580 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 192.168.25.161:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-operators-ts2q4.1889fdd7898ddcb0 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-operators-ts2q4,UID:2ae20335-c7d3-46ef-84e6-129bc0550ab4,APIVersion:v1,ResourceVersion:29874,FieldPath:spec.initContainers{extract-utilities},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-12 13:10:30.887201968 +0000 UTC m=+229.931420658,LastTimestamp:2026-01-12 13:10:30.887201968 +0000 UTC m=+229.931420658,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 12 13:10:45 crc kubenswrapper[4580]: I0112 13:10:45.309876 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"81c730d8e70f31d9578f11b8572528a15f120f7fb1512264507e107c489bdffa"} Jan 12 13:10:45 crc kubenswrapper[4580]: I0112 13:10:45.310269 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5c6a078c1d74695550679d78bd409c3d5ebaef063bd5a250a7e96b6ad6eea43f"} Jan 12 13:10:45 crc kubenswrapper[4580]: I0112 13:10:45.310284 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"4b05d7b0f111daafba983ea2bb4dfa044721ae069e75fcf165fafe041cce62f6"} Jan 12 13:10:45 crc kubenswrapper[4580]: I0112 13:10:45.310295 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"fe586e9a2985ccdc2509162d60821306a24e7c9420a5ea7fd489f8ceed6ab3c6"} Jan 12 13:10:45 crc kubenswrapper[4580]: I0112 13:10:45.310306 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"98b656e4984c8a49af5747fbafedd82e46f25f04225630d9764c7eb7b4a709fb"} Jan 12 13:10:45 crc kubenswrapper[4580]: I0112 13:10:45.310589 4580 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c9730289-8e50-4a9a-b474-db6c268d5a30" Jan 12 13:10:45 crc kubenswrapper[4580]: I0112 13:10:45.310604 4580 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c9730289-8e50-4a9a-b474-db6c268d5a30" Jan 12 13:10:45 crc kubenswrapper[4580]: I0112 13:10:45.310924 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 12 13:10:48 crc kubenswrapper[4580]: I0112 13:10:48.294867 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 12 13:10:48 crc kubenswrapper[4580]: I0112 13:10:48.294920 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 12 13:10:48 crc kubenswrapper[4580]: I0112 13:10:48.300156 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 12 13:10:50 crc kubenswrapper[4580]: I0112 13:10:50.656135 4580 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 12 13:10:51 crc kubenswrapper[4580]: I0112 13:10:51.230999 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 12 13:10:51 crc kubenswrapper[4580]: I0112 13:10:51.296534 4580 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="f11e530a-ee90-465b-bc7d-3f9874ea552d" Jan 12 13:10:51 crc kubenswrapper[4580]: I0112 13:10:51.339460 4580 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c9730289-8e50-4a9a-b474-db6c268d5a30" Jan 12 13:10:51 crc kubenswrapper[4580]: I0112 13:10:51.339498 4580 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c9730289-8e50-4a9a-b474-db6c268d5a30" Jan 12 13:10:51 crc kubenswrapper[4580]: I0112 13:10:51.342586 4580 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="f11e530a-ee90-465b-bc7d-3f9874ea552d" 
Jan 12 13:10:51 crc kubenswrapper[4580]: I0112 13:10:51.343506 4580 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://98b656e4984c8a49af5747fbafedd82e46f25f04225630d9764c7eb7b4a709fb" Jan 12 13:10:51 crc kubenswrapper[4580]: I0112 13:10:51.343612 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 12 13:10:52 crc kubenswrapper[4580]: I0112 13:10:52.344189 4580 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c9730289-8e50-4a9a-b474-db6c268d5a30" Jan 12 13:10:52 crc kubenswrapper[4580]: I0112 13:10:52.344528 4580 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c9730289-8e50-4a9a-b474-db6c268d5a30" Jan 12 13:10:52 crc kubenswrapper[4580]: I0112 13:10:52.346991 4580 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="f11e530a-ee90-465b-bc7d-3f9874ea552d" Jan 12 13:10:53 crc kubenswrapper[4580]: I0112 13:10:53.343139 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 12 13:10:53 crc kubenswrapper[4580]: I0112 13:10:53.347811 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 12 13:10:54 crc kubenswrapper[4580]: I0112 13:10:54.361004 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 12 13:11:00 crc kubenswrapper[4580]: I0112 13:11:00.740086 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 12 13:11:01 crc 
kubenswrapper[4580]: I0112 13:11:01.113348 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 12 13:11:01 crc kubenswrapper[4580]: I0112 13:11:01.567876 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 12 13:11:01 crc kubenswrapper[4580]: I0112 13:11:01.796505 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 12 13:11:01 crc kubenswrapper[4580]: I0112 13:11:01.806675 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 12 13:11:01 crc kubenswrapper[4580]: I0112 13:11:01.892847 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 12 13:11:01 crc kubenswrapper[4580]: I0112 13:11:01.988826 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 12 13:11:02 crc kubenswrapper[4580]: I0112 13:11:02.015087 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 12 13:11:02 crc kubenswrapper[4580]: I0112 13:11:02.158806 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 12 13:11:02 crc kubenswrapper[4580]: I0112 13:11:02.530667 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 12 13:11:02 crc kubenswrapper[4580]: I0112 13:11:02.540163 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 12 13:11:02 crc kubenswrapper[4580]: I0112 13:11:02.553784 4580 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console"/"console-oauth-config" Jan 12 13:11:02 crc kubenswrapper[4580]: I0112 13:11:02.623395 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 12 13:11:03 crc kubenswrapper[4580]: I0112 13:11:03.101255 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 12 13:11:03 crc kubenswrapper[4580]: I0112 13:11:03.198521 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 12 13:11:03 crc kubenswrapper[4580]: I0112 13:11:03.329586 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 12 13:11:03 crc kubenswrapper[4580]: I0112 13:11:03.351527 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 12 13:11:03 crc kubenswrapper[4580]: I0112 13:11:03.614347 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 12 13:11:03 crc kubenswrapper[4580]: I0112 13:11:03.621584 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 12 13:11:04 crc kubenswrapper[4580]: I0112 13:11:04.035829 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 12 13:11:04 crc kubenswrapper[4580]: I0112 13:11:04.141729 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 12 13:11:04 crc kubenswrapper[4580]: I0112 13:11:04.168611 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 12 13:11:04 crc kubenswrapper[4580]: I0112 13:11:04.169452 4580 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 12 13:11:04 crc kubenswrapper[4580]: I0112 13:11:04.242151 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 12 13:11:04 crc kubenswrapper[4580]: I0112 13:11:04.268902 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 12 13:11:04 crc kubenswrapper[4580]: I0112 13:11:04.328247 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 12 13:11:04 crc kubenswrapper[4580]: I0112 13:11:04.435720 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 12 13:11:04 crc kubenswrapper[4580]: I0112 13:11:04.437005 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 12 13:11:04 crc kubenswrapper[4580]: I0112 13:11:04.519436 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 12 13:11:04 crc kubenswrapper[4580]: I0112 13:11:04.555619 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 12 13:11:04 crc kubenswrapper[4580]: I0112 13:11:04.559449 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 12 13:11:04 crc kubenswrapper[4580]: I0112 13:11:04.604406 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 12 13:11:04 crc kubenswrapper[4580]: I0112 13:11:04.704771 4580 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 12 13:11:04 crc kubenswrapper[4580]: I0112 13:11:04.707853 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 12 13:11:04 crc kubenswrapper[4580]: I0112 13:11:04.739603 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 12 13:11:04 crc kubenswrapper[4580]: I0112 13:11:04.740362 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 12 13:11:04 crc kubenswrapper[4580]: I0112 13:11:04.839708 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 12 13:11:04 crc kubenswrapper[4580]: I0112 13:11:04.852344 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 12 13:11:04 crc kubenswrapper[4580]: I0112 13:11:04.904573 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 12 13:11:05 crc kubenswrapper[4580]: I0112 13:11:05.038420 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 12 13:11:05 crc kubenswrapper[4580]: I0112 13:11:05.085369 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 12 13:11:05 crc kubenswrapper[4580]: I0112 13:11:05.175630 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 12 13:11:05 crc kubenswrapper[4580]: I0112 13:11:05.188218 4580 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 12 13:11:05 crc kubenswrapper[4580]: I0112 13:11:05.281205 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 12 13:11:05 crc kubenswrapper[4580]: I0112 13:11:05.475928 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 12 13:11:05 crc kubenswrapper[4580]: I0112 13:11:05.483342 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 12 13:11:05 crc kubenswrapper[4580]: I0112 13:11:05.775615 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 12 13:11:05 crc kubenswrapper[4580]: I0112 13:11:05.894755 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 12 13:11:05 crc kubenswrapper[4580]: I0112 13:11:05.924919 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 12 13:11:06 crc kubenswrapper[4580]: I0112 13:11:06.008520 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 12 13:11:06 crc kubenswrapper[4580]: I0112 13:11:06.096239 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 12 13:11:06 crc kubenswrapper[4580]: I0112 13:11:06.250842 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 12 13:11:06 crc kubenswrapper[4580]: I0112 13:11:06.272806 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 12 13:11:06 crc 
kubenswrapper[4580]: I0112 13:11:06.330839 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 12 13:11:06 crc kubenswrapper[4580]: I0112 13:11:06.331946 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 12 13:11:06 crc kubenswrapper[4580]: I0112 13:11:06.389061 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 12 13:11:06 crc kubenswrapper[4580]: I0112 13:11:06.512906 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 12 13:11:06 crc kubenswrapper[4580]: I0112 13:11:06.513664 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 12 13:11:06 crc kubenswrapper[4580]: I0112 13:11:06.553070 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 12 13:11:06 crc kubenswrapper[4580]: I0112 13:11:06.623640 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 12 13:11:06 crc kubenswrapper[4580]: I0112 13:11:06.632379 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 12 13:11:06 crc kubenswrapper[4580]: I0112 13:11:06.642695 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 12 13:11:06 crc kubenswrapper[4580]: I0112 13:11:06.766517 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 12 13:11:06 crc kubenswrapper[4580]: I0112 13:11:06.866041 4580 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 12 13:11:06 crc kubenswrapper[4580]: I0112 13:11:06.878500 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 12 13:11:06 crc kubenswrapper[4580]: I0112 13:11:06.879398 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 12 13:11:06 crc kubenswrapper[4580]: I0112 13:11:06.922001 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 12 13:11:06 crc kubenswrapper[4580]: I0112 13:11:06.996268 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 12 13:11:07 crc kubenswrapper[4580]: I0112 13:11:07.003672 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 12 13:11:07 crc kubenswrapper[4580]: I0112 13:11:07.009935 4580 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 12 13:11:07 crc kubenswrapper[4580]: I0112 13:11:07.145688 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 12 13:11:07 crc kubenswrapper[4580]: I0112 13:11:07.248064 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 12 13:11:07 crc kubenswrapper[4580]: I0112 13:11:07.262855 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 12 13:11:07 crc kubenswrapper[4580]: I0112 13:11:07.277172 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 12 13:11:07 crc kubenswrapper[4580]: I0112 
13:11:07.363432 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 12 13:11:07 crc kubenswrapper[4580]: I0112 13:11:07.459518 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 12 13:11:07 crc kubenswrapper[4580]: I0112 13:11:07.580559 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 12 13:11:07 crc kubenswrapper[4580]: I0112 13:11:07.700769 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 12 13:11:07 crc kubenswrapper[4580]: I0112 13:11:07.727855 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 12 13:11:07 crc kubenswrapper[4580]: I0112 13:11:07.745749 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 12 13:11:07 crc kubenswrapper[4580]: I0112 13:11:07.804827 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 12 13:11:07 crc kubenswrapper[4580]: I0112 13:11:07.924449 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 12 13:11:07 crc kubenswrapper[4580]: I0112 13:11:07.971900 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 12 13:11:08 crc kubenswrapper[4580]: I0112 13:11:08.002978 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 12 13:11:08 crc kubenswrapper[4580]: I0112 13:11:08.141003 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 12 13:11:08 crc kubenswrapper[4580]: 
I0112 13:11:08.186927 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 12 13:11:08 crc kubenswrapper[4580]: I0112 13:11:08.218564 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 12 13:11:08 crc kubenswrapper[4580]: I0112 13:11:08.222449 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 12 13:11:08 crc kubenswrapper[4580]: I0112 13:11:08.237497 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 12 13:11:08 crc kubenswrapper[4580]: I0112 13:11:08.261888 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 12 13:11:08 crc kubenswrapper[4580]: I0112 13:11:08.287131 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 12 13:11:08 crc kubenswrapper[4580]: I0112 13:11:08.308361 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 12 13:11:08 crc kubenswrapper[4580]: I0112 13:11:08.312242 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 12 13:11:08 crc kubenswrapper[4580]: I0112 13:11:08.322895 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 12 13:11:08 crc kubenswrapper[4580]: I0112 13:11:08.330575 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 12 13:11:08 crc kubenswrapper[4580]: I0112 13:11:08.365148 4580 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-api"/"kube-root-ca.crt" Jan 12 13:11:08 crc kubenswrapper[4580]: I0112 13:11:08.378898 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 12 13:11:08 crc kubenswrapper[4580]: I0112 13:11:08.427969 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 12 13:11:08 crc kubenswrapper[4580]: I0112 13:11:08.449896 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 12 13:11:08 crc kubenswrapper[4580]: I0112 13:11:08.497891 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 12 13:11:08 crc kubenswrapper[4580]: I0112 13:11:08.498549 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 12 13:11:08 crc kubenswrapper[4580]: I0112 13:11:08.627430 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 12 13:11:08 crc kubenswrapper[4580]: I0112 13:11:08.685339 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 12 13:11:08 crc kubenswrapper[4580]: I0112 13:11:08.764210 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 12 13:11:08 crc kubenswrapper[4580]: I0112 13:11:08.773033 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 12 13:11:08 crc kubenswrapper[4580]: I0112 13:11:08.910541 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 12 13:11:09 crc kubenswrapper[4580]: I0112 13:11:09.091367 4580 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-dns"/"dns-default-metrics-tls" Jan 12 13:11:09 crc kubenswrapper[4580]: I0112 13:11:09.101617 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 12 13:11:09 crc kubenswrapper[4580]: I0112 13:11:09.160846 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 12 13:11:09 crc kubenswrapper[4580]: I0112 13:11:09.204352 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 12 13:11:09 crc kubenswrapper[4580]: I0112 13:11:09.284521 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 12 13:11:09 crc kubenswrapper[4580]: I0112 13:11:09.288510 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 12 13:11:09 crc kubenswrapper[4580]: I0112 13:11:09.343413 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 12 13:11:09 crc kubenswrapper[4580]: I0112 13:11:09.366639 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 12 13:11:09 crc kubenswrapper[4580]: I0112 13:11:09.374847 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 12 13:11:09 crc kubenswrapper[4580]: I0112 13:11:09.390713 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 12 13:11:09 crc kubenswrapper[4580]: I0112 13:11:09.439901 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 12 13:11:09 
crc kubenswrapper[4580]: I0112 13:11:09.653386 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 12 13:11:09 crc kubenswrapper[4580]: I0112 13:11:09.680026 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 12 13:11:09 crc kubenswrapper[4580]: I0112 13:11:09.694095 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 12 13:11:09 crc kubenswrapper[4580]: I0112 13:11:09.734339 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 12 13:11:09 crc kubenswrapper[4580]: I0112 13:11:09.773900 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 12 13:11:09 crc kubenswrapper[4580]: I0112 13:11:09.824774 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 12 13:11:09 crc kubenswrapper[4580]: I0112 13:11:09.924906 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 12 13:11:09 crc kubenswrapper[4580]: I0112 13:11:09.928708 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 12 13:11:09 crc kubenswrapper[4580]: I0112 13:11:09.982689 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 12 13:11:10 crc kubenswrapper[4580]: I0112 13:11:10.001111 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 12 13:11:10 crc kubenswrapper[4580]: I0112 13:11:10.022789 4580 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 12 13:11:10 crc kubenswrapper[4580]: I0112 13:11:10.022861 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 12 13:11:10 crc kubenswrapper[4580]: I0112 13:11:10.224765 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 12 13:11:10 crc kubenswrapper[4580]: I0112 13:11:10.232798 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 12 13:11:10 crc kubenswrapper[4580]: I0112 13:11:10.239512 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 12 13:11:10 crc kubenswrapper[4580]: I0112 13:11:10.240532 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 12 13:11:10 crc kubenswrapper[4580]: I0112 13:11:10.270625 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 12 13:11:10 crc kubenswrapper[4580]: I0112 13:11:10.309040 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 12 13:11:10 crc kubenswrapper[4580]: I0112 13:11:10.312245 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 12 13:11:10 crc kubenswrapper[4580]: I0112 13:11:10.364070 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 12 13:11:10 crc kubenswrapper[4580]: I0112 13:11:10.452365 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 12 
13:11:10 crc kubenswrapper[4580]: I0112 13:11:10.460165 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 12 13:11:10 crc kubenswrapper[4580]: I0112 13:11:10.506522 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 12 13:11:10 crc kubenswrapper[4580]: I0112 13:11:10.514297 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 12 13:11:10 crc kubenswrapper[4580]: I0112 13:11:10.530801 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 12 13:11:10 crc kubenswrapper[4580]: I0112 13:11:10.546405 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 12 13:11:10 crc kubenswrapper[4580]: I0112 13:11:10.653423 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 12 13:11:10 crc kubenswrapper[4580]: I0112 13:11:10.719887 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 12 13:11:10 crc kubenswrapper[4580]: I0112 13:11:10.737500 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 12 13:11:10 crc kubenswrapper[4580]: I0112 13:11:10.911959 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 12 13:11:10 crc kubenswrapper[4580]: I0112 13:11:10.970935 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 12 13:11:10 crc kubenswrapper[4580]: I0112 13:11:10.984874 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 12 
13:11:11 crc kubenswrapper[4580]: I0112 13:11:11.048435 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 12 13:11:11 crc kubenswrapper[4580]: I0112 13:11:11.079356 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 12 13:11:11 crc kubenswrapper[4580]: I0112 13:11:11.121099 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 12 13:11:11 crc kubenswrapper[4580]: I0112 13:11:11.180517 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 12 13:11:11 crc kubenswrapper[4580]: I0112 13:11:11.198114 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 12 13:11:11 crc kubenswrapper[4580]: I0112 13:11:11.222245 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 12 13:11:11 crc kubenswrapper[4580]: I0112 13:11:11.228674 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 12 13:11:11 crc kubenswrapper[4580]: I0112 13:11:11.268537 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 12 13:11:11 crc kubenswrapper[4580]: I0112 13:11:11.286722 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 12 13:11:11 crc kubenswrapper[4580]: I0112 13:11:11.301275 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 12 13:11:11 crc kubenswrapper[4580]: I0112 13:11:11.304374 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 12 
13:11:11 crc kubenswrapper[4580]: I0112 13:11:11.455432 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 12 13:11:11 crc kubenswrapper[4580]: I0112 13:11:11.463271 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 12 13:11:11 crc kubenswrapper[4580]: I0112 13:11:11.512043 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 12 13:11:11 crc kubenswrapper[4580]: I0112 13:11:11.512793 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 12 13:11:11 crc kubenswrapper[4580]: I0112 13:11:11.541137 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 12 13:11:11 crc kubenswrapper[4580]: I0112 13:11:11.583677 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 12 13:11:11 crc kubenswrapper[4580]: I0112 13:11:11.589889 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 12 13:11:11 crc kubenswrapper[4580]: I0112 13:11:11.590593 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 12 13:11:11 crc kubenswrapper[4580]: I0112 13:11:11.800201 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 12 13:11:11 crc kubenswrapper[4580]: I0112 13:11:11.841249 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 12 13:11:11 crc kubenswrapper[4580]: I0112 13:11:11.944646 4580 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"service-ca-bundle" Jan 12 13:11:11 crc kubenswrapper[4580]: I0112 13:11:11.977182 4580 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 12 13:11:11 crc kubenswrapper[4580]: I0112 13:11:11.978607 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=41.978592603 podStartE2EDuration="41.978592603s" podCreationTimestamp="2026-01-12 13:10:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:10:50.730954299 +0000 UTC m=+249.775172989" watchObservedRunningTime="2026-01-12 13:11:11.978592603 +0000 UTC m=+271.022811293" Jan 12 13:11:11 crc kubenswrapper[4580]: I0112 13:11:11.978731 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-w5lwr" podStartSLOduration=40.457993351 podStartE2EDuration="42.978726925s" podCreationTimestamp="2026-01-12 13:10:29 +0000 UTC" firstStartedPulling="2026-01-12 13:10:31.183004114 +0000 UTC m=+230.227222805" lastFinishedPulling="2026-01-12 13:10:33.703737688 +0000 UTC m=+232.747956379" observedRunningTime="2026-01-12 13:10:50.692879817 +0000 UTC m=+249.737098507" watchObservedRunningTime="2026-01-12 13:11:11.978726925 +0000 UTC m=+271.022945616" Jan 12 13:11:11 crc kubenswrapper[4580]: I0112 13:11:11.980533 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ts2q4" podStartSLOduration=39.353913519 podStartE2EDuration="41.980527542s" podCreationTimestamp="2026-01-12 13:10:30 +0000 UTC" firstStartedPulling="2026-01-12 13:10:31.183258302 +0000 UTC m=+230.227476992" lastFinishedPulling="2026-01-12 13:10:33.809872325 +0000 UTC m=+232.854091015" observedRunningTime="2026-01-12 13:10:50.707836488 +0000 UTC m=+249.752055178" 
watchObservedRunningTime="2026-01-12 13:11:11.980527542 +0000 UTC m=+271.024746231" Jan 12 13:11:11 crc kubenswrapper[4580]: I0112 13:11:11.980835 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 12 13:11:11 crc kubenswrapper[4580]: I0112 13:11:11.980876 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 12 13:11:11 crc kubenswrapper[4580]: I0112 13:11:11.984314 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 12 13:11:11 crc kubenswrapper[4580]: I0112 13:11:11.993599 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 12 13:11:11 crc kubenswrapper[4580]: I0112 13:11:11.993884 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=21.993868548000002 podStartE2EDuration="21.993868548s" podCreationTimestamp="2026-01-12 13:10:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:11:11.99266311 +0000 UTC m=+271.036881801" watchObservedRunningTime="2026-01-12 13:11:11.993868548 +0000 UTC m=+271.038087238" Jan 12 13:11:12 crc kubenswrapper[4580]: I0112 13:11:12.052162 4580 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 12 13:11:12 crc kubenswrapper[4580]: I0112 13:11:12.052176 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 12 13:11:12 crc kubenswrapper[4580]: I0112 13:11:12.052374 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" 
podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://68cc4778f75e5b065cf4ab585320998dc2550993379185138ae04773a7017099" gracePeriod=5 Jan 12 13:11:12 crc kubenswrapper[4580]: I0112 13:11:12.125721 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 12 13:11:12 crc kubenswrapper[4580]: I0112 13:11:12.205231 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 12 13:11:12 crc kubenswrapper[4580]: I0112 13:11:12.320842 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 12 13:11:12 crc kubenswrapper[4580]: I0112 13:11:12.407598 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 12 13:11:12 crc kubenswrapper[4580]: I0112 13:11:12.501831 4580 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 12 13:11:12 crc kubenswrapper[4580]: I0112 13:11:12.534018 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 12 13:11:12 crc kubenswrapper[4580]: I0112 13:11:12.653738 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 12 13:11:12 crc kubenswrapper[4580]: I0112 13:11:12.810343 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 12 13:11:12 crc kubenswrapper[4580]: I0112 13:11:12.844449 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 12 13:11:12 crc kubenswrapper[4580]: I0112 13:11:12.885486 4580 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 12 13:11:12 crc kubenswrapper[4580]: I0112 13:11:12.923560 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 12 13:11:12 crc kubenswrapper[4580]: I0112 13:11:12.940817 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 12 13:11:12 crc kubenswrapper[4580]: I0112 13:11:12.940877 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 12 13:11:13 crc kubenswrapper[4580]: I0112 13:11:13.078169 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 12 13:11:13 crc kubenswrapper[4580]: I0112 13:11:13.228468 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 12 13:11:13 crc kubenswrapper[4580]: I0112 13:11:13.292152 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 12 13:11:13 crc kubenswrapper[4580]: I0112 13:11:13.309228 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 12 13:11:13 crc kubenswrapper[4580]: I0112 13:11:13.375909 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 12 13:11:13 crc kubenswrapper[4580]: I0112 13:11:13.395271 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 12 13:11:13 crc kubenswrapper[4580]: I0112 13:11:13.545597 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 12 13:11:13 crc kubenswrapper[4580]: I0112 13:11:13.640083 4580 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 12 13:11:13 crc kubenswrapper[4580]: I0112 13:11:13.762227 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 12 13:11:13 crc kubenswrapper[4580]: I0112 13:11:13.767934 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 12 13:11:13 crc kubenswrapper[4580]: I0112 13:11:13.802676 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 12 13:11:13 crc kubenswrapper[4580]: I0112 13:11:13.822940 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 12 13:11:13 crc kubenswrapper[4580]: I0112 13:11:13.876930 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 12 13:11:13 crc kubenswrapper[4580]: I0112 13:11:13.921009 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 12 13:11:13 crc kubenswrapper[4580]: I0112 13:11:13.966561 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 12 13:11:13 crc kubenswrapper[4580]: I0112 13:11:13.981440 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 12 13:11:14 crc kubenswrapper[4580]: I0112 13:11:14.312097 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 12 13:11:14 crc kubenswrapper[4580]: I0112 13:11:14.340462 4580 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 12 13:11:14 
crc kubenswrapper[4580]: I0112 13:11:14.346616 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 12 13:11:14 crc kubenswrapper[4580]: I0112 13:11:14.355692 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 12 13:11:14 crc kubenswrapper[4580]: I0112 13:11:14.371677 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 12 13:11:14 crc kubenswrapper[4580]: I0112 13:11:14.381125 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 12 13:11:14 crc kubenswrapper[4580]: I0112 13:11:14.381832 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 12 13:11:14 crc kubenswrapper[4580]: I0112 13:11:14.455824 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 12 13:11:14 crc kubenswrapper[4580]: I0112 13:11:14.487116 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 12 13:11:14 crc kubenswrapper[4580]: I0112 13:11:14.540314 4580 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 12 13:11:14 crc kubenswrapper[4580]: I0112 13:11:14.586195 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 12 13:11:14 crc kubenswrapper[4580]: I0112 13:11:14.634391 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 12 13:11:14 crc kubenswrapper[4580]: I0112 13:11:14.642328 4580 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-multus"/"cni-copy-resources" Jan 12 13:11:14 crc kubenswrapper[4580]: I0112 13:11:14.779065 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 12 13:11:14 crc kubenswrapper[4580]: I0112 13:11:14.799904 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 12 13:11:15 crc kubenswrapper[4580]: I0112 13:11:15.074532 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 12 13:11:15 crc kubenswrapper[4580]: I0112 13:11:15.078590 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 12 13:11:15 crc kubenswrapper[4580]: I0112 13:11:15.169852 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 12 13:11:15 crc kubenswrapper[4580]: I0112 13:11:15.199833 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 12 13:11:15 crc kubenswrapper[4580]: I0112 13:11:15.223819 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 12 13:11:15 crc kubenswrapper[4580]: I0112 13:11:15.246334 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 12 13:11:15 crc kubenswrapper[4580]: I0112 13:11:15.315814 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 12 13:11:15 crc kubenswrapper[4580]: I0112 13:11:15.493175 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 12 13:11:15 crc kubenswrapper[4580]: I0112 13:11:15.655254 4580 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 12 13:11:15 crc kubenswrapper[4580]: I0112 13:11:15.683969 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 12 13:11:15 crc kubenswrapper[4580]: I0112 13:11:15.685752 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 12 13:11:15 crc kubenswrapper[4580]: I0112 13:11:15.847346 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 12 13:11:15 crc kubenswrapper[4580]: I0112 13:11:15.863154 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 12 13:11:15 crc kubenswrapper[4580]: I0112 13:11:15.916707 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 12 13:11:16 crc kubenswrapper[4580]: I0112 13:11:16.085787 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 12 13:11:16 crc kubenswrapper[4580]: I0112 13:11:16.181337 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 12 13:11:16 crc kubenswrapper[4580]: I0112 13:11:16.444936 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 12 13:11:16 crc kubenswrapper[4580]: I0112 13:11:16.478973 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 12 13:11:16 crc kubenswrapper[4580]: I0112 13:11:16.480084 4580 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress-canary"/"canary-serving-cert" Jan 12 13:11:16 crc kubenswrapper[4580]: I0112 13:11:16.505160 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 12 13:11:16 crc kubenswrapper[4580]: I0112 13:11:16.651148 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 12 13:11:16 crc kubenswrapper[4580]: I0112 13:11:16.652368 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 12 13:11:16 crc kubenswrapper[4580]: I0112 13:11:16.748431 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 12 13:11:16 crc kubenswrapper[4580]: I0112 13:11:16.903429 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 12 13:11:16 crc kubenswrapper[4580]: I0112 13:11:16.954906 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 12 13:11:17 crc kubenswrapper[4580]: I0112 13:11:17.062154 4580 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 12 13:11:17 crc kubenswrapper[4580]: I0112 13:11:17.086521 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 12 13:11:17 crc kubenswrapper[4580]: I0112 13:11:17.197043 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 12 13:11:17 crc kubenswrapper[4580]: I0112 13:11:17.467653 4580 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 12 13:11:17 crc kubenswrapper[4580]: I0112 13:11:17.467715 4580 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="68cc4778f75e5b065cf4ab585320998dc2550993379185138ae04773a7017099" exitCode=137 Jan 12 13:11:17 crc kubenswrapper[4580]: I0112 13:11:17.554652 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 12 13:11:17 crc kubenswrapper[4580]: I0112 13:11:17.595405 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 12 13:11:17 crc kubenswrapper[4580]: I0112 13:11:17.603645 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 12 13:11:17 crc kubenswrapper[4580]: I0112 13:11:17.603722 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 12 13:11:17 crc kubenswrapper[4580]: I0112 13:11:17.750405 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 12 13:11:17 crc kubenswrapper[4580]: I0112 13:11:17.750463 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 12 13:11:17 crc kubenswrapper[4580]: I0112 13:11:17.750485 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 12 13:11:17 crc kubenswrapper[4580]: I0112 13:11:17.750507 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 12 13:11:17 crc kubenswrapper[4580]: I0112 13:11:17.750516 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 12 13:11:17 crc kubenswrapper[4580]: I0112 13:11:17.750542 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 12 13:11:17 crc kubenswrapper[4580]: I0112 13:11:17.750563 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 12 13:11:17 crc kubenswrapper[4580]: I0112 13:11:17.750571 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 12 13:11:17 crc kubenswrapper[4580]: I0112 13:11:17.750570 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 12 13:11:17 crc kubenswrapper[4580]: I0112 13:11:17.750751 4580 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 12 13:11:17 crc kubenswrapper[4580]: I0112 13:11:17.750762 4580 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 12 13:11:17 crc kubenswrapper[4580]: I0112 13:11:17.750771 4580 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 12 13:11:17 crc kubenswrapper[4580]: I0112 13:11:17.750779 4580 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 12 13:11:17 crc kubenswrapper[4580]: I0112 13:11:17.758593 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 12 13:11:17 crc kubenswrapper[4580]: I0112 13:11:17.851723 4580 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 12 13:11:18 crc kubenswrapper[4580]: I0112 13:11:18.476695 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 12 13:11:18 crc kubenswrapper[4580]: I0112 13:11:18.476793 4580 scope.go:117] "RemoveContainer" containerID="68cc4778f75e5b065cf4ab585320998dc2550993379185138ae04773a7017099" Jan 12 13:11:18 crc kubenswrapper[4580]: I0112 13:11:18.476840 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 12 13:11:18 crc kubenswrapper[4580]: I0112 13:11:18.671523 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 12 13:11:19 crc kubenswrapper[4580]: I0112 13:11:19.288338 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 12 13:11:19 crc kubenswrapper[4580]: I0112 13:11:19.288594 4580 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 12 13:11:19 crc kubenswrapper[4580]: I0112 13:11:19.297834 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 12 13:11:19 crc kubenswrapper[4580]: I0112 13:11:19.297863 4580 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" 
mirrorPodUID="0b90b24e-a98f-48ba-9c09-808ae4ccdc17" Jan 12 13:11:19 crc kubenswrapper[4580]: I0112 13:11:19.300163 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 12 13:11:19 crc kubenswrapper[4580]: I0112 13:11:19.300208 4580 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="0b90b24e-a98f-48ba-9c09-808ae4ccdc17" Jan 12 13:11:20 crc kubenswrapper[4580]: I0112 13:11:20.044421 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 12 13:11:41 crc kubenswrapper[4580]: I0112 13:11:41.164411 4580 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 12 13:11:48 crc kubenswrapper[4580]: I0112 13:11:48.205943 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-89lkz"] Jan 12 13:11:48 crc kubenswrapper[4580]: E0112 13:11:48.206623 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 12 13:11:48 crc kubenswrapper[4580]: I0112 13:11:48.206637 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 12 13:11:48 crc kubenswrapper[4580]: E0112 13:11:48.206655 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30474fce-6a17-410e-8b71-7fc76ae2835c" containerName="installer" Jan 12 13:11:48 crc kubenswrapper[4580]: I0112 13:11:48.206662 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="30474fce-6a17-410e-8b71-7fc76ae2835c" containerName="installer" Jan 12 13:11:48 crc kubenswrapper[4580]: I0112 13:11:48.206771 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 
12 13:11:48 crc kubenswrapper[4580]: I0112 13:11:48.206785 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="30474fce-6a17-410e-8b71-7fc76ae2835c" containerName="installer" Jan 12 13:11:48 crc kubenswrapper[4580]: I0112 13:11:48.207445 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-89lkz" Jan 12 13:11:48 crc kubenswrapper[4580]: I0112 13:11:48.211303 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 12 13:11:48 crc kubenswrapper[4580]: I0112 13:11:48.216741 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-89lkz"] Jan 12 13:11:48 crc kubenswrapper[4580]: I0112 13:11:48.362160 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45d72a58-4072-4c37-95c8-b4668060c64c-utilities\") pod \"certified-operators-89lkz\" (UID: \"45d72a58-4072-4c37-95c8-b4668060c64c\") " pod="openshift-marketplace/certified-operators-89lkz" Jan 12 13:11:48 crc kubenswrapper[4580]: I0112 13:11:48.362497 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l889q\" (UniqueName: \"kubernetes.io/projected/45d72a58-4072-4c37-95c8-b4668060c64c-kube-api-access-l889q\") pod \"certified-operators-89lkz\" (UID: \"45d72a58-4072-4c37-95c8-b4668060c64c\") " pod="openshift-marketplace/certified-operators-89lkz" Jan 12 13:11:48 crc kubenswrapper[4580]: I0112 13:11:48.362575 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45d72a58-4072-4c37-95c8-b4668060c64c-catalog-content\") pod \"certified-operators-89lkz\" (UID: \"45d72a58-4072-4c37-95c8-b4668060c64c\") " pod="openshift-marketplace/certified-operators-89lkz" 
Jan 12 13:11:48 crc kubenswrapper[4580]: I0112 13:11:48.401279 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rcbw9"] Jan 12 13:11:48 crc kubenswrapper[4580]: I0112 13:11:48.403840 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rcbw9" Jan 12 13:11:48 crc kubenswrapper[4580]: I0112 13:11:48.412645 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 12 13:11:48 crc kubenswrapper[4580]: I0112 13:11:48.421674 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rcbw9"] Jan 12 13:11:48 crc kubenswrapper[4580]: I0112 13:11:48.463619 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l889q\" (UniqueName: \"kubernetes.io/projected/45d72a58-4072-4c37-95c8-b4668060c64c-kube-api-access-l889q\") pod \"certified-operators-89lkz\" (UID: \"45d72a58-4072-4c37-95c8-b4668060c64c\") " pod="openshift-marketplace/certified-operators-89lkz" Jan 12 13:11:48 crc kubenswrapper[4580]: I0112 13:11:48.463723 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45d72a58-4072-4c37-95c8-b4668060c64c-catalog-content\") pod \"certified-operators-89lkz\" (UID: \"45d72a58-4072-4c37-95c8-b4668060c64c\") " pod="openshift-marketplace/certified-operators-89lkz" Jan 12 13:11:48 crc kubenswrapper[4580]: I0112 13:11:48.463751 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45d72a58-4072-4c37-95c8-b4668060c64c-utilities\") pod \"certified-operators-89lkz\" (UID: \"45d72a58-4072-4c37-95c8-b4668060c64c\") " pod="openshift-marketplace/certified-operators-89lkz" Jan 12 13:11:48 crc kubenswrapper[4580]: I0112 13:11:48.464361 4580 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45d72a58-4072-4c37-95c8-b4668060c64c-catalog-content\") pod \"certified-operators-89lkz\" (UID: \"45d72a58-4072-4c37-95c8-b4668060c64c\") " pod="openshift-marketplace/certified-operators-89lkz" Jan 12 13:11:48 crc kubenswrapper[4580]: I0112 13:11:48.464616 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45d72a58-4072-4c37-95c8-b4668060c64c-utilities\") pod \"certified-operators-89lkz\" (UID: \"45d72a58-4072-4c37-95c8-b4668060c64c\") " pod="openshift-marketplace/certified-operators-89lkz" Jan 12 13:11:48 crc kubenswrapper[4580]: I0112 13:11:48.480139 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l889q\" (UniqueName: \"kubernetes.io/projected/45d72a58-4072-4c37-95c8-b4668060c64c-kube-api-access-l889q\") pod \"certified-operators-89lkz\" (UID: \"45d72a58-4072-4c37-95c8-b4668060c64c\") " pod="openshift-marketplace/certified-operators-89lkz" Jan 12 13:11:48 crc kubenswrapper[4580]: I0112 13:11:48.525182 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-89lkz" Jan 12 13:11:48 crc kubenswrapper[4580]: I0112 13:11:48.565183 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52363f5a-5d4c-406b-bd57-cbde5f393c2c-utilities\") pod \"community-operators-rcbw9\" (UID: \"52363f5a-5d4c-406b-bd57-cbde5f393c2c\") " pod="openshift-marketplace/community-operators-rcbw9" Jan 12 13:11:48 crc kubenswrapper[4580]: I0112 13:11:48.565223 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xc4ph\" (UniqueName: \"kubernetes.io/projected/52363f5a-5d4c-406b-bd57-cbde5f393c2c-kube-api-access-xc4ph\") pod \"community-operators-rcbw9\" (UID: \"52363f5a-5d4c-406b-bd57-cbde5f393c2c\") " pod="openshift-marketplace/community-operators-rcbw9" Jan 12 13:11:48 crc kubenswrapper[4580]: I0112 13:11:48.565280 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52363f5a-5d4c-406b-bd57-cbde5f393c2c-catalog-content\") pod \"community-operators-rcbw9\" (UID: \"52363f5a-5d4c-406b-bd57-cbde5f393c2c\") " pod="openshift-marketplace/community-operators-rcbw9" Jan 12 13:11:48 crc kubenswrapper[4580]: I0112 13:11:48.666438 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52363f5a-5d4c-406b-bd57-cbde5f393c2c-catalog-content\") pod \"community-operators-rcbw9\" (UID: \"52363f5a-5d4c-406b-bd57-cbde5f393c2c\") " pod="openshift-marketplace/community-operators-rcbw9" Jan 12 13:11:48 crc kubenswrapper[4580]: I0112 13:11:48.666728 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52363f5a-5d4c-406b-bd57-cbde5f393c2c-utilities\") pod 
\"community-operators-rcbw9\" (UID: \"52363f5a-5d4c-406b-bd57-cbde5f393c2c\") " pod="openshift-marketplace/community-operators-rcbw9" Jan 12 13:11:48 crc kubenswrapper[4580]: I0112 13:11:48.666747 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xc4ph\" (UniqueName: \"kubernetes.io/projected/52363f5a-5d4c-406b-bd57-cbde5f393c2c-kube-api-access-xc4ph\") pod \"community-operators-rcbw9\" (UID: \"52363f5a-5d4c-406b-bd57-cbde5f393c2c\") " pod="openshift-marketplace/community-operators-rcbw9" Jan 12 13:11:48 crc kubenswrapper[4580]: I0112 13:11:48.667564 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52363f5a-5d4c-406b-bd57-cbde5f393c2c-utilities\") pod \"community-operators-rcbw9\" (UID: \"52363f5a-5d4c-406b-bd57-cbde5f393c2c\") " pod="openshift-marketplace/community-operators-rcbw9" Jan 12 13:11:48 crc kubenswrapper[4580]: I0112 13:11:48.667809 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52363f5a-5d4c-406b-bd57-cbde5f393c2c-catalog-content\") pod \"community-operators-rcbw9\" (UID: \"52363f5a-5d4c-406b-bd57-cbde5f393c2c\") " pod="openshift-marketplace/community-operators-rcbw9" Jan 12 13:11:48 crc kubenswrapper[4580]: I0112 13:11:48.685563 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xc4ph\" (UniqueName: \"kubernetes.io/projected/52363f5a-5d4c-406b-bd57-cbde5f393c2c-kube-api-access-xc4ph\") pod \"community-operators-rcbw9\" (UID: \"52363f5a-5d4c-406b-bd57-cbde5f393c2c\") " pod="openshift-marketplace/community-operators-rcbw9" Jan 12 13:11:48 crc kubenswrapper[4580]: I0112 13:11:48.717299 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rcbw9" Jan 12 13:11:48 crc kubenswrapper[4580]: I0112 13:11:48.905657 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-89lkz"] Jan 12 13:11:49 crc kubenswrapper[4580]: I0112 13:11:49.071794 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rcbw9"] Jan 12 13:11:49 crc kubenswrapper[4580]: W0112 13:11:49.077533 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52363f5a_5d4c_406b_bd57_cbde5f393c2c.slice/crio-d5566b86a14b8ce2bbb4e9d528a09ca3b6e906d53302e3750bead70b20a0e551 WatchSource:0}: Error finding container d5566b86a14b8ce2bbb4e9d528a09ca3b6e906d53302e3750bead70b20a0e551: Status 404 returned error can't find the container with id d5566b86a14b8ce2bbb4e9d528a09ca3b6e906d53302e3750bead70b20a0e551 Jan 12 13:11:49 crc kubenswrapper[4580]: I0112 13:11:49.619212 4580 generic.go:334] "Generic (PLEG): container finished" podID="45d72a58-4072-4c37-95c8-b4668060c64c" containerID="3de2a67a2a50c3e0de3ba46777198637b2094ddeb87546a6afd8ae66cd3f1375" exitCode=0 Jan 12 13:11:49 crc kubenswrapper[4580]: I0112 13:11:49.619308 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-89lkz" event={"ID":"45d72a58-4072-4c37-95c8-b4668060c64c","Type":"ContainerDied","Data":"3de2a67a2a50c3e0de3ba46777198637b2094ddeb87546a6afd8ae66cd3f1375"} Jan 12 13:11:49 crc kubenswrapper[4580]: I0112 13:11:49.619349 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-89lkz" event={"ID":"45d72a58-4072-4c37-95c8-b4668060c64c","Type":"ContainerStarted","Data":"173e08a83c603c19b6eddf9f980a6d85537df351a311e0ceef3f6ac705f060a2"} Jan 12 13:11:49 crc kubenswrapper[4580]: I0112 13:11:49.625276 4580 generic.go:334] "Generic (PLEG): container finished" 
podID="52363f5a-5d4c-406b-bd57-cbde5f393c2c" containerID="de7519fe18a54e17af885a49a2a4fb79691a640b99e8ad6a162170eeaf828431" exitCode=0 Jan 12 13:11:49 crc kubenswrapper[4580]: I0112 13:11:49.625347 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rcbw9" event={"ID":"52363f5a-5d4c-406b-bd57-cbde5f393c2c","Type":"ContainerDied","Data":"de7519fe18a54e17af885a49a2a4fb79691a640b99e8ad6a162170eeaf828431"} Jan 12 13:11:49 crc kubenswrapper[4580]: I0112 13:11:49.625530 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rcbw9" event={"ID":"52363f5a-5d4c-406b-bd57-cbde5f393c2c","Type":"ContainerStarted","Data":"d5566b86a14b8ce2bbb4e9d528a09ca3b6e906d53302e3750bead70b20a0e551"} Jan 12 13:11:50 crc kubenswrapper[4580]: I0112 13:11:50.632446 4580 generic.go:334] "Generic (PLEG): container finished" podID="45d72a58-4072-4c37-95c8-b4668060c64c" containerID="822049f407ba06d0df896e75cd4004b5054c7fe1afe241123be1a0bc39ecc344" exitCode=0 Jan 12 13:11:50 crc kubenswrapper[4580]: I0112 13:11:50.632514 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-89lkz" event={"ID":"45d72a58-4072-4c37-95c8-b4668060c64c","Type":"ContainerDied","Data":"822049f407ba06d0df896e75cd4004b5054c7fe1afe241123be1a0bc39ecc344"} Jan 12 13:11:51 crc kubenswrapper[4580]: I0112 13:11:51.638974 4580 generic.go:334] "Generic (PLEG): container finished" podID="52363f5a-5d4c-406b-bd57-cbde5f393c2c" containerID="a0b42c0b85ba753924c9ef80c147b2793ed97f656c653c47e4a1aa725afdcf19" exitCode=0 Jan 12 13:11:51 crc kubenswrapper[4580]: I0112 13:11:51.639029 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rcbw9" event={"ID":"52363f5a-5d4c-406b-bd57-cbde5f393c2c","Type":"ContainerDied","Data":"a0b42c0b85ba753924c9ef80c147b2793ed97f656c653c47e4a1aa725afdcf19"} Jan 12 13:11:51 crc kubenswrapper[4580]: I0112 13:11:51.650347 
4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-89lkz" event={"ID":"45d72a58-4072-4c37-95c8-b4668060c64c","Type":"ContainerStarted","Data":"84df15359fe99d455468e8451feb7e62808fb49b4796131299a84d0bf351fbba"}
Jan 12 13:11:51 crc kubenswrapper[4580]: I0112 13:11:51.679583 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-89lkz" podStartSLOduration=2.084602727 podStartE2EDuration="3.67956534s" podCreationTimestamp="2026-01-12 13:11:48 +0000 UTC" firstStartedPulling="2026-01-12 13:11:49.622775899 +0000 UTC m=+308.666994589" lastFinishedPulling="2026-01-12 13:11:51.217738513 +0000 UTC m=+310.261957202" observedRunningTime="2026-01-12 13:11:51.67873053 +0000 UTC m=+310.722949220" watchObservedRunningTime="2026-01-12 13:11:51.67956534 +0000 UTC m=+310.723784029"
Jan 12 13:11:52 crc kubenswrapper[4580]: I0112 13:11:52.659064 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rcbw9" event={"ID":"52363f5a-5d4c-406b-bd57-cbde5f393c2c","Type":"ContainerStarted","Data":"e09d59b6f72e311b0e77242dc6642beafe841a4f2235521d13f107321185d4f7"}
Jan 12 13:11:58 crc kubenswrapper[4580]: I0112 13:11:58.525372 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-89lkz"
Jan 12 13:11:58 crc kubenswrapper[4580]: I0112 13:11:58.526059 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-89lkz"
Jan 12 13:11:58 crc kubenswrapper[4580]: I0112 13:11:58.567875 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-89lkz"
Jan 12 13:11:58 crc kubenswrapper[4580]: I0112 13:11:58.587675 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rcbw9" podStartSLOduration=8.013787702 podStartE2EDuration="10.587664943s" podCreationTimestamp="2026-01-12 13:11:48 +0000 UTC" firstStartedPulling="2026-01-12 13:11:49.626803332 +0000 UTC m=+308.671022022" lastFinishedPulling="2026-01-12 13:11:52.200680574 +0000 UTC m=+311.244899263" observedRunningTime="2026-01-12 13:11:52.67473366 +0000 UTC m=+311.718952350" watchObservedRunningTime="2026-01-12 13:11:58.587664943 +0000 UTC m=+317.631883633"
Jan 12 13:11:58 crc kubenswrapper[4580]: I0112 13:11:58.718151 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rcbw9"
Jan 12 13:11:58 crc kubenswrapper[4580]: I0112 13:11:58.718202 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rcbw9"
Jan 12 13:11:58 crc kubenswrapper[4580]: I0112 13:11:58.743958 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-89lkz"
Jan 12 13:11:58 crc kubenswrapper[4580]: I0112 13:11:58.752982 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rcbw9"
Jan 12 13:11:59 crc kubenswrapper[4580]: I0112 13:11:59.735867 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rcbw9"
Jan 12 13:12:38 crc kubenswrapper[4580]: I0112 13:12:38.138575 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-csq87"]
Jan 12 13:12:38 crc kubenswrapper[4580]: I0112 13:12:38.139625 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-csq87"
Jan 12 13:12:38 crc kubenswrapper[4580]: I0112 13:12:38.146642 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-csq87"]
Jan 12 13:12:38 crc kubenswrapper[4580]: I0112 13:12:38.327654 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7b9bdb6d-bd22-4249-969c-06c8ef0c9a84-ca-trust-extracted\") pod \"image-registry-66df7c8f76-csq87\" (UID: \"7b9bdb6d-bd22-4249-969c-06c8ef0c9a84\") " pod="openshift-image-registry/image-registry-66df7c8f76-csq87"
Jan 12 13:12:38 crc kubenswrapper[4580]: I0112 13:12:38.327705 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7b9bdb6d-bd22-4249-969c-06c8ef0c9a84-bound-sa-token\") pod \"image-registry-66df7c8f76-csq87\" (UID: \"7b9bdb6d-bd22-4249-969c-06c8ef0c9a84\") " pod="openshift-image-registry/image-registry-66df7c8f76-csq87"
Jan 12 13:12:38 crc kubenswrapper[4580]: I0112 13:12:38.327724 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7b9bdb6d-bd22-4249-969c-06c8ef0c9a84-installation-pull-secrets\") pod \"image-registry-66df7c8f76-csq87\" (UID: \"7b9bdb6d-bd22-4249-969c-06c8ef0c9a84\") " pod="openshift-image-registry/image-registry-66df7c8f76-csq87"
Jan 12 13:12:38 crc kubenswrapper[4580]: I0112 13:12:38.327760 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7b9bdb6d-bd22-4249-969c-06c8ef0c9a84-registry-certificates\") pod \"image-registry-66df7c8f76-csq87\" (UID: \"7b9bdb6d-bd22-4249-969c-06c8ef0c9a84\") " pod="openshift-image-registry/image-registry-66df7c8f76-csq87"
Jan 12 13:12:38 crc kubenswrapper[4580]: I0112 13:12:38.327825 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-csq87\" (UID: \"7b9bdb6d-bd22-4249-969c-06c8ef0c9a84\") " pod="openshift-image-registry/image-registry-66df7c8f76-csq87"
Jan 12 13:12:38 crc kubenswrapper[4580]: I0112 13:12:38.327973 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b9bdb6d-bd22-4249-969c-06c8ef0c9a84-trusted-ca\") pod \"image-registry-66df7c8f76-csq87\" (UID: \"7b9bdb6d-bd22-4249-969c-06c8ef0c9a84\") " pod="openshift-image-registry/image-registry-66df7c8f76-csq87"
Jan 12 13:12:38 crc kubenswrapper[4580]: I0112 13:12:38.328265 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7b9bdb6d-bd22-4249-969c-06c8ef0c9a84-registry-tls\") pod \"image-registry-66df7c8f76-csq87\" (UID: \"7b9bdb6d-bd22-4249-969c-06c8ef0c9a84\") " pod="openshift-image-registry/image-registry-66df7c8f76-csq87"
Jan 12 13:12:38 crc kubenswrapper[4580]: I0112 13:12:38.328362 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmphz\" (UniqueName: \"kubernetes.io/projected/7b9bdb6d-bd22-4249-969c-06c8ef0c9a84-kube-api-access-nmphz\") pod \"image-registry-66df7c8f76-csq87\" (UID: \"7b9bdb6d-bd22-4249-969c-06c8ef0c9a84\") " pod="openshift-image-registry/image-registry-66df7c8f76-csq87"
Jan 12 13:12:38 crc kubenswrapper[4580]: I0112 13:12:38.344665 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-csq87\" (UID: \"7b9bdb6d-bd22-4249-969c-06c8ef0c9a84\") " pod="openshift-image-registry/image-registry-66df7c8f76-csq87"
Jan 12 13:12:38 crc kubenswrapper[4580]: I0112 13:12:38.429242 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7b9bdb6d-bd22-4249-969c-06c8ef0c9a84-registry-tls\") pod \"image-registry-66df7c8f76-csq87\" (UID: \"7b9bdb6d-bd22-4249-969c-06c8ef0c9a84\") " pod="openshift-image-registry/image-registry-66df7c8f76-csq87"
Jan 12 13:12:38 crc kubenswrapper[4580]: I0112 13:12:38.429288 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmphz\" (UniqueName: \"kubernetes.io/projected/7b9bdb6d-bd22-4249-969c-06c8ef0c9a84-kube-api-access-nmphz\") pod \"image-registry-66df7c8f76-csq87\" (UID: \"7b9bdb6d-bd22-4249-969c-06c8ef0c9a84\") " pod="openshift-image-registry/image-registry-66df7c8f76-csq87"
Jan 12 13:12:38 crc kubenswrapper[4580]: I0112 13:12:38.429337 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7b9bdb6d-bd22-4249-969c-06c8ef0c9a84-ca-trust-extracted\") pod \"image-registry-66df7c8f76-csq87\" (UID: \"7b9bdb6d-bd22-4249-969c-06c8ef0c9a84\") " pod="openshift-image-registry/image-registry-66df7c8f76-csq87"
Jan 12 13:12:38 crc kubenswrapper[4580]: I0112 13:12:38.429362 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7b9bdb6d-bd22-4249-969c-06c8ef0c9a84-bound-sa-token\") pod \"image-registry-66df7c8f76-csq87\" (UID: \"7b9bdb6d-bd22-4249-969c-06c8ef0c9a84\") " pod="openshift-image-registry/image-registry-66df7c8f76-csq87"
Jan 12 13:12:38 crc kubenswrapper[4580]: I0112 13:12:38.429382 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7b9bdb6d-bd22-4249-969c-06c8ef0c9a84-installation-pull-secrets\") pod \"image-registry-66df7c8f76-csq87\" (UID: \"7b9bdb6d-bd22-4249-969c-06c8ef0c9a84\") " pod="openshift-image-registry/image-registry-66df7c8f76-csq87"
Jan 12 13:12:38 crc kubenswrapper[4580]: I0112 13:12:38.429758 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7b9bdb6d-bd22-4249-969c-06c8ef0c9a84-ca-trust-extracted\") pod \"image-registry-66df7c8f76-csq87\" (UID: \"7b9bdb6d-bd22-4249-969c-06c8ef0c9a84\") " pod="openshift-image-registry/image-registry-66df7c8f76-csq87"
Jan 12 13:12:38 crc kubenswrapper[4580]: I0112 13:12:38.429416 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7b9bdb6d-bd22-4249-969c-06c8ef0c9a84-registry-certificates\") pod \"image-registry-66df7c8f76-csq87\" (UID: \"7b9bdb6d-bd22-4249-969c-06c8ef0c9a84\") " pod="openshift-image-registry/image-registry-66df7c8f76-csq87"
Jan 12 13:12:38 crc kubenswrapper[4580]: I0112 13:12:38.429902 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b9bdb6d-bd22-4249-969c-06c8ef0c9a84-trusted-ca\") pod \"image-registry-66df7c8f76-csq87\" (UID: \"7b9bdb6d-bd22-4249-969c-06c8ef0c9a84\") " pod="openshift-image-registry/image-registry-66df7c8f76-csq87"
Jan 12 13:12:38 crc kubenswrapper[4580]: I0112 13:12:38.430668 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7b9bdb6d-bd22-4249-969c-06c8ef0c9a84-registry-certificates\") pod \"image-registry-66df7c8f76-csq87\" (UID: \"7b9bdb6d-bd22-4249-969c-06c8ef0c9a84\") " pod="openshift-image-registry/image-registry-66df7c8f76-csq87"
Jan 12 13:12:38 crc kubenswrapper[4580]: I0112 13:12:38.430918 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b9bdb6d-bd22-4249-969c-06c8ef0c9a84-trusted-ca\") pod \"image-registry-66df7c8f76-csq87\" (UID: \"7b9bdb6d-bd22-4249-969c-06c8ef0c9a84\") " pod="openshift-image-registry/image-registry-66df7c8f76-csq87"
Jan 12 13:12:38 crc kubenswrapper[4580]: I0112 13:12:38.434680 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7b9bdb6d-bd22-4249-969c-06c8ef0c9a84-installation-pull-secrets\") pod \"image-registry-66df7c8f76-csq87\" (UID: \"7b9bdb6d-bd22-4249-969c-06c8ef0c9a84\") " pod="openshift-image-registry/image-registry-66df7c8f76-csq87"
Jan 12 13:12:38 crc kubenswrapper[4580]: I0112 13:12:38.434717 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7b9bdb6d-bd22-4249-969c-06c8ef0c9a84-registry-tls\") pod \"image-registry-66df7c8f76-csq87\" (UID: \"7b9bdb6d-bd22-4249-969c-06c8ef0c9a84\") " pod="openshift-image-registry/image-registry-66df7c8f76-csq87"
Jan 12 13:12:38 crc kubenswrapper[4580]: I0112 13:12:38.442656 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7b9bdb6d-bd22-4249-969c-06c8ef0c9a84-bound-sa-token\") pod \"image-registry-66df7c8f76-csq87\" (UID: \"7b9bdb6d-bd22-4249-969c-06c8ef0c9a84\") " pod="openshift-image-registry/image-registry-66df7c8f76-csq87"
Jan 12 13:12:38 crc kubenswrapper[4580]: I0112 13:12:38.442891 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmphz\" (UniqueName: \"kubernetes.io/projected/7b9bdb6d-bd22-4249-969c-06c8ef0c9a84-kube-api-access-nmphz\") pod \"image-registry-66df7c8f76-csq87\" (UID: \"7b9bdb6d-bd22-4249-969c-06c8ef0c9a84\") " pod="openshift-image-registry/image-registry-66df7c8f76-csq87"
Jan 12 13:12:38 crc kubenswrapper[4580]: I0112 13:12:38.452513 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-csq87"
Jan 12 13:12:38 crc kubenswrapper[4580]: I0112 13:12:38.601171 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-csq87"]
Jan 12 13:12:38 crc kubenswrapper[4580]: I0112 13:12:38.899916 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-csq87" event={"ID":"7b9bdb6d-bd22-4249-969c-06c8ef0c9a84","Type":"ContainerStarted","Data":"c1349cf9e9354d34f4f2297cd565c438f46651b08f8022a5131ab545fd78a58d"}
Jan 12 13:12:38 crc kubenswrapper[4580]: I0112 13:12:38.900294 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-csq87" event={"ID":"7b9bdb6d-bd22-4249-969c-06c8ef0c9a84","Type":"ContainerStarted","Data":"958294582ead479cfbb1aa441f2b353093b6a1908a2e5aaec900963562f4a7a0"}
Jan 12 13:12:38 crc kubenswrapper[4580]: I0112 13:12:38.900315 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-csq87"
Jan 12 13:12:38 crc kubenswrapper[4580]: I0112 13:12:38.916353 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-csq87" podStartSLOduration=0.916334743 podStartE2EDuration="916.334743ms" podCreationTimestamp="2026-01-12 13:12:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:12:38.913878431 +0000 UTC m=+357.958097121" watchObservedRunningTime="2026-01-12 13:12:38.916334743 +0000 UTC m=+357.960553433"
Jan 12 13:12:46 crc kubenswrapper[4580]: I0112 13:12:46.949672 4580 patch_prober.go:28] interesting pod/machine-config-daemon-hdz6l container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 12 13:12:46 crc kubenswrapper[4580]: I0112 13:12:46.951296 4580 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 12 13:12:58 crc kubenswrapper[4580]: I0112 13:12:58.458481 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-csq87"
Jan 12 13:12:58 crc kubenswrapper[4580]: I0112 13:12:58.500607 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hxkcl"]
Jan 12 13:13:16 crc kubenswrapper[4580]: I0112 13:13:16.949352 4580 patch_prober.go:28] interesting pod/machine-config-daemon-hdz6l container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 12 13:13:16 crc kubenswrapper[4580]: I0112 13:13:16.950048 4580 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 12 13:13:23 crc kubenswrapper[4580]: I0112 13:13:23.532241 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl" podUID="cd2ced26-b320-44a3-aa98-457376b3d8c8" containerName="registry" containerID="cri-o://090039f9faae895dde1c73db47570c9ca780dcb20aae0ee6df9d21fa42910af0" gracePeriod=30
Jan 12 13:13:23 crc kubenswrapper[4580]: I0112 13:13:23.843826 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl"
Jan 12 13:13:23 crc kubenswrapper[4580]: I0112 13:13:23.957371 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cd2ced26-b320-44a3-aa98-457376b3d8c8-installation-pull-secrets\") pod \"cd2ced26-b320-44a3-aa98-457376b3d8c8\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") "
Jan 12 13:13:23 crc kubenswrapper[4580]: I0112 13:13:23.957688 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cd2ced26-b320-44a3-aa98-457376b3d8c8-bound-sa-token\") pod \"cd2ced26-b320-44a3-aa98-457376b3d8c8\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") "
Jan 12 13:13:23 crc kubenswrapper[4580]: I0112 13:13:23.957764 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hf7dd\" (UniqueName: \"kubernetes.io/projected/cd2ced26-b320-44a3-aa98-457376b3d8c8-kube-api-access-hf7dd\") pod \"cd2ced26-b320-44a3-aa98-457376b3d8c8\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") "
Jan 12 13:13:23 crc kubenswrapper[4580]: I0112 13:13:23.957790 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cd2ced26-b320-44a3-aa98-457376b3d8c8-registry-certificates\") pod \"cd2ced26-b320-44a3-aa98-457376b3d8c8\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") "
Jan 12 13:13:23 crc kubenswrapper[4580]: I0112 13:13:23.957844 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cd2ced26-b320-44a3-aa98-457376b3d8c8-trusted-ca\") pod \"cd2ced26-b320-44a3-aa98-457376b3d8c8\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") "
Jan 12 13:13:23 crc kubenswrapper[4580]: I0112 13:13:23.957956 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"cd2ced26-b320-44a3-aa98-457376b3d8c8\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") "
Jan 12 13:13:23 crc kubenswrapper[4580]: I0112 13:13:23.958007 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cd2ced26-b320-44a3-aa98-457376b3d8c8-registry-tls\") pod \"cd2ced26-b320-44a3-aa98-457376b3d8c8\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") "
Jan 12 13:13:23 crc kubenswrapper[4580]: I0112 13:13:23.958033 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cd2ced26-b320-44a3-aa98-457376b3d8c8-ca-trust-extracted\") pod \"cd2ced26-b320-44a3-aa98-457376b3d8c8\" (UID: \"cd2ced26-b320-44a3-aa98-457376b3d8c8\") "
Jan 12 13:13:23 crc kubenswrapper[4580]: I0112 13:13:23.958759 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd2ced26-b320-44a3-aa98-457376b3d8c8-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "cd2ced26-b320-44a3-aa98-457376b3d8c8" (UID: "cd2ced26-b320-44a3-aa98-457376b3d8c8"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:13:23 crc kubenswrapper[4580]: I0112 13:13:23.958933 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd2ced26-b320-44a3-aa98-457376b3d8c8-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "cd2ced26-b320-44a3-aa98-457376b3d8c8" (UID: "cd2ced26-b320-44a3-aa98-457376b3d8c8"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:13:23 crc kubenswrapper[4580]: I0112 13:13:23.962608 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd2ced26-b320-44a3-aa98-457376b3d8c8-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "cd2ced26-b320-44a3-aa98-457376b3d8c8" (UID: "cd2ced26-b320-44a3-aa98-457376b3d8c8"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:13:23 crc kubenswrapper[4580]: I0112 13:13:23.962696 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd2ced26-b320-44a3-aa98-457376b3d8c8-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "cd2ced26-b320-44a3-aa98-457376b3d8c8" (UID: "cd2ced26-b320-44a3-aa98-457376b3d8c8"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 13:13:23 crc kubenswrapper[4580]: I0112 13:13:23.962948 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd2ced26-b320-44a3-aa98-457376b3d8c8-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "cd2ced26-b320-44a3-aa98-457376b3d8c8" (UID: "cd2ced26-b320-44a3-aa98-457376b3d8c8"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 13:13:23 crc kubenswrapper[4580]: I0112 13:13:23.963212 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd2ced26-b320-44a3-aa98-457376b3d8c8-kube-api-access-hf7dd" (OuterVolumeSpecName: "kube-api-access-hf7dd") pod "cd2ced26-b320-44a3-aa98-457376b3d8c8" (UID: "cd2ced26-b320-44a3-aa98-457376b3d8c8"). InnerVolumeSpecName "kube-api-access-hf7dd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 13:13:23 crc kubenswrapper[4580]: I0112 13:13:23.966590 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "cd2ced26-b320-44a3-aa98-457376b3d8c8" (UID: "cd2ced26-b320-44a3-aa98-457376b3d8c8"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 12 13:13:23 crc kubenswrapper[4580]: I0112 13:13:23.974434 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd2ced26-b320-44a3-aa98-457376b3d8c8-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "cd2ced26-b320-44a3-aa98-457376b3d8c8" (UID: "cd2ced26-b320-44a3-aa98-457376b3d8c8"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 12 13:13:24 crc kubenswrapper[4580]: I0112 13:13:24.060199 4580 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cd2ced26-b320-44a3-aa98-457376b3d8c8-registry-tls\") on node \"crc\" DevicePath \"\""
Jan 12 13:13:24 crc kubenswrapper[4580]: I0112 13:13:24.060233 4580 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cd2ced26-b320-44a3-aa98-457376b3d8c8-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Jan 12 13:13:24 crc kubenswrapper[4580]: I0112 13:13:24.060248 4580 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cd2ced26-b320-44a3-aa98-457376b3d8c8-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Jan 12 13:13:24 crc kubenswrapper[4580]: I0112 13:13:24.060264 4580 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cd2ced26-b320-44a3-aa98-457376b3d8c8-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 12 13:13:24 crc kubenswrapper[4580]: I0112 13:13:24.060279 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hf7dd\" (UniqueName: \"kubernetes.io/projected/cd2ced26-b320-44a3-aa98-457376b3d8c8-kube-api-access-hf7dd\") on node \"crc\" DevicePath \"\""
Jan 12 13:13:24 crc kubenswrapper[4580]: I0112 13:13:24.060291 4580 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cd2ced26-b320-44a3-aa98-457376b3d8c8-registry-certificates\") on node \"crc\" DevicePath \"\""
Jan 12 13:13:24 crc kubenswrapper[4580]: I0112 13:13:24.060307 4580 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cd2ced26-b320-44a3-aa98-457376b3d8c8-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 12 13:13:24 crc kubenswrapper[4580]: I0112 13:13:24.147892 4580 generic.go:334] "Generic (PLEG): container finished" podID="cd2ced26-b320-44a3-aa98-457376b3d8c8" containerID="090039f9faae895dde1c73db47570c9ca780dcb20aae0ee6df9d21fa42910af0" exitCode=0
Jan 12 13:13:24 crc kubenswrapper[4580]: I0112 13:13:24.147954 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl" event={"ID":"cd2ced26-b320-44a3-aa98-457376b3d8c8","Type":"ContainerDied","Data":"090039f9faae895dde1c73db47570c9ca780dcb20aae0ee6df9d21fa42910af0"}
Jan 12 13:13:24 crc kubenswrapper[4580]: I0112 13:13:24.147968 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl"
Jan 12 13:13:24 crc kubenswrapper[4580]: I0112 13:13:24.148004 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl" event={"ID":"cd2ced26-b320-44a3-aa98-457376b3d8c8","Type":"ContainerDied","Data":"3f4384b8b149ea29b9f07d99c24a77c90f4e8950807fc0b58fbdef509d95a1df"}
Jan 12 13:13:24 crc kubenswrapper[4580]: I0112 13:13:24.148030 4580 scope.go:117] "RemoveContainer" containerID="090039f9faae895dde1c73db47570c9ca780dcb20aae0ee6df9d21fa42910af0"
Jan 12 13:13:24 crc kubenswrapper[4580]: I0112 13:13:24.170959 4580 scope.go:117] "RemoveContainer" containerID="090039f9faae895dde1c73db47570c9ca780dcb20aae0ee6df9d21fa42910af0"
Jan 12 13:13:24 crc kubenswrapper[4580]: E0112 13:13:24.171447 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"090039f9faae895dde1c73db47570c9ca780dcb20aae0ee6df9d21fa42910af0\": container with ID starting with 090039f9faae895dde1c73db47570c9ca780dcb20aae0ee6df9d21fa42910af0 not found: ID does not exist" containerID="090039f9faae895dde1c73db47570c9ca780dcb20aae0ee6df9d21fa42910af0"
Jan 12 13:13:24 crc kubenswrapper[4580]: I0112 13:13:24.171489 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"090039f9faae895dde1c73db47570c9ca780dcb20aae0ee6df9d21fa42910af0"} err="failed to get container status \"090039f9faae895dde1c73db47570c9ca780dcb20aae0ee6df9d21fa42910af0\": rpc error: code = NotFound desc = could not find container \"090039f9faae895dde1c73db47570c9ca780dcb20aae0ee6df9d21fa42910af0\": container with ID starting with 090039f9faae895dde1c73db47570c9ca780dcb20aae0ee6df9d21fa42910af0 not found: ID does not exist"
Jan 12 13:13:24 crc kubenswrapper[4580]: I0112 13:13:24.178750 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hxkcl"]
Jan 12 13:13:24 crc kubenswrapper[4580]: I0112 13:13:24.182750 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hxkcl"]
Jan 12 13:13:25 crc kubenswrapper[4580]: I0112 13:13:25.287843 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd2ced26-b320-44a3-aa98-457376b3d8c8" path="/var/lib/kubelet/pods/cd2ced26-b320-44a3-aa98-457376b3d8c8/volumes"
Jan 12 13:13:28 crc kubenswrapper[4580]: I0112 13:13:28.820746 4580 patch_prober.go:28] interesting pod/image-registry-697d97f7c8-hxkcl container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.10:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 12 13:13:28 crc kubenswrapper[4580]: I0112 13:13:28.821158 4580 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-697d97f7c8-hxkcl" podUID="cd2ced26-b320-44a3-aa98-457376b3d8c8" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.10:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 12 13:13:41 crc kubenswrapper[4580]: I0112 13:13:41.432969 4580 scope.go:117] "RemoveContainer" containerID="9eeac0b697ceba82e51d043f12dcf4c6f0028990416b1ee40c5181232d962192"
Jan 12 13:13:41 crc kubenswrapper[4580]: I0112 13:13:41.449205 4580 scope.go:117] "RemoveContainer" containerID="d3c620e4b41d6183e427d9b95acc0e6e20f24998d210c706d93d0e8b08def41b"
Jan 12 13:13:41 crc kubenswrapper[4580]: I0112 13:13:41.457969 4580 scope.go:117] "RemoveContainer" containerID="80ca0769a1431fd4c134322feb11db7e54dd85e8f6b18a0ea43da48fe9b05005"
Jan 12 13:13:41 crc kubenswrapper[4580]: I0112 13:13:41.469856 4580 scope.go:117] "RemoveContainer" containerID="05c5ad3ad752dde0d33f89e89540f22790aa2905185c704d407fe605655c8e28"
Jan 12 13:13:41 crc kubenswrapper[4580]: I0112 13:13:41.485079 4580 scope.go:117] "RemoveContainer" containerID="e2262814ad3b77a7aecef6dc39226a540c7d7839576606e11c4765c858e81834"
Jan 12 13:13:41 crc kubenswrapper[4580]: I0112 13:13:41.502278 4580 scope.go:117] "RemoveContainer" containerID="0a083c6f95d2564159d73396bad6a96aee45aed4d495020b3b54f220a9fd4e23"
Jan 12 13:13:46 crc kubenswrapper[4580]: I0112 13:13:46.949056 4580 patch_prober.go:28] interesting pod/machine-config-daemon-hdz6l container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 12 13:13:46 crc kubenswrapper[4580]: I0112 13:13:46.949484 4580 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 12 13:13:46 crc kubenswrapper[4580]: I0112 13:13:46.949546 4580 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l"
Jan 12 13:13:46 crc kubenswrapper[4580]: I0112 13:13:46.950091 4580 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1689fbe54ea63924eb5436687ff3624dfc8f05694ffc76352754b1bc5a4e1401"} pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 12 13:13:46 crc kubenswrapper[4580]: I0112 13:13:46.950173 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" containerName="machine-config-daemon" containerID="cri-o://1689fbe54ea63924eb5436687ff3624dfc8f05694ffc76352754b1bc5a4e1401" gracePeriod=600
Jan 12 13:13:47 crc kubenswrapper[4580]: I0112 13:13:47.274390 4580 generic.go:334] "Generic (PLEG): container finished" podID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" containerID="1689fbe54ea63924eb5436687ff3624dfc8f05694ffc76352754b1bc5a4e1401" exitCode=0
Jan 12 13:13:47 crc kubenswrapper[4580]: I0112 13:13:47.274464 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" event={"ID":"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b","Type":"ContainerDied","Data":"1689fbe54ea63924eb5436687ff3624dfc8f05694ffc76352754b1bc5a4e1401"}
Jan 12 13:13:47 crc kubenswrapper[4580]: I0112 13:13:47.274876 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" event={"ID":"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b","Type":"ContainerStarted","Data":"4e7364093541422d6527d483a8a4570a7b048dfd23774d35d5dc7c8fcdefe657"}
Jan 12 13:13:47 crc kubenswrapper[4580]: I0112 13:13:47.274916 4580 scope.go:117] "RemoveContainer" containerID="60b7e67369583f18d56633483204d326449c0f7456afe4b4fd1e7134eff438cb"
Jan 12 13:15:00 crc kubenswrapper[4580]: I0112 13:15:00.155368 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29470395-cvc22"]
Jan 12 13:15:00 crc kubenswrapper[4580]: E0112 13:15:00.156892 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd2ced26-b320-44a3-aa98-457376b3d8c8" containerName="registry"
Jan 12 13:15:00 crc kubenswrapper[4580]: I0112 13:15:00.156918 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd2ced26-b320-44a3-aa98-457376b3d8c8" containerName="registry"
Jan 12 13:15:00 crc kubenswrapper[4580]: I0112 13:15:00.157028 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd2ced26-b320-44a3-aa98-457376b3d8c8" containerName="registry"
Jan 12 13:15:00 crc kubenswrapper[4580]: I0112 13:15:00.157579 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29470395-cvc22"
Jan 12 13:15:00 crc kubenswrapper[4580]: I0112 13:15:00.159028 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 12 13:15:00 crc kubenswrapper[4580]: I0112 13:15:00.159704 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 12 13:15:00 crc kubenswrapper[4580]: I0112 13:15:00.164630 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29470395-cvc22"]
Jan 12 13:15:00 crc kubenswrapper[4580]: I0112 13:15:00.253776 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/04066069-62ab-4458-8b6b-620f8bc9ed91-secret-volume\") pod \"collect-profiles-29470395-cvc22\" (UID: \"04066069-62ab-4458-8b6b-620f8bc9ed91\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29470395-cvc22"
Jan 12 13:15:00 crc kubenswrapper[4580]: I0112 13:15:00.254005 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04066069-62ab-4458-8b6b-620f8bc9ed91-config-volume\") pod \"collect-profiles-29470395-cvc22\" (UID: \"04066069-62ab-4458-8b6b-620f8bc9ed91\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29470395-cvc22"
Jan 12 13:15:00 crc kubenswrapper[4580]: I0112 13:15:00.254221 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cb8ml\" (UniqueName: \"kubernetes.io/projected/04066069-62ab-4458-8b6b-620f8bc9ed91-kube-api-access-cb8ml\") pod \"collect-profiles-29470395-cvc22\" (UID: \"04066069-62ab-4458-8b6b-620f8bc9ed91\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29470395-cvc22"
Jan 12 13:15:00 crc kubenswrapper[4580]: I0112 13:15:00.355736 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cb8ml\" (UniqueName: \"kubernetes.io/projected/04066069-62ab-4458-8b6b-620f8bc9ed91-kube-api-access-cb8ml\") pod \"collect-profiles-29470395-cvc22\" (UID: \"04066069-62ab-4458-8b6b-620f8bc9ed91\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29470395-cvc22"
Jan 12 13:15:00 crc kubenswrapper[4580]: I0112 13:15:00.355794 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/04066069-62ab-4458-8b6b-620f8bc9ed91-secret-volume\") pod \"collect-profiles-29470395-cvc22\" (UID: \"04066069-62ab-4458-8b6b-620f8bc9ed91\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29470395-cvc22"
Jan 12 13:15:00 crc kubenswrapper[4580]: I0112 13:15:00.355816 4580 reconciler_common.go:218] "operationExecutor.MountVolume started
for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04066069-62ab-4458-8b6b-620f8bc9ed91-config-volume\") pod \"collect-profiles-29470395-cvc22\" (UID: \"04066069-62ab-4458-8b6b-620f8bc9ed91\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29470395-cvc22" Jan 12 13:15:00 crc kubenswrapper[4580]: I0112 13:15:00.356703 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04066069-62ab-4458-8b6b-620f8bc9ed91-config-volume\") pod \"collect-profiles-29470395-cvc22\" (UID: \"04066069-62ab-4458-8b6b-620f8bc9ed91\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29470395-cvc22" Jan 12 13:15:00 crc kubenswrapper[4580]: I0112 13:15:00.363146 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/04066069-62ab-4458-8b6b-620f8bc9ed91-secret-volume\") pod \"collect-profiles-29470395-cvc22\" (UID: \"04066069-62ab-4458-8b6b-620f8bc9ed91\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29470395-cvc22" Jan 12 13:15:00 crc kubenswrapper[4580]: I0112 13:15:00.370807 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cb8ml\" (UniqueName: \"kubernetes.io/projected/04066069-62ab-4458-8b6b-620f8bc9ed91-kube-api-access-cb8ml\") pod \"collect-profiles-29470395-cvc22\" (UID: \"04066069-62ab-4458-8b6b-620f8bc9ed91\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29470395-cvc22" Jan 12 13:15:00 crc kubenswrapper[4580]: I0112 13:15:00.471942 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29470395-cvc22" Jan 12 13:15:00 crc kubenswrapper[4580]: I0112 13:15:00.621510 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29470395-cvc22"] Jan 12 13:15:00 crc kubenswrapper[4580]: W0112 13:15:00.628700 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod04066069_62ab_4458_8b6b_620f8bc9ed91.slice/crio-4870ee6351ccc6038aafe28c28371476a9db6d4f21bb81f2cc04ab809ede7b68 WatchSource:0}: Error finding container 4870ee6351ccc6038aafe28c28371476a9db6d4f21bb81f2cc04ab809ede7b68: Status 404 returned error can't find the container with id 4870ee6351ccc6038aafe28c28371476a9db6d4f21bb81f2cc04ab809ede7b68 Jan 12 13:15:01 crc kubenswrapper[4580]: I0112 13:15:01.639301 4580 generic.go:334] "Generic (PLEG): container finished" podID="04066069-62ab-4458-8b6b-620f8bc9ed91" containerID="697d5d73b37c80dfb37525a935c1e98c6e2f2bbf20a5b0877c67a632a077406d" exitCode=0 Jan 12 13:15:01 crc kubenswrapper[4580]: I0112 13:15:01.639374 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29470395-cvc22" event={"ID":"04066069-62ab-4458-8b6b-620f8bc9ed91","Type":"ContainerDied","Data":"697d5d73b37c80dfb37525a935c1e98c6e2f2bbf20a5b0877c67a632a077406d"} Jan 12 13:15:01 crc kubenswrapper[4580]: I0112 13:15:01.639613 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29470395-cvc22" event={"ID":"04066069-62ab-4458-8b6b-620f8bc9ed91","Type":"ContainerStarted","Data":"4870ee6351ccc6038aafe28c28371476a9db6d4f21bb81f2cc04ab809ede7b68"} Jan 12 13:15:02 crc kubenswrapper[4580]: I0112 13:15:02.814650 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29470395-cvc22" Jan 12 13:15:02 crc kubenswrapper[4580]: I0112 13:15:02.986249 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cb8ml\" (UniqueName: \"kubernetes.io/projected/04066069-62ab-4458-8b6b-620f8bc9ed91-kube-api-access-cb8ml\") pod \"04066069-62ab-4458-8b6b-620f8bc9ed91\" (UID: \"04066069-62ab-4458-8b6b-620f8bc9ed91\") " Jan 12 13:15:02 crc kubenswrapper[4580]: I0112 13:15:02.986319 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04066069-62ab-4458-8b6b-620f8bc9ed91-config-volume\") pod \"04066069-62ab-4458-8b6b-620f8bc9ed91\" (UID: \"04066069-62ab-4458-8b6b-620f8bc9ed91\") " Jan 12 13:15:02 crc kubenswrapper[4580]: I0112 13:15:02.986359 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/04066069-62ab-4458-8b6b-620f8bc9ed91-secret-volume\") pod \"04066069-62ab-4458-8b6b-620f8bc9ed91\" (UID: \"04066069-62ab-4458-8b6b-620f8bc9ed91\") " Jan 12 13:15:02 crc kubenswrapper[4580]: I0112 13:15:02.987066 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04066069-62ab-4458-8b6b-620f8bc9ed91-config-volume" (OuterVolumeSpecName: "config-volume") pod "04066069-62ab-4458-8b6b-620f8bc9ed91" (UID: "04066069-62ab-4458-8b6b-620f8bc9ed91"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:15:02 crc kubenswrapper[4580]: I0112 13:15:02.991434 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04066069-62ab-4458-8b6b-620f8bc9ed91-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "04066069-62ab-4458-8b6b-620f8bc9ed91" (UID: "04066069-62ab-4458-8b6b-620f8bc9ed91"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:15:02 crc kubenswrapper[4580]: I0112 13:15:02.991975 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04066069-62ab-4458-8b6b-620f8bc9ed91-kube-api-access-cb8ml" (OuterVolumeSpecName: "kube-api-access-cb8ml") pod "04066069-62ab-4458-8b6b-620f8bc9ed91" (UID: "04066069-62ab-4458-8b6b-620f8bc9ed91"). InnerVolumeSpecName "kube-api-access-cb8ml". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:15:03 crc kubenswrapper[4580]: I0112 13:15:03.088476 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cb8ml\" (UniqueName: \"kubernetes.io/projected/04066069-62ab-4458-8b6b-620f8bc9ed91-kube-api-access-cb8ml\") on node \"crc\" DevicePath \"\"" Jan 12 13:15:03 crc kubenswrapper[4580]: I0112 13:15:03.088527 4580 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04066069-62ab-4458-8b6b-620f8bc9ed91-config-volume\") on node \"crc\" DevicePath \"\"" Jan 12 13:15:03 crc kubenswrapper[4580]: I0112 13:15:03.088539 4580 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/04066069-62ab-4458-8b6b-620f8bc9ed91-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 12 13:15:03 crc kubenswrapper[4580]: I0112 13:15:03.651529 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29470395-cvc22" event={"ID":"04066069-62ab-4458-8b6b-620f8bc9ed91","Type":"ContainerDied","Data":"4870ee6351ccc6038aafe28c28371476a9db6d4f21bb81f2cc04ab809ede7b68"} Jan 12 13:15:03 crc kubenswrapper[4580]: I0112 13:15:03.651593 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29470395-cvc22" Jan 12 13:15:03 crc kubenswrapper[4580]: I0112 13:15:03.651602 4580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4870ee6351ccc6038aafe28c28371476a9db6d4f21bb81f2cc04ab809ede7b68" Jan 12 13:15:14 crc kubenswrapper[4580]: I0112 13:15:14.998205 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-c9fsw"] Jan 12 13:15:14 crc kubenswrapper[4580]: E0112 13:15:14.998737 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04066069-62ab-4458-8b6b-620f8bc9ed91" containerName="collect-profiles" Jan 12 13:15:14 crc kubenswrapper[4580]: I0112 13:15:14.998752 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="04066069-62ab-4458-8b6b-620f8bc9ed91" containerName="collect-profiles" Jan 12 13:15:14 crc kubenswrapper[4580]: I0112 13:15:14.998857 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="04066069-62ab-4458-8b6b-620f8bc9ed91" containerName="collect-profiles" Jan 12 13:15:14 crc kubenswrapper[4580]: I0112 13:15:14.999300 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-c9fsw" Jan 12 13:15:14 crc kubenswrapper[4580]: I0112 13:15:14.999440 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-56nml"] Jan 12 13:15:15 crc kubenswrapper[4580]: I0112 13:15:15.000452 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-56nml" Jan 12 13:15:15 crc kubenswrapper[4580]: I0112 13:15:15.001240 4580 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-2bgtk" Jan 12 13:15:15 crc kubenswrapper[4580]: I0112 13:15:15.002283 4580 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-7h5m5" Jan 12 13:15:15 crc kubenswrapper[4580]: I0112 13:15:15.002497 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 12 13:15:15 crc kubenswrapper[4580]: I0112 13:15:15.007089 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 12 13:15:15 crc kubenswrapper[4580]: I0112 13:15:15.009298 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-dkts4"] Jan 12 13:15:15 crc kubenswrapper[4580]: I0112 13:15:15.009825 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-dkts4" Jan 12 13:15:15 crc kubenswrapper[4580]: I0112 13:15:15.010966 4580 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-vvcmf" Jan 12 13:15:15 crc kubenswrapper[4580]: I0112 13:15:15.014384 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-56nml"] Jan 12 13:15:15 crc kubenswrapper[4580]: I0112 13:15:15.017816 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-dkts4"] Jan 12 13:15:15 crc kubenswrapper[4580]: I0112 13:15:15.043233 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-c9fsw"] Jan 12 13:15:15 crc kubenswrapper[4580]: I0112 13:15:15.140717 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hz89z\" (UniqueName: \"kubernetes.io/projected/5ea31dc0-a9ca-4c74-b2aa-7999ef2b94f5-kube-api-access-hz89z\") pod \"cert-manager-cainjector-cf98fcc89-56nml\" (UID: \"5ea31dc0-a9ca-4c74-b2aa-7999ef2b94f5\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-56nml" Jan 12 13:15:15 crc kubenswrapper[4580]: I0112 13:15:15.140770 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7ml6\" (UniqueName: \"kubernetes.io/projected/56ef0925-27e4-4a8f-9a56-3e31c7176270-kube-api-access-j7ml6\") pod \"cert-manager-webhook-687f57d79b-dkts4\" (UID: \"56ef0925-27e4-4a8f-9a56-3e31c7176270\") " pod="cert-manager/cert-manager-webhook-687f57d79b-dkts4" Jan 12 13:15:15 crc kubenswrapper[4580]: I0112 13:15:15.140852 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z8dc\" (UniqueName: \"kubernetes.io/projected/e8e0f177-af2a-4975-a047-6d66bcd9b474-kube-api-access-8z8dc\") pod 
\"cert-manager-858654f9db-c9fsw\" (UID: \"e8e0f177-af2a-4975-a047-6d66bcd9b474\") " pod="cert-manager/cert-manager-858654f9db-c9fsw" Jan 12 13:15:15 crc kubenswrapper[4580]: I0112 13:15:15.242425 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hz89z\" (UniqueName: \"kubernetes.io/projected/5ea31dc0-a9ca-4c74-b2aa-7999ef2b94f5-kube-api-access-hz89z\") pod \"cert-manager-cainjector-cf98fcc89-56nml\" (UID: \"5ea31dc0-a9ca-4c74-b2aa-7999ef2b94f5\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-56nml" Jan 12 13:15:15 crc kubenswrapper[4580]: I0112 13:15:15.242487 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7ml6\" (UniqueName: \"kubernetes.io/projected/56ef0925-27e4-4a8f-9a56-3e31c7176270-kube-api-access-j7ml6\") pod \"cert-manager-webhook-687f57d79b-dkts4\" (UID: \"56ef0925-27e4-4a8f-9a56-3e31c7176270\") " pod="cert-manager/cert-manager-webhook-687f57d79b-dkts4" Jan 12 13:15:15 crc kubenswrapper[4580]: I0112 13:15:15.242528 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8z8dc\" (UniqueName: \"kubernetes.io/projected/e8e0f177-af2a-4975-a047-6d66bcd9b474-kube-api-access-8z8dc\") pod \"cert-manager-858654f9db-c9fsw\" (UID: \"e8e0f177-af2a-4975-a047-6d66bcd9b474\") " pod="cert-manager/cert-manager-858654f9db-c9fsw" Jan 12 13:15:15 crc kubenswrapper[4580]: I0112 13:15:15.260537 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7ml6\" (UniqueName: \"kubernetes.io/projected/56ef0925-27e4-4a8f-9a56-3e31c7176270-kube-api-access-j7ml6\") pod \"cert-manager-webhook-687f57d79b-dkts4\" (UID: \"56ef0925-27e4-4a8f-9a56-3e31c7176270\") " pod="cert-manager/cert-manager-webhook-687f57d79b-dkts4" Jan 12 13:15:15 crc kubenswrapper[4580]: I0112 13:15:15.260733 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hz89z\" (UniqueName: 
\"kubernetes.io/projected/5ea31dc0-a9ca-4c74-b2aa-7999ef2b94f5-kube-api-access-hz89z\") pod \"cert-manager-cainjector-cf98fcc89-56nml\" (UID: \"5ea31dc0-a9ca-4c74-b2aa-7999ef2b94f5\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-56nml" Jan 12 13:15:15 crc kubenswrapper[4580]: I0112 13:15:15.260842 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8z8dc\" (UniqueName: \"kubernetes.io/projected/e8e0f177-af2a-4975-a047-6d66bcd9b474-kube-api-access-8z8dc\") pod \"cert-manager-858654f9db-c9fsw\" (UID: \"e8e0f177-af2a-4975-a047-6d66bcd9b474\") " pod="cert-manager/cert-manager-858654f9db-c9fsw" Jan 12 13:15:15 crc kubenswrapper[4580]: I0112 13:15:15.315014 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-c9fsw" Jan 12 13:15:15 crc kubenswrapper[4580]: I0112 13:15:15.328119 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-56nml" Jan 12 13:15:15 crc kubenswrapper[4580]: I0112 13:15:15.336749 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-dkts4" Jan 12 13:15:15 crc kubenswrapper[4580]: I0112 13:15:15.497982 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-c9fsw"] Jan 12 13:15:15 crc kubenswrapper[4580]: I0112 13:15:15.502874 4580 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 12 13:15:15 crc kubenswrapper[4580]: I0112 13:15:15.717773 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-c9fsw" event={"ID":"e8e0f177-af2a-4975-a047-6d66bcd9b474","Type":"ContainerStarted","Data":"8366f2fa440542352f4bea480e07ef448b5333aaf85dde3c973d6a072285afea"} Jan 12 13:15:15 crc kubenswrapper[4580]: I0112 13:15:15.744931 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-56nml"] Jan 12 13:15:15 crc kubenswrapper[4580]: I0112 13:15:15.749890 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-dkts4"] Jan 12 13:15:15 crc kubenswrapper[4580]: W0112 13:15:15.750046 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ea31dc0_a9ca_4c74_b2aa_7999ef2b94f5.slice/crio-547d9084778284f88d54b118153f35ec25c16406c351cd65522fe4cb74d01e19 WatchSource:0}: Error finding container 547d9084778284f88d54b118153f35ec25c16406c351cd65522fe4cb74d01e19: Status 404 returned error can't find the container with id 547d9084778284f88d54b118153f35ec25c16406c351cd65522fe4cb74d01e19 Jan 12 13:15:15 crc kubenswrapper[4580]: W0112 13:15:15.751920 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod56ef0925_27e4_4a8f_9a56_3e31c7176270.slice/crio-1717824003f7b03c5014beb68092773e9303e6884c3f392788e15f16ee039164 WatchSource:0}: Error finding container 
1717824003f7b03c5014beb68092773e9303e6884c3f392788e15f16ee039164: Status 404 returned error can't find the container with id 1717824003f7b03c5014beb68092773e9303e6884c3f392788e15f16ee039164 Jan 12 13:15:16 crc kubenswrapper[4580]: I0112 13:15:16.729344 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-56nml" event={"ID":"5ea31dc0-a9ca-4c74-b2aa-7999ef2b94f5","Type":"ContainerStarted","Data":"547d9084778284f88d54b118153f35ec25c16406c351cd65522fe4cb74d01e19"} Jan 12 13:15:16 crc kubenswrapper[4580]: I0112 13:15:16.730590 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-dkts4" event={"ID":"56ef0925-27e4-4a8f-9a56-3e31c7176270","Type":"ContainerStarted","Data":"1717824003f7b03c5014beb68092773e9303e6884c3f392788e15f16ee039164"} Jan 12 13:15:18 crc kubenswrapper[4580]: I0112 13:15:18.746566 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-c9fsw" event={"ID":"e8e0f177-af2a-4975-a047-6d66bcd9b474","Type":"ContainerStarted","Data":"5b7a03fc3ead6f15fa9d535eaa39a65f56a022d21a7c84d70382b5a191cf4d8e"} Jan 12 13:15:18 crc kubenswrapper[4580]: I0112 13:15:18.767191 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-c9fsw" podStartSLOduration=2.441756823 podStartE2EDuration="4.767145945s" podCreationTimestamp="2026-01-12 13:15:14 +0000 UTC" firstStartedPulling="2026-01-12 13:15:15.50266734 +0000 UTC m=+514.546886030" lastFinishedPulling="2026-01-12 13:15:17.828056462 +0000 UTC m=+516.872275152" observedRunningTime="2026-01-12 13:15:18.760993267 +0000 UTC m=+517.805211957" watchObservedRunningTime="2026-01-12 13:15:18.767145945 +0000 UTC m=+517.811364635" Jan 12 13:15:19 crc kubenswrapper[4580]: I0112 13:15:19.754646 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-56nml" 
event={"ID":"5ea31dc0-a9ca-4c74-b2aa-7999ef2b94f5","Type":"ContainerStarted","Data":"8b9f2cbbaf44739c823ec2a019311f669c9944bfa9d0a9221fa2c714c38f9d1b"} Jan 12 13:15:19 crc kubenswrapper[4580]: I0112 13:15:19.757680 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-dkts4" event={"ID":"56ef0925-27e4-4a8f-9a56-3e31c7176270","Type":"ContainerStarted","Data":"5eb49b3b605d7e228fdabb12d392314c8a108d1a337bca0daacdb878c7997dfd"} Jan 12 13:15:19 crc kubenswrapper[4580]: I0112 13:15:19.757725 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-dkts4" Jan 12 13:15:19 crc kubenswrapper[4580]: I0112 13:15:19.768706 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-56nml" podStartSLOduration=2.417796054 podStartE2EDuration="5.76869433s" podCreationTimestamp="2026-01-12 13:15:14 +0000 UTC" firstStartedPulling="2026-01-12 13:15:15.752378073 +0000 UTC m=+514.796596763" lastFinishedPulling="2026-01-12 13:15:19.103276348 +0000 UTC m=+518.147495039" observedRunningTime="2026-01-12 13:15:19.766530608 +0000 UTC m=+518.810749287" watchObservedRunningTime="2026-01-12 13:15:19.76869433 +0000 UTC m=+518.812913020" Jan 12 13:15:19 crc kubenswrapper[4580]: I0112 13:15:19.787395 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-dkts4" podStartSLOduration=2.409519549 podStartE2EDuration="5.787349804s" podCreationTimestamp="2026-01-12 13:15:14 +0000 UTC" firstStartedPulling="2026-01-12 13:15:15.754459229 +0000 UTC m=+514.798677919" lastFinishedPulling="2026-01-12 13:15:19.132289484 +0000 UTC m=+518.176508174" observedRunningTime="2026-01-12 13:15:19.785465578 +0000 UTC m=+518.829684268" watchObservedRunningTime="2026-01-12 13:15:19.787349804 +0000 UTC m=+518.831568495" Jan 12 13:15:25 crc kubenswrapper[4580]: I0112 13:15:25.340467 
4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-dkts4" Jan 12 13:15:25 crc kubenswrapper[4580]: I0112 13:15:25.856154 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hn77p"] Jan 12 13:15:25 crc kubenswrapper[4580]: I0112 13:15:25.856499 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" podUID="fd4e0810-eddb-47f5-a7dc-beed7b545112" containerName="ovn-controller" containerID="cri-o://18b37c3b2535deee762ef305825de0a884e9088e57a34910ad2fcdaeb2d49d9a" gracePeriod=30 Jan 12 13:15:25 crc kubenswrapper[4580]: I0112 13:15:25.856608 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" podUID="fd4e0810-eddb-47f5-a7dc-beed7b545112" containerName="sbdb" containerID="cri-o://00ff7f6b5ad3d1798e88f127c9bf71095fcbdfcf8f4338afa385717f1564ebf5" gracePeriod=30 Jan 12 13:15:25 crc kubenswrapper[4580]: I0112 13:15:25.856583 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" podUID="fd4e0810-eddb-47f5-a7dc-beed7b545112" containerName="nbdb" containerID="cri-o://fc26f2fe9c241fc3ede61426abd140792056fe45e03192531431303ac9669685" gracePeriod=30 Jan 12 13:15:25 crc kubenswrapper[4580]: I0112 13:15:25.856702 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" podUID="fd4e0810-eddb-47f5-a7dc-beed7b545112" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://57fdd89443f292661ae2a8f73016f4a7f2889c08ffebd55d67ada2590b4344db" gracePeriod=30 Jan 12 13:15:25 crc kubenswrapper[4580]: I0112 13:15:25.856640 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" podUID="fd4e0810-eddb-47f5-a7dc-beed7b545112" 
containerName="ovn-acl-logging" containerID="cri-o://34ac8df759fbebae467ffd8c178ca19221cefd5f3c1aa999cd23e5d1e53a6187" gracePeriod=30 Jan 12 13:15:25 crc kubenswrapper[4580]: I0112 13:15:25.856763 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" podUID="fd4e0810-eddb-47f5-a7dc-beed7b545112" containerName="northd" containerID="cri-o://381c313bb77deef21772fc32104aec4c0325e3493c641e2bf615bd897e58c71a" gracePeriod=30 Jan 12 13:15:25 crc kubenswrapper[4580]: I0112 13:15:25.856830 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" podUID="fd4e0810-eddb-47f5-a7dc-beed7b545112" containerName="kube-rbac-proxy-node" containerID="cri-o://4fac5585e690495e9f154b99e6a05f94dd617a57d0826867644b56df00697b9a" gracePeriod=30 Jan 12 13:15:25 crc kubenswrapper[4580]: I0112 13:15:25.888312 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" podUID="fd4e0810-eddb-47f5-a7dc-beed7b545112" containerName="ovnkube-controller" containerID="cri-o://06674059f95b3e6280ce8ca74d479316a4655ccb75db826a600f5cf78794eb06" gracePeriod=30 Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.130415 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hn77p_fd4e0810-eddb-47f5-a7dc-beed7b545112/ovnkube-controller/3.log" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.133039 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hn77p_fd4e0810-eddb-47f5-a7dc-beed7b545112/ovn-acl-logging/0.log" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.133674 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hn77p_fd4e0810-eddb-47f5-a7dc-beed7b545112/ovn-controller/0.log" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.134125 4580 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.183918 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-9pb6d"] Jan 12 13:15:26 crc kubenswrapper[4580]: E0112 13:15:26.184361 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd4e0810-eddb-47f5-a7dc-beed7b545112" containerName="ovnkube-controller" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.184390 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd4e0810-eddb-47f5-a7dc-beed7b545112" containerName="ovnkube-controller" Jan 12 13:15:26 crc kubenswrapper[4580]: E0112 13:15:26.184407 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd4e0810-eddb-47f5-a7dc-beed7b545112" containerName="ovnkube-controller" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.184416 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd4e0810-eddb-47f5-a7dc-beed7b545112" containerName="ovnkube-controller" Jan 12 13:15:26 crc kubenswrapper[4580]: E0112 13:15:26.184426 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd4e0810-eddb-47f5-a7dc-beed7b545112" containerName="ovn-controller" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.184434 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd4e0810-eddb-47f5-a7dc-beed7b545112" containerName="ovn-controller" Jan 12 13:15:26 crc kubenswrapper[4580]: E0112 13:15:26.184444 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd4e0810-eddb-47f5-a7dc-beed7b545112" containerName="kube-rbac-proxy-node" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.184451 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd4e0810-eddb-47f5-a7dc-beed7b545112" containerName="kube-rbac-proxy-node" Jan 12 13:15:26 crc kubenswrapper[4580]: E0112 13:15:26.184460 4580 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="fd4e0810-eddb-47f5-a7dc-beed7b545112" containerName="kube-rbac-proxy-ovn-metrics" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.184469 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd4e0810-eddb-47f5-a7dc-beed7b545112" containerName="kube-rbac-proxy-ovn-metrics" Jan 12 13:15:26 crc kubenswrapper[4580]: E0112 13:15:26.184479 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd4e0810-eddb-47f5-a7dc-beed7b545112" containerName="ovnkube-controller" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.184486 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd4e0810-eddb-47f5-a7dc-beed7b545112" containerName="ovnkube-controller" Jan 12 13:15:26 crc kubenswrapper[4580]: E0112 13:15:26.184493 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd4e0810-eddb-47f5-a7dc-beed7b545112" containerName="ovnkube-controller" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.184499 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd4e0810-eddb-47f5-a7dc-beed7b545112" containerName="ovnkube-controller" Jan 12 13:15:26 crc kubenswrapper[4580]: E0112 13:15:26.184510 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd4e0810-eddb-47f5-a7dc-beed7b545112" containerName="northd" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.184516 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd4e0810-eddb-47f5-a7dc-beed7b545112" containerName="northd" Jan 12 13:15:26 crc kubenswrapper[4580]: E0112 13:15:26.184526 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd4e0810-eddb-47f5-a7dc-beed7b545112" containerName="nbdb" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.184533 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd4e0810-eddb-47f5-a7dc-beed7b545112" containerName="nbdb" Jan 12 13:15:26 crc kubenswrapper[4580]: E0112 13:15:26.184547 4580 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="fd4e0810-eddb-47f5-a7dc-beed7b545112" containerName="ovn-acl-logging" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.184554 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd4e0810-eddb-47f5-a7dc-beed7b545112" containerName="ovn-acl-logging" Jan 12 13:15:26 crc kubenswrapper[4580]: E0112 13:15:26.184566 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd4e0810-eddb-47f5-a7dc-beed7b545112" containerName="kubecfg-setup" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.184573 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd4e0810-eddb-47f5-a7dc-beed7b545112" containerName="kubecfg-setup" Jan 12 13:15:26 crc kubenswrapper[4580]: E0112 13:15:26.184585 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd4e0810-eddb-47f5-a7dc-beed7b545112" containerName="sbdb" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.184593 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd4e0810-eddb-47f5-a7dc-beed7b545112" containerName="sbdb" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.184747 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd4e0810-eddb-47f5-a7dc-beed7b545112" containerName="kube-rbac-proxy-ovn-metrics" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.184764 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd4e0810-eddb-47f5-a7dc-beed7b545112" containerName="ovnkube-controller" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.184771 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd4e0810-eddb-47f5-a7dc-beed7b545112" containerName="northd" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.184780 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd4e0810-eddb-47f5-a7dc-beed7b545112" containerName="ovnkube-controller" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.184787 4580 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="fd4e0810-eddb-47f5-a7dc-beed7b545112" containerName="ovn-controller" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.184793 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd4e0810-eddb-47f5-a7dc-beed7b545112" containerName="kube-rbac-proxy-node" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.184802 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd4e0810-eddb-47f5-a7dc-beed7b545112" containerName="ovn-acl-logging" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.184812 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd4e0810-eddb-47f5-a7dc-beed7b545112" containerName="sbdb" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.184820 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd4e0810-eddb-47f5-a7dc-beed7b545112" containerName="ovnkube-controller" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.184827 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd4e0810-eddb-47f5-a7dc-beed7b545112" containerName="nbdb" Jan 12 13:15:26 crc kubenswrapper[4580]: E0112 13:15:26.184960 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd4e0810-eddb-47f5-a7dc-beed7b545112" containerName="ovnkube-controller" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.184969 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd4e0810-eddb-47f5-a7dc-beed7b545112" containerName="ovnkube-controller" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.185138 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd4e0810-eddb-47f5-a7dc-beed7b545112" containerName="ovnkube-controller" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.185382 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd4e0810-eddb-47f5-a7dc-beed7b545112" containerName="ovnkube-controller" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.187554 4580 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.278593 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-var-lib-openvswitch\") pod \"fd4e0810-eddb-47f5-a7dc-beed7b545112\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.278680 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fd4e0810-eddb-47f5-a7dc-beed7b545112-ovn-node-metrics-cert\") pod \"fd4e0810-eddb-47f5-a7dc-beed7b545112\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.278714 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-etc-openvswitch\") pod \"fd4e0810-eddb-47f5-a7dc-beed7b545112\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.278712 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "fd4e0810-eddb-47f5-a7dc-beed7b545112" (UID: "fd4e0810-eddb-47f5-a7dc-beed7b545112"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.278742 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-systemd-units\") pod \"fd4e0810-eddb-47f5-a7dc-beed7b545112\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.278777 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-host-kubelet\") pod \"fd4e0810-eddb-47f5-a7dc-beed7b545112\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.278810 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-host-run-netns\") pod \"fd4e0810-eddb-47f5-a7dc-beed7b545112\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.278829 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-run-systemd\") pod \"fd4e0810-eddb-47f5-a7dc-beed7b545112\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.278887 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fd4e0810-eddb-47f5-a7dc-beed7b545112-ovnkube-config\") pod \"fd4e0810-eddb-47f5-a7dc-beed7b545112\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.278879 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "fd4e0810-eddb-47f5-a7dc-beed7b545112" (UID: "fd4e0810-eddb-47f5-a7dc-beed7b545112"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.278916 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fd4e0810-eddb-47f5-a7dc-beed7b545112-ovnkube-script-lib\") pod \"fd4e0810-eddb-47f5-a7dc-beed7b545112\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.278947 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fd4e0810-eddb-47f5-a7dc-beed7b545112-env-overrides\") pod \"fd4e0810-eddb-47f5-a7dc-beed7b545112\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.278959 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "fd4e0810-eddb-47f5-a7dc-beed7b545112" (UID: "fd4e0810-eddb-47f5-a7dc-beed7b545112"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.278975 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-run-ovn\") pod \"fd4e0810-eddb-47f5-a7dc-beed7b545112\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.279065 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-host-run-ovn-kubernetes\") pod \"fd4e0810-eddb-47f5-a7dc-beed7b545112\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.279117 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-host-cni-netd\") pod \"fd4e0810-eddb-47f5-a7dc-beed7b545112\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.279169 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-node-log\") pod \"fd4e0810-eddb-47f5-a7dc-beed7b545112\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.279224 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4wmg\" (UniqueName: \"kubernetes.io/projected/fd4e0810-eddb-47f5-a7dc-beed7b545112-kube-api-access-k4wmg\") pod \"fd4e0810-eddb-47f5-a7dc-beed7b545112\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.279287 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" 
(UniqueName: \"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-host-slash\") pod \"fd4e0810-eddb-47f5-a7dc-beed7b545112\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.279307 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-host-var-lib-cni-networks-ovn-kubernetes\") pod \"fd4e0810-eddb-47f5-a7dc-beed7b545112\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.279350 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-host-cni-bin\") pod \"fd4e0810-eddb-47f5-a7dc-beed7b545112\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.279400 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-run-openvswitch\") pod \"fd4e0810-eddb-47f5-a7dc-beed7b545112\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.279436 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-log-socket\") pod \"fd4e0810-eddb-47f5-a7dc-beed7b545112\" (UID: \"fd4e0810-eddb-47f5-a7dc-beed7b545112\") " Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.279769 4580 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.279805 4580 
reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.279817 4580 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.278986 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "fd4e0810-eddb-47f5-a7dc-beed7b545112" (UID: "fd4e0810-eddb-47f5-a7dc-beed7b545112"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.279926 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "fd4e0810-eddb-47f5-a7dc-beed7b545112" (UID: "fd4e0810-eddb-47f5-a7dc-beed7b545112"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.279003 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "fd4e0810-eddb-47f5-a7dc-beed7b545112" (UID: "fd4e0810-eddb-47f5-a7dc-beed7b545112"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.279959 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-node-log" (OuterVolumeSpecName: "node-log") pod "fd4e0810-eddb-47f5-a7dc-beed7b545112" (UID: "fd4e0810-eddb-47f5-a7dc-beed7b545112"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.279919 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-host-slash" (OuterVolumeSpecName: "host-slash") pod "fd4e0810-eddb-47f5-a7dc-beed7b545112" (UID: "fd4e0810-eddb-47f5-a7dc-beed7b545112"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.279979 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "fd4e0810-eddb-47f5-a7dc-beed7b545112" (UID: "fd4e0810-eddb-47f5-a7dc-beed7b545112"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.279405 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd4e0810-eddb-47f5-a7dc-beed7b545112-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "fd4e0810-eddb-47f5-a7dc-beed7b545112" (UID: "fd4e0810-eddb-47f5-a7dc-beed7b545112"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.280012 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "fd4e0810-eddb-47f5-a7dc-beed7b545112" (UID: "fd4e0810-eddb-47f5-a7dc-beed7b545112"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.279439 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "fd4e0810-eddb-47f5-a7dc-beed7b545112" (UID: "fd4e0810-eddb-47f5-a7dc-beed7b545112"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.279679 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd4e0810-eddb-47f5-a7dc-beed7b545112-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "fd4e0810-eddb-47f5-a7dc-beed7b545112" (UID: "fd4e0810-eddb-47f5-a7dc-beed7b545112"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.279880 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd4e0810-eddb-47f5-a7dc-beed7b545112-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "fd4e0810-eddb-47f5-a7dc-beed7b545112" (UID: "fd4e0810-eddb-47f5-a7dc-beed7b545112"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.279885 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-log-socket" (OuterVolumeSpecName: "log-socket") pod "fd4e0810-eddb-47f5-a7dc-beed7b545112" (UID: "fd4e0810-eddb-47f5-a7dc-beed7b545112"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.279900 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "fd4e0810-eddb-47f5-a7dc-beed7b545112" (UID: "fd4e0810-eddb-47f5-a7dc-beed7b545112"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.279951 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "fd4e0810-eddb-47f5-a7dc-beed7b545112" (UID: "fd4e0810-eddb-47f5-a7dc-beed7b545112"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.285951 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd4e0810-eddb-47f5-a7dc-beed7b545112-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "fd4e0810-eddb-47f5-a7dc-beed7b545112" (UID: "fd4e0810-eddb-47f5-a7dc-beed7b545112"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.286249 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd4e0810-eddb-47f5-a7dc-beed7b545112-kube-api-access-k4wmg" (OuterVolumeSpecName: "kube-api-access-k4wmg") pod "fd4e0810-eddb-47f5-a7dc-beed7b545112" (UID: "fd4e0810-eddb-47f5-a7dc-beed7b545112"). InnerVolumeSpecName "kube-api-access-k4wmg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.293074 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "fd4e0810-eddb-47f5-a7dc-beed7b545112" (UID: "fd4e0810-eddb-47f5-a7dc-beed7b545112"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.380570 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e281c805-e75c-43ce-a686-638f1b681c9a-host-cni-bin\") pod \"ovnkube-node-9pb6d\" (UID: \"e281c805-e75c-43ce-a686-638f1b681c9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.380627 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e281c805-e75c-43ce-a686-638f1b681c9a-run-openvswitch\") pod \"ovnkube-node-9pb6d\" (UID: \"e281c805-e75c-43ce-a686-638f1b681c9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.380730 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/e281c805-e75c-43ce-a686-638f1b681c9a-host-run-ovn-kubernetes\") pod \"ovnkube-node-9pb6d\" (UID: \"e281c805-e75c-43ce-a686-638f1b681c9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.380772 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e281c805-e75c-43ce-a686-638f1b681c9a-host-slash\") pod \"ovnkube-node-9pb6d\" (UID: \"e281c805-e75c-43ce-a686-638f1b681c9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.380794 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e281c805-e75c-43ce-a686-638f1b681c9a-etc-openvswitch\") pod \"ovnkube-node-9pb6d\" (UID: \"e281c805-e75c-43ce-a686-638f1b681c9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.380834 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e281c805-e75c-43ce-a686-638f1b681c9a-host-kubelet\") pod \"ovnkube-node-9pb6d\" (UID: \"e281c805-e75c-43ce-a686-638f1b681c9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.380862 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e281c805-e75c-43ce-a686-638f1b681c9a-ovnkube-config\") pod \"ovnkube-node-9pb6d\" (UID: \"e281c805-e75c-43ce-a686-638f1b681c9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.380882 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e281c805-e75c-43ce-a686-638f1b681c9a-host-run-netns\") pod \"ovnkube-node-9pb6d\" (UID: \"e281c805-e75c-43ce-a686-638f1b681c9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.380937 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e281c805-e75c-43ce-a686-638f1b681c9a-env-overrides\") pod \"ovnkube-node-9pb6d\" (UID: \"e281c805-e75c-43ce-a686-638f1b681c9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.380971 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e281c805-e75c-43ce-a686-638f1b681c9a-systemd-units\") pod \"ovnkube-node-9pb6d\" (UID: \"e281c805-e75c-43ce-a686-638f1b681c9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.381012 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e281c805-e75c-43ce-a686-638f1b681c9a-run-systemd\") pod \"ovnkube-node-9pb6d\" (UID: \"e281c805-e75c-43ce-a686-638f1b681c9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.381061 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e281c805-e75c-43ce-a686-638f1b681c9a-var-lib-openvswitch\") pod \"ovnkube-node-9pb6d\" (UID: \"e281c805-e75c-43ce-a686-638f1b681c9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.381078 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-vscxv\" (UniqueName: \"kubernetes.io/projected/e281c805-e75c-43ce-a686-638f1b681c9a-kube-api-access-vscxv\") pod \"ovnkube-node-9pb6d\" (UID: \"e281c805-e75c-43ce-a686-638f1b681c9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.381096 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e281c805-e75c-43ce-a686-638f1b681c9a-log-socket\") pod \"ovnkube-node-9pb6d\" (UID: \"e281c805-e75c-43ce-a686-638f1b681c9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.381124 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e281c805-e75c-43ce-a686-638f1b681c9a-node-log\") pod \"ovnkube-node-9pb6d\" (UID: \"e281c805-e75c-43ce-a686-638f1b681c9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.381190 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e281c805-e75c-43ce-a686-638f1b681c9a-host-cni-netd\") pod \"ovnkube-node-9pb6d\" (UID: \"e281c805-e75c-43ce-a686-638f1b681c9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.381217 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e281c805-e75c-43ce-a686-638f1b681c9a-run-ovn\") pod \"ovnkube-node-9pb6d\" (UID: \"e281c805-e75c-43ce-a686-638f1b681c9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.381238 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e281c805-e75c-43ce-a686-638f1b681c9a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9pb6d\" (UID: \"e281c805-e75c-43ce-a686-638f1b681c9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.381336 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e281c805-e75c-43ce-a686-638f1b681c9a-ovnkube-script-lib\") pod \"ovnkube-node-9pb6d\" (UID: \"e281c805-e75c-43ce-a686-638f1b681c9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.381362 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e281c805-e75c-43ce-a686-638f1b681c9a-ovn-node-metrics-cert\") pod \"ovnkube-node-9pb6d\" (UID: \"e281c805-e75c-43ce-a686-638f1b681c9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.381410 4580 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fd4e0810-eddb-47f5-a7dc-beed7b545112-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.381426 4580 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.381436 4580 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 
13:15:26.381444 4580 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.381454 4580 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fd4e0810-eddb-47f5-a7dc-beed7b545112-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.381463 4580 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fd4e0810-eddb-47f5-a7dc-beed7b545112-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.381471 4580 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fd4e0810-eddb-47f5-a7dc-beed7b545112-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.381479 4580 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.381488 4580 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.381496 4580 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.381504 4580 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-node-log\") on node \"crc\" DevicePath \"\"" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.381514 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4wmg\" (UniqueName: \"kubernetes.io/projected/fd4e0810-eddb-47f5-a7dc-beed7b545112-kube-api-access-k4wmg\") on node \"crc\" DevicePath \"\"" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.381523 4580 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-host-slash\") on node \"crc\" DevicePath \"\"" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.381533 4580 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.381541 4580 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.381550 4580 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.381563 4580 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fd4e0810-eddb-47f5-a7dc-beed7b545112-log-socket\") on node \"crc\" DevicePath \"\"" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.482344 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/e281c805-e75c-43ce-a686-638f1b681c9a-etc-openvswitch\") pod \"ovnkube-node-9pb6d\" (UID: \"e281c805-e75c-43ce-a686-638f1b681c9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.482393 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e281c805-e75c-43ce-a686-638f1b681c9a-host-kubelet\") pod \"ovnkube-node-9pb6d\" (UID: \"e281c805-e75c-43ce-a686-638f1b681c9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.482426 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e281c805-e75c-43ce-a686-638f1b681c9a-ovnkube-config\") pod \"ovnkube-node-9pb6d\" (UID: \"e281c805-e75c-43ce-a686-638f1b681c9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.482446 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e281c805-e75c-43ce-a686-638f1b681c9a-host-run-netns\") pod \"ovnkube-node-9pb6d\" (UID: \"e281c805-e75c-43ce-a686-638f1b681c9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.482466 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e281c805-e75c-43ce-a686-638f1b681c9a-env-overrides\") pod \"ovnkube-node-9pb6d\" (UID: \"e281c805-e75c-43ce-a686-638f1b681c9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.482480 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e281c805-e75c-43ce-a686-638f1b681c9a-etc-openvswitch\") pod 
\"ovnkube-node-9pb6d\" (UID: \"e281c805-e75c-43ce-a686-638f1b681c9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.482504 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e281c805-e75c-43ce-a686-638f1b681c9a-host-kubelet\") pod \"ovnkube-node-9pb6d\" (UID: \"e281c805-e75c-43ce-a686-638f1b681c9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.482487 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e281c805-e75c-43ce-a686-638f1b681c9a-systemd-units\") pod \"ovnkube-node-9pb6d\" (UID: \"e281c805-e75c-43ce-a686-638f1b681c9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.482538 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e281c805-e75c-43ce-a686-638f1b681c9a-host-run-netns\") pod \"ovnkube-node-9pb6d\" (UID: \"e281c805-e75c-43ce-a686-638f1b681c9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.482529 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e281c805-e75c-43ce-a686-638f1b681c9a-systemd-units\") pod \"ovnkube-node-9pb6d\" (UID: \"e281c805-e75c-43ce-a686-638f1b681c9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.482587 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e281c805-e75c-43ce-a686-638f1b681c9a-run-systemd\") pod \"ovnkube-node-9pb6d\" (UID: \"e281c805-e75c-43ce-a686-638f1b681c9a\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.482633 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e281c805-e75c-43ce-a686-638f1b681c9a-var-lib-openvswitch\") pod \"ovnkube-node-9pb6d\" (UID: \"e281c805-e75c-43ce-a686-638f1b681c9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.482658 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vscxv\" (UniqueName: \"kubernetes.io/projected/e281c805-e75c-43ce-a686-638f1b681c9a-kube-api-access-vscxv\") pod \"ovnkube-node-9pb6d\" (UID: \"e281c805-e75c-43ce-a686-638f1b681c9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.482664 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e281c805-e75c-43ce-a686-638f1b681c9a-run-systemd\") pod \"ovnkube-node-9pb6d\" (UID: \"e281c805-e75c-43ce-a686-638f1b681c9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.482681 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e281c805-e75c-43ce-a686-638f1b681c9a-log-socket\") pod \"ovnkube-node-9pb6d\" (UID: \"e281c805-e75c-43ce-a686-638f1b681c9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.482697 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e281c805-e75c-43ce-a686-638f1b681c9a-var-lib-openvswitch\") pod \"ovnkube-node-9pb6d\" (UID: \"e281c805-e75c-43ce-a686-638f1b681c9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" Jan 12 13:15:26 crc 
kubenswrapper[4580]: I0112 13:15:26.482703 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e281c805-e75c-43ce-a686-638f1b681c9a-node-log\") pod \"ovnkube-node-9pb6d\" (UID: \"e281c805-e75c-43ce-a686-638f1b681c9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.482730 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e281c805-e75c-43ce-a686-638f1b681c9a-log-socket\") pod \"ovnkube-node-9pb6d\" (UID: \"e281c805-e75c-43ce-a686-638f1b681c9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.482730 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e281c805-e75c-43ce-a686-638f1b681c9a-host-cni-netd\") pod \"ovnkube-node-9pb6d\" (UID: \"e281c805-e75c-43ce-a686-638f1b681c9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.482757 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e281c805-e75c-43ce-a686-638f1b681c9a-host-cni-netd\") pod \"ovnkube-node-9pb6d\" (UID: \"e281c805-e75c-43ce-a686-638f1b681c9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.482769 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e281c805-e75c-43ce-a686-638f1b681c9a-run-ovn\") pod \"ovnkube-node-9pb6d\" (UID: \"e281c805-e75c-43ce-a686-638f1b681c9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.482788 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" 
(UniqueName: \"kubernetes.io/host-path/e281c805-e75c-43ce-a686-638f1b681c9a-run-ovn\") pod \"ovnkube-node-9pb6d\" (UID: \"e281c805-e75c-43ce-a686-638f1b681c9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.482806 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e281c805-e75c-43ce-a686-638f1b681c9a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9pb6d\" (UID: \"e281c805-e75c-43ce-a686-638f1b681c9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.482849 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e281c805-e75c-43ce-a686-638f1b681c9a-ovnkube-script-lib\") pod \"ovnkube-node-9pb6d\" (UID: \"e281c805-e75c-43ce-a686-638f1b681c9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.482873 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e281c805-e75c-43ce-a686-638f1b681c9a-ovn-node-metrics-cert\") pod \"ovnkube-node-9pb6d\" (UID: \"e281c805-e75c-43ce-a686-638f1b681c9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.482889 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e281c805-e75c-43ce-a686-638f1b681c9a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9pb6d\" (UID: \"e281c805-e75c-43ce-a686-638f1b681c9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.482896 4580 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e281c805-e75c-43ce-a686-638f1b681c9a-host-cni-bin\") pod \"ovnkube-node-9pb6d\" (UID: \"e281c805-e75c-43ce-a686-638f1b681c9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.482881 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e281c805-e75c-43ce-a686-638f1b681c9a-node-log\") pod \"ovnkube-node-9pb6d\" (UID: \"e281c805-e75c-43ce-a686-638f1b681c9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.482922 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e281c805-e75c-43ce-a686-638f1b681c9a-run-openvswitch\") pod \"ovnkube-node-9pb6d\" (UID: \"e281c805-e75c-43ce-a686-638f1b681c9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.482932 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e281c805-e75c-43ce-a686-638f1b681c9a-host-cni-bin\") pod \"ovnkube-node-9pb6d\" (UID: \"e281c805-e75c-43ce-a686-638f1b681c9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.482951 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e281c805-e75c-43ce-a686-638f1b681c9a-host-run-ovn-kubernetes\") pod \"ovnkube-node-9pb6d\" (UID: \"e281c805-e75c-43ce-a686-638f1b681c9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.482974 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/e281c805-e75c-43ce-a686-638f1b681c9a-host-slash\") pod \"ovnkube-node-9pb6d\" (UID: \"e281c805-e75c-43ce-a686-638f1b681c9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.482990 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e281c805-e75c-43ce-a686-638f1b681c9a-run-openvswitch\") pod \"ovnkube-node-9pb6d\" (UID: \"e281c805-e75c-43ce-a686-638f1b681c9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.483028 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e281c805-e75c-43ce-a686-638f1b681c9a-host-slash\") pod \"ovnkube-node-9pb6d\" (UID: \"e281c805-e75c-43ce-a686-638f1b681c9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.483031 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e281c805-e75c-43ce-a686-638f1b681c9a-host-run-ovn-kubernetes\") pod \"ovnkube-node-9pb6d\" (UID: \"e281c805-e75c-43ce-a686-638f1b681c9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.483084 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e281c805-e75c-43ce-a686-638f1b681c9a-env-overrides\") pod \"ovnkube-node-9pb6d\" (UID: \"e281c805-e75c-43ce-a686-638f1b681c9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.483222 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e281c805-e75c-43ce-a686-638f1b681c9a-ovnkube-config\") pod \"ovnkube-node-9pb6d\" (UID: 
\"e281c805-e75c-43ce-a686-638f1b681c9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.483715 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e281c805-e75c-43ce-a686-638f1b681c9a-ovnkube-script-lib\") pod \"ovnkube-node-9pb6d\" (UID: \"e281c805-e75c-43ce-a686-638f1b681c9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.487794 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e281c805-e75c-43ce-a686-638f1b681c9a-ovn-node-metrics-cert\") pod \"ovnkube-node-9pb6d\" (UID: \"e281c805-e75c-43ce-a686-638f1b681c9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.496894 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vscxv\" (UniqueName: \"kubernetes.io/projected/e281c805-e75c-43ce-a686-638f1b681c9a-kube-api-access-vscxv\") pod \"ovnkube-node-9pb6d\" (UID: \"e281c805-e75c-43ce-a686-638f1b681c9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.501718 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.798720 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nnz5s_c8f39bcc-5a25-4746-988b-2251fd1be8c9/kube-multus/2.log" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.799591 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nnz5s_c8f39bcc-5a25-4746-988b-2251fd1be8c9/kube-multus/1.log" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.799659 4580 generic.go:334] "Generic (PLEG): container finished" podID="c8f39bcc-5a25-4746-988b-2251fd1be8c9" containerID="7e42cabcc8a0320fd9f67cb6f070b5827db98797bcde87f1d01d047fc0ed0086" exitCode=2 Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.799769 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-nnz5s" event={"ID":"c8f39bcc-5a25-4746-988b-2251fd1be8c9","Type":"ContainerDied","Data":"7e42cabcc8a0320fd9f67cb6f070b5827db98797bcde87f1d01d047fc0ed0086"} Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.799855 4580 scope.go:117] "RemoveContainer" containerID="2fd8b2f8f716304f83430fe4b505d29fbb68a1a5387205e72c68b65c260c7fc9" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.800934 4580 scope.go:117] "RemoveContainer" containerID="7e42cabcc8a0320fd9f67cb6f070b5827db98797bcde87f1d01d047fc0ed0086" Jan 12 13:15:26 crc kubenswrapper[4580]: E0112 13:15:26.801248 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-nnz5s_openshift-multus(c8f39bcc-5a25-4746-988b-2251fd1be8c9)\"" pod="openshift-multus/multus-nnz5s" podUID="c8f39bcc-5a25-4746-988b-2251fd1be8c9" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.802747 4580 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hn77p_fd4e0810-eddb-47f5-a7dc-beed7b545112/ovnkube-controller/3.log" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.805340 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hn77p_fd4e0810-eddb-47f5-a7dc-beed7b545112/ovn-acl-logging/0.log" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.805898 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hn77p_fd4e0810-eddb-47f5-a7dc-beed7b545112/ovn-controller/0.log" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.806481 4580 generic.go:334] "Generic (PLEG): container finished" podID="fd4e0810-eddb-47f5-a7dc-beed7b545112" containerID="06674059f95b3e6280ce8ca74d479316a4655ccb75db826a600f5cf78794eb06" exitCode=0 Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.806608 4580 generic.go:334] "Generic (PLEG): container finished" podID="fd4e0810-eddb-47f5-a7dc-beed7b545112" containerID="00ff7f6b5ad3d1798e88f127c9bf71095fcbdfcf8f4338afa385717f1564ebf5" exitCode=0 Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.806707 4580 generic.go:334] "Generic (PLEG): container finished" podID="fd4e0810-eddb-47f5-a7dc-beed7b545112" containerID="fc26f2fe9c241fc3ede61426abd140792056fe45e03192531431303ac9669685" exitCode=0 Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.806798 4580 generic.go:334] "Generic (PLEG): container finished" podID="fd4e0810-eddb-47f5-a7dc-beed7b545112" containerID="381c313bb77deef21772fc32104aec4c0325e3493c641e2bf615bd897e58c71a" exitCode=0 Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.806877 4580 generic.go:334] "Generic (PLEG): container finished" podID="fd4e0810-eddb-47f5-a7dc-beed7b545112" containerID="57fdd89443f292661ae2a8f73016f4a7f2889c08ffebd55d67ada2590b4344db" exitCode=0 Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.807041 4580 generic.go:334] "Generic (PLEG): container finished" 
podID="fd4e0810-eddb-47f5-a7dc-beed7b545112" containerID="4fac5585e690495e9f154b99e6a05f94dd617a57d0826867644b56df00697b9a" exitCode=0 Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.807137 4580 generic.go:334] "Generic (PLEG): container finished" podID="fd4e0810-eddb-47f5-a7dc-beed7b545112" containerID="34ac8df759fbebae467ffd8c178ca19221cefd5f3c1aa999cd23e5d1e53a6187" exitCode=143 Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.807239 4580 generic.go:334] "Generic (PLEG): container finished" podID="fd4e0810-eddb-47f5-a7dc-beed7b545112" containerID="18b37c3b2535deee762ef305825de0a884e9088e57a34910ad2fcdaeb2d49d9a" exitCode=143 Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.806568 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.806524 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" event={"ID":"fd4e0810-eddb-47f5-a7dc-beed7b545112","Type":"ContainerDied","Data":"06674059f95b3e6280ce8ca74d479316a4655ccb75db826a600f5cf78794eb06"} Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.807593 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" event={"ID":"fd4e0810-eddb-47f5-a7dc-beed7b545112","Type":"ContainerDied","Data":"00ff7f6b5ad3d1798e88f127c9bf71095fcbdfcf8f4338afa385717f1564ebf5"} Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.807672 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" event={"ID":"fd4e0810-eddb-47f5-a7dc-beed7b545112","Type":"ContainerDied","Data":"fc26f2fe9c241fc3ede61426abd140792056fe45e03192531431303ac9669685"} Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.807733 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" 
event={"ID":"fd4e0810-eddb-47f5-a7dc-beed7b545112","Type":"ContainerDied","Data":"381c313bb77deef21772fc32104aec4c0325e3493c641e2bf615bd897e58c71a"} Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.807796 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" event={"ID":"fd4e0810-eddb-47f5-a7dc-beed7b545112","Type":"ContainerDied","Data":"57fdd89443f292661ae2a8f73016f4a7f2889c08ffebd55d67ada2590b4344db"} Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.807844 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" event={"ID":"fd4e0810-eddb-47f5-a7dc-beed7b545112","Type":"ContainerDied","Data":"4fac5585e690495e9f154b99e6a05f94dd617a57d0826867644b56df00697b9a"} Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.807906 4580 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"06674059f95b3e6280ce8ca74d479316a4655ccb75db826a600f5cf78794eb06"} Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.807961 4580 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"20f47854f29c7f82bcbae567770052204b7fa2c092168c57ef54e14218812b98"} Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.808000 4580 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"00ff7f6b5ad3d1798e88f127c9bf71095fcbdfcf8f4338afa385717f1564ebf5"} Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.808038 4580 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fc26f2fe9c241fc3ede61426abd140792056fe45e03192531431303ac9669685"} Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.808078 4580 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"381c313bb77deef21772fc32104aec4c0325e3493c641e2bf615bd897e58c71a"} Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.808149 4580 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"57fdd89443f292661ae2a8f73016f4a7f2889c08ffebd55d67ada2590b4344db"} Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.808199 4580 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4fac5585e690495e9f154b99e6a05f94dd617a57d0826867644b56df00697b9a"} Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.808255 4580 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"34ac8df759fbebae467ffd8c178ca19221cefd5f3c1aa999cd23e5d1e53a6187"} Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.808298 4580 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"18b37c3b2535deee762ef305825de0a884e9088e57a34910ad2fcdaeb2d49d9a"} Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.808345 4580 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b"} Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.808395 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" event={"ID":"fd4e0810-eddb-47f5-a7dc-beed7b545112","Type":"ContainerDied","Data":"34ac8df759fbebae467ffd8c178ca19221cefd5f3c1aa999cd23e5d1e53a6187"} Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.808446 4580 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"06674059f95b3e6280ce8ca74d479316a4655ccb75db826a600f5cf78794eb06"} Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.808494 4580 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"20f47854f29c7f82bcbae567770052204b7fa2c092168c57ef54e14218812b98"} Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.808538 4580 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"00ff7f6b5ad3d1798e88f127c9bf71095fcbdfcf8f4338afa385717f1564ebf5"} Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.808585 4580 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fc26f2fe9c241fc3ede61426abd140792056fe45e03192531431303ac9669685"} Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.808639 4580 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"381c313bb77deef21772fc32104aec4c0325e3493c641e2bf615bd897e58c71a"} Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.808685 4580 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"57fdd89443f292661ae2a8f73016f4a7f2889c08ffebd55d67ada2590b4344db"} Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.808730 4580 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4fac5585e690495e9f154b99e6a05f94dd617a57d0826867644b56df00697b9a"} Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.808768 4580 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"34ac8df759fbebae467ffd8c178ca19221cefd5f3c1aa999cd23e5d1e53a6187"} Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.808808 4580 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"18b37c3b2535deee762ef305825de0a884e9088e57a34910ad2fcdaeb2d49d9a"} Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.808848 4580 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b"} Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.808886 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" event={"ID":"fd4e0810-eddb-47f5-a7dc-beed7b545112","Type":"ContainerDied","Data":"18b37c3b2535deee762ef305825de0a884e9088e57a34910ad2fcdaeb2d49d9a"} Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.808939 4580 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"06674059f95b3e6280ce8ca74d479316a4655ccb75db826a600f5cf78794eb06"} Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.808988 4580 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"20f47854f29c7f82bcbae567770052204b7fa2c092168c57ef54e14218812b98"} Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.809033 4580 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"00ff7f6b5ad3d1798e88f127c9bf71095fcbdfcf8f4338afa385717f1564ebf5"} Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.809076 4580 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fc26f2fe9c241fc3ede61426abd140792056fe45e03192531431303ac9669685"} Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.809135 4580 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"381c313bb77deef21772fc32104aec4c0325e3493c641e2bf615bd897e58c71a"} Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.809193 4580 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"57fdd89443f292661ae2a8f73016f4a7f2889c08ffebd55d67ada2590b4344db"} Jan 12 
13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.809245 4580 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4fac5585e690495e9f154b99e6a05f94dd617a57d0826867644b56df00697b9a"} Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.809301 4580 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"34ac8df759fbebae467ffd8c178ca19221cefd5f3c1aa999cd23e5d1e53a6187"} Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.809346 4580 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"18b37c3b2535deee762ef305825de0a884e9088e57a34910ad2fcdaeb2d49d9a"} Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.809400 4580 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b"} Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.809463 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hn77p" event={"ID":"fd4e0810-eddb-47f5-a7dc-beed7b545112","Type":"ContainerDied","Data":"8d0f642878b6350ced97adc24d17817f9f071e1ad17e703a8ea84dfd802b7dd8"} Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.809309 4580 generic.go:334] "Generic (PLEG): container finished" podID="e281c805-e75c-43ce-a686-638f1b681c9a" containerID="3571c40ed9e9d531d7ce71565f7c71d3c5e9331df3a7f7df034ae0933981c98f" exitCode=0 Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.809520 4580 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"06674059f95b3e6280ce8ca74d479316a4655ccb75db826a600f5cf78794eb06"} Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.809646 4580 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"20f47854f29c7f82bcbae567770052204b7fa2c092168c57ef54e14218812b98"}
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.810247 4580 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"00ff7f6b5ad3d1798e88f127c9bf71095fcbdfcf8f4338afa385717f1564ebf5"}
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.810870 4580 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fc26f2fe9c241fc3ede61426abd140792056fe45e03192531431303ac9669685"}
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.810895 4580 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"381c313bb77deef21772fc32104aec4c0325e3493c641e2bf615bd897e58c71a"}
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.810903 4580 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"57fdd89443f292661ae2a8f73016f4a7f2889c08ffebd55d67ada2590b4344db"}
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.810910 4580 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4fac5585e690495e9f154b99e6a05f94dd617a57d0826867644b56df00697b9a"}
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.810957 4580 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"34ac8df759fbebae467ffd8c178ca19221cefd5f3c1aa999cd23e5d1e53a6187"}
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.810964 4580 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"18b37c3b2535deee762ef305825de0a884e9088e57a34910ad2fcdaeb2d49d9a"}
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.810970 4580 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b"}
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.810985 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" event={"ID":"e281c805-e75c-43ce-a686-638f1b681c9a","Type":"ContainerDied","Data":"3571c40ed9e9d531d7ce71565f7c71d3c5e9331df3a7f7df034ae0933981c98f"}
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.811041 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" event={"ID":"e281c805-e75c-43ce-a686-638f1b681c9a","Type":"ContainerStarted","Data":"a6d8476379b1e2efcd25261d56c4c033e529acc5f1814b5af2066d1b97f8cbb6"}
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.819680 4580 scope.go:117] "RemoveContainer" containerID="06674059f95b3e6280ce8ca74d479316a4655ccb75db826a600f5cf78794eb06"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.846371 4580 scope.go:117] "RemoveContainer" containerID="20f47854f29c7f82bcbae567770052204b7fa2c092168c57ef54e14218812b98"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.862389 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hn77p"]
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.866124 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hn77p"]
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.866392 4580 scope.go:117] "RemoveContainer" containerID="00ff7f6b5ad3d1798e88f127c9bf71095fcbdfcf8f4338afa385717f1564ebf5"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.877970 4580 scope.go:117] "RemoveContainer" containerID="fc26f2fe9c241fc3ede61426abd140792056fe45e03192531431303ac9669685"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.888565 4580 scope.go:117] "RemoveContainer" containerID="381c313bb77deef21772fc32104aec4c0325e3493c641e2bf615bd897e58c71a"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.905411 4580 scope.go:117] "RemoveContainer" containerID="57fdd89443f292661ae2a8f73016f4a7f2889c08ffebd55d67ada2590b4344db"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.916902 4580 scope.go:117] "RemoveContainer" containerID="4fac5585e690495e9f154b99e6a05f94dd617a57d0826867644b56df00697b9a"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.932059 4580 scope.go:117] "RemoveContainer" containerID="34ac8df759fbebae467ffd8c178ca19221cefd5f3c1aa999cd23e5d1e53a6187"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.943036 4580 scope.go:117] "RemoveContainer" containerID="18b37c3b2535deee762ef305825de0a884e9088e57a34910ad2fcdaeb2d49d9a"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.959464 4580 scope.go:117] "RemoveContainer" containerID="8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.974884 4580 scope.go:117] "RemoveContainer" containerID="06674059f95b3e6280ce8ca74d479316a4655ccb75db826a600f5cf78794eb06"
Jan 12 13:15:26 crc kubenswrapper[4580]: E0112 13:15:26.975343 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06674059f95b3e6280ce8ca74d479316a4655ccb75db826a600f5cf78794eb06\": container with ID starting with 06674059f95b3e6280ce8ca74d479316a4655ccb75db826a600f5cf78794eb06 not found: ID does not exist" containerID="06674059f95b3e6280ce8ca74d479316a4655ccb75db826a600f5cf78794eb06"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.975385 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06674059f95b3e6280ce8ca74d479316a4655ccb75db826a600f5cf78794eb06"} err="failed to get container status \"06674059f95b3e6280ce8ca74d479316a4655ccb75db826a600f5cf78794eb06\": rpc error: code = NotFound desc = could not find container \"06674059f95b3e6280ce8ca74d479316a4655ccb75db826a600f5cf78794eb06\": container with ID starting with 06674059f95b3e6280ce8ca74d479316a4655ccb75db826a600f5cf78794eb06 not found: ID does not exist"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.975411 4580 scope.go:117] "RemoveContainer" containerID="20f47854f29c7f82bcbae567770052204b7fa2c092168c57ef54e14218812b98"
Jan 12 13:15:26 crc kubenswrapper[4580]: E0112 13:15:26.976563 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20f47854f29c7f82bcbae567770052204b7fa2c092168c57ef54e14218812b98\": container with ID starting with 20f47854f29c7f82bcbae567770052204b7fa2c092168c57ef54e14218812b98 not found: ID does not exist" containerID="20f47854f29c7f82bcbae567770052204b7fa2c092168c57ef54e14218812b98"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.976600 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20f47854f29c7f82bcbae567770052204b7fa2c092168c57ef54e14218812b98"} err="failed to get container status \"20f47854f29c7f82bcbae567770052204b7fa2c092168c57ef54e14218812b98\": rpc error: code = NotFound desc = could not find container \"20f47854f29c7f82bcbae567770052204b7fa2c092168c57ef54e14218812b98\": container with ID starting with 20f47854f29c7f82bcbae567770052204b7fa2c092168c57ef54e14218812b98 not found: ID does not exist"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.976642 4580 scope.go:117] "RemoveContainer" containerID="00ff7f6b5ad3d1798e88f127c9bf71095fcbdfcf8f4338afa385717f1564ebf5"
Jan 12 13:15:26 crc kubenswrapper[4580]: E0112 13:15:26.977378 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00ff7f6b5ad3d1798e88f127c9bf71095fcbdfcf8f4338afa385717f1564ebf5\": container with ID starting with 00ff7f6b5ad3d1798e88f127c9bf71095fcbdfcf8f4338afa385717f1564ebf5 not found: ID does not exist" containerID="00ff7f6b5ad3d1798e88f127c9bf71095fcbdfcf8f4338afa385717f1564ebf5"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.977406 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00ff7f6b5ad3d1798e88f127c9bf71095fcbdfcf8f4338afa385717f1564ebf5"} err="failed to get container status \"00ff7f6b5ad3d1798e88f127c9bf71095fcbdfcf8f4338afa385717f1564ebf5\": rpc error: code = NotFound desc = could not find container \"00ff7f6b5ad3d1798e88f127c9bf71095fcbdfcf8f4338afa385717f1564ebf5\": container with ID starting with 00ff7f6b5ad3d1798e88f127c9bf71095fcbdfcf8f4338afa385717f1564ebf5 not found: ID does not exist"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.977420 4580 scope.go:117] "RemoveContainer" containerID="fc26f2fe9c241fc3ede61426abd140792056fe45e03192531431303ac9669685"
Jan 12 13:15:26 crc kubenswrapper[4580]: E0112 13:15:26.978141 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc26f2fe9c241fc3ede61426abd140792056fe45e03192531431303ac9669685\": container with ID starting with fc26f2fe9c241fc3ede61426abd140792056fe45e03192531431303ac9669685 not found: ID does not exist" containerID="fc26f2fe9c241fc3ede61426abd140792056fe45e03192531431303ac9669685"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.978170 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc26f2fe9c241fc3ede61426abd140792056fe45e03192531431303ac9669685"} err="failed to get container status \"fc26f2fe9c241fc3ede61426abd140792056fe45e03192531431303ac9669685\": rpc error: code = NotFound desc = could not find container \"fc26f2fe9c241fc3ede61426abd140792056fe45e03192531431303ac9669685\": container with ID starting with fc26f2fe9c241fc3ede61426abd140792056fe45e03192531431303ac9669685 not found: ID does not exist"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.978184 4580 scope.go:117] "RemoveContainer" containerID="381c313bb77deef21772fc32104aec4c0325e3493c641e2bf615bd897e58c71a"
Jan 12 13:15:26 crc kubenswrapper[4580]: E0112 13:15:26.978523 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"381c313bb77deef21772fc32104aec4c0325e3493c641e2bf615bd897e58c71a\": container with ID starting with 381c313bb77deef21772fc32104aec4c0325e3493c641e2bf615bd897e58c71a not found: ID does not exist" containerID="381c313bb77deef21772fc32104aec4c0325e3493c641e2bf615bd897e58c71a"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.978556 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"381c313bb77deef21772fc32104aec4c0325e3493c641e2bf615bd897e58c71a"} err="failed to get container status \"381c313bb77deef21772fc32104aec4c0325e3493c641e2bf615bd897e58c71a\": rpc error: code = NotFound desc = could not find container \"381c313bb77deef21772fc32104aec4c0325e3493c641e2bf615bd897e58c71a\": container with ID starting with 381c313bb77deef21772fc32104aec4c0325e3493c641e2bf615bd897e58c71a not found: ID does not exist"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.978603 4580 scope.go:117] "RemoveContainer" containerID="57fdd89443f292661ae2a8f73016f4a7f2889c08ffebd55d67ada2590b4344db"
Jan 12 13:15:26 crc kubenswrapper[4580]: E0112 13:15:26.979163 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57fdd89443f292661ae2a8f73016f4a7f2889c08ffebd55d67ada2590b4344db\": container with ID starting with 57fdd89443f292661ae2a8f73016f4a7f2889c08ffebd55d67ada2590b4344db not found: ID does not exist" containerID="57fdd89443f292661ae2a8f73016f4a7f2889c08ffebd55d67ada2590b4344db"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.979247 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57fdd89443f292661ae2a8f73016f4a7f2889c08ffebd55d67ada2590b4344db"} err="failed to get container status \"57fdd89443f292661ae2a8f73016f4a7f2889c08ffebd55d67ada2590b4344db\": rpc error: code = NotFound desc = could not find container \"57fdd89443f292661ae2a8f73016f4a7f2889c08ffebd55d67ada2590b4344db\": container with ID starting with 57fdd89443f292661ae2a8f73016f4a7f2889c08ffebd55d67ada2590b4344db not found: ID does not exist"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.979314 4580 scope.go:117] "RemoveContainer" containerID="4fac5585e690495e9f154b99e6a05f94dd617a57d0826867644b56df00697b9a"
Jan 12 13:15:26 crc kubenswrapper[4580]: E0112 13:15:26.980161 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fac5585e690495e9f154b99e6a05f94dd617a57d0826867644b56df00697b9a\": container with ID starting with 4fac5585e690495e9f154b99e6a05f94dd617a57d0826867644b56df00697b9a not found: ID does not exist" containerID="4fac5585e690495e9f154b99e6a05f94dd617a57d0826867644b56df00697b9a"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.980418 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fac5585e690495e9f154b99e6a05f94dd617a57d0826867644b56df00697b9a"} err="failed to get container status \"4fac5585e690495e9f154b99e6a05f94dd617a57d0826867644b56df00697b9a\": rpc error: code = NotFound desc = could not find container \"4fac5585e690495e9f154b99e6a05f94dd617a57d0826867644b56df00697b9a\": container with ID starting with 4fac5585e690495e9f154b99e6a05f94dd617a57d0826867644b56df00697b9a not found: ID does not exist"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.980436 4580 scope.go:117] "RemoveContainer" containerID="34ac8df759fbebae467ffd8c178ca19221cefd5f3c1aa999cd23e5d1e53a6187"
Jan 12 13:15:26 crc kubenswrapper[4580]: E0112 13:15:26.980689 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34ac8df759fbebae467ffd8c178ca19221cefd5f3c1aa999cd23e5d1e53a6187\": container with ID starting with 34ac8df759fbebae467ffd8c178ca19221cefd5f3c1aa999cd23e5d1e53a6187 not found: ID does not exist" containerID="34ac8df759fbebae467ffd8c178ca19221cefd5f3c1aa999cd23e5d1e53a6187"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.980712 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34ac8df759fbebae467ffd8c178ca19221cefd5f3c1aa999cd23e5d1e53a6187"} err="failed to get container status \"34ac8df759fbebae467ffd8c178ca19221cefd5f3c1aa999cd23e5d1e53a6187\": rpc error: code = NotFound desc = could not find container \"34ac8df759fbebae467ffd8c178ca19221cefd5f3c1aa999cd23e5d1e53a6187\": container with ID starting with 34ac8df759fbebae467ffd8c178ca19221cefd5f3c1aa999cd23e5d1e53a6187 not found: ID does not exist"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.980727 4580 scope.go:117] "RemoveContainer" containerID="18b37c3b2535deee762ef305825de0a884e9088e57a34910ad2fcdaeb2d49d9a"
Jan 12 13:15:26 crc kubenswrapper[4580]: E0112 13:15:26.981073 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18b37c3b2535deee762ef305825de0a884e9088e57a34910ad2fcdaeb2d49d9a\": container with ID starting with 18b37c3b2535deee762ef305825de0a884e9088e57a34910ad2fcdaeb2d49d9a not found: ID does not exist" containerID="18b37c3b2535deee762ef305825de0a884e9088e57a34910ad2fcdaeb2d49d9a"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.981093 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18b37c3b2535deee762ef305825de0a884e9088e57a34910ad2fcdaeb2d49d9a"} err="failed to get container status \"18b37c3b2535deee762ef305825de0a884e9088e57a34910ad2fcdaeb2d49d9a\": rpc error: code = NotFound desc = could not find container \"18b37c3b2535deee762ef305825de0a884e9088e57a34910ad2fcdaeb2d49d9a\": container with ID starting with 18b37c3b2535deee762ef305825de0a884e9088e57a34910ad2fcdaeb2d49d9a not found: ID does not exist"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.981188 4580 scope.go:117] "RemoveContainer" containerID="8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b"
Jan 12 13:15:26 crc kubenswrapper[4580]: E0112 13:15:26.981555 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b\": container with ID starting with 8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b not found: ID does not exist" containerID="8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.981577 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b"} err="failed to get container status \"8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b\": rpc error: code = NotFound desc = could not find container \"8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b\": container with ID starting with 8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b not found: ID does not exist"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.981590 4580 scope.go:117] "RemoveContainer" containerID="06674059f95b3e6280ce8ca74d479316a4655ccb75db826a600f5cf78794eb06"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.981867 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06674059f95b3e6280ce8ca74d479316a4655ccb75db826a600f5cf78794eb06"} err="failed to get container status \"06674059f95b3e6280ce8ca74d479316a4655ccb75db826a600f5cf78794eb06\": rpc error: code = NotFound desc = could not find container \"06674059f95b3e6280ce8ca74d479316a4655ccb75db826a600f5cf78794eb06\": container with ID starting with 06674059f95b3e6280ce8ca74d479316a4655ccb75db826a600f5cf78794eb06 not found: ID does not exist"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.981888 4580 scope.go:117] "RemoveContainer" containerID="20f47854f29c7f82bcbae567770052204b7fa2c092168c57ef54e14218812b98"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.982235 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20f47854f29c7f82bcbae567770052204b7fa2c092168c57ef54e14218812b98"} err="failed to get container status \"20f47854f29c7f82bcbae567770052204b7fa2c092168c57ef54e14218812b98\": rpc error: code = NotFound desc = could not find container \"20f47854f29c7f82bcbae567770052204b7fa2c092168c57ef54e14218812b98\": container with ID starting with 20f47854f29c7f82bcbae567770052204b7fa2c092168c57ef54e14218812b98 not found: ID does not exist"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.982270 4580 scope.go:117] "RemoveContainer" containerID="00ff7f6b5ad3d1798e88f127c9bf71095fcbdfcf8f4338afa385717f1564ebf5"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.982584 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00ff7f6b5ad3d1798e88f127c9bf71095fcbdfcf8f4338afa385717f1564ebf5"} err="failed to get container status \"00ff7f6b5ad3d1798e88f127c9bf71095fcbdfcf8f4338afa385717f1564ebf5\": rpc error: code = NotFound desc = could not find container \"00ff7f6b5ad3d1798e88f127c9bf71095fcbdfcf8f4338afa385717f1564ebf5\": container with ID starting with 00ff7f6b5ad3d1798e88f127c9bf71095fcbdfcf8f4338afa385717f1564ebf5 not found: ID does not exist"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.982610 4580 scope.go:117] "RemoveContainer" containerID="fc26f2fe9c241fc3ede61426abd140792056fe45e03192531431303ac9669685"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.983152 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc26f2fe9c241fc3ede61426abd140792056fe45e03192531431303ac9669685"} err="failed to get container status \"fc26f2fe9c241fc3ede61426abd140792056fe45e03192531431303ac9669685\": rpc error: code = NotFound desc = could not find container \"fc26f2fe9c241fc3ede61426abd140792056fe45e03192531431303ac9669685\": container with ID starting with fc26f2fe9c241fc3ede61426abd140792056fe45e03192531431303ac9669685 not found: ID does not exist"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.983197 4580 scope.go:117] "RemoveContainer" containerID="381c313bb77deef21772fc32104aec4c0325e3493c641e2bf615bd897e58c71a"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.983493 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"381c313bb77deef21772fc32104aec4c0325e3493c641e2bf615bd897e58c71a"} err="failed to get container status \"381c313bb77deef21772fc32104aec4c0325e3493c641e2bf615bd897e58c71a\": rpc error: code = NotFound desc = could not find container \"381c313bb77deef21772fc32104aec4c0325e3493c641e2bf615bd897e58c71a\": container with ID starting with 381c313bb77deef21772fc32104aec4c0325e3493c641e2bf615bd897e58c71a not found: ID does not exist"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.983517 4580 scope.go:117] "RemoveContainer" containerID="57fdd89443f292661ae2a8f73016f4a7f2889c08ffebd55d67ada2590b4344db"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.983935 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57fdd89443f292661ae2a8f73016f4a7f2889c08ffebd55d67ada2590b4344db"} err="failed to get container status \"57fdd89443f292661ae2a8f73016f4a7f2889c08ffebd55d67ada2590b4344db\": rpc error: code = NotFound desc = could not find container \"57fdd89443f292661ae2a8f73016f4a7f2889c08ffebd55d67ada2590b4344db\": container with ID starting with 57fdd89443f292661ae2a8f73016f4a7f2889c08ffebd55d67ada2590b4344db not found: ID does not exist"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.983966 4580 scope.go:117] "RemoveContainer" containerID="4fac5585e690495e9f154b99e6a05f94dd617a57d0826867644b56df00697b9a"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.984242 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fac5585e690495e9f154b99e6a05f94dd617a57d0826867644b56df00697b9a"} err="failed to get container status \"4fac5585e690495e9f154b99e6a05f94dd617a57d0826867644b56df00697b9a\": rpc error: code = NotFound desc = could not find container \"4fac5585e690495e9f154b99e6a05f94dd617a57d0826867644b56df00697b9a\": container with ID starting with 4fac5585e690495e9f154b99e6a05f94dd617a57d0826867644b56df00697b9a not found: ID does not exist"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.984261 4580 scope.go:117] "RemoveContainer" containerID="34ac8df759fbebae467ffd8c178ca19221cefd5f3c1aa999cd23e5d1e53a6187"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.984583 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34ac8df759fbebae467ffd8c178ca19221cefd5f3c1aa999cd23e5d1e53a6187"} err="failed to get container status \"34ac8df759fbebae467ffd8c178ca19221cefd5f3c1aa999cd23e5d1e53a6187\": rpc error: code = NotFound desc = could not find container \"34ac8df759fbebae467ffd8c178ca19221cefd5f3c1aa999cd23e5d1e53a6187\": container with ID starting with 34ac8df759fbebae467ffd8c178ca19221cefd5f3c1aa999cd23e5d1e53a6187 not found: ID does not exist"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.984609 4580 scope.go:117] "RemoveContainer" containerID="18b37c3b2535deee762ef305825de0a884e9088e57a34910ad2fcdaeb2d49d9a"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.984858 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18b37c3b2535deee762ef305825de0a884e9088e57a34910ad2fcdaeb2d49d9a"} err="failed to get container status \"18b37c3b2535deee762ef305825de0a884e9088e57a34910ad2fcdaeb2d49d9a\": rpc error: code = NotFound desc = could not find container \"18b37c3b2535deee762ef305825de0a884e9088e57a34910ad2fcdaeb2d49d9a\": container with ID starting with 18b37c3b2535deee762ef305825de0a884e9088e57a34910ad2fcdaeb2d49d9a not found: ID does not exist"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.984875 4580 scope.go:117] "RemoveContainer" containerID="8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.985133 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b"} err="failed to get container status \"8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b\": rpc error: code = NotFound desc = could not find container \"8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b\": container with ID starting with 8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b not found: ID does not exist"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.985149 4580 scope.go:117] "RemoveContainer" containerID="06674059f95b3e6280ce8ca74d479316a4655ccb75db826a600f5cf78794eb06"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.985982 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06674059f95b3e6280ce8ca74d479316a4655ccb75db826a600f5cf78794eb06"} err="failed to get container status \"06674059f95b3e6280ce8ca74d479316a4655ccb75db826a600f5cf78794eb06\": rpc error: code = NotFound desc = could not find container \"06674059f95b3e6280ce8ca74d479316a4655ccb75db826a600f5cf78794eb06\": container with ID starting with 06674059f95b3e6280ce8ca74d479316a4655ccb75db826a600f5cf78794eb06 not found: ID does not exist"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.986007 4580 scope.go:117] "RemoveContainer" containerID="20f47854f29c7f82bcbae567770052204b7fa2c092168c57ef54e14218812b98"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.986810 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20f47854f29c7f82bcbae567770052204b7fa2c092168c57ef54e14218812b98"} err="failed to get container status \"20f47854f29c7f82bcbae567770052204b7fa2c092168c57ef54e14218812b98\": rpc error: code = NotFound desc = could not find container \"20f47854f29c7f82bcbae567770052204b7fa2c092168c57ef54e14218812b98\": container with ID starting with 20f47854f29c7f82bcbae567770052204b7fa2c092168c57ef54e14218812b98 not found: ID does not exist"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.986880 4580 scope.go:117] "RemoveContainer" containerID="00ff7f6b5ad3d1798e88f127c9bf71095fcbdfcf8f4338afa385717f1564ebf5"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.987396 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00ff7f6b5ad3d1798e88f127c9bf71095fcbdfcf8f4338afa385717f1564ebf5"} err="failed to get container status \"00ff7f6b5ad3d1798e88f127c9bf71095fcbdfcf8f4338afa385717f1564ebf5\": rpc error: code = NotFound desc = could not find container \"00ff7f6b5ad3d1798e88f127c9bf71095fcbdfcf8f4338afa385717f1564ebf5\": container with ID starting with 00ff7f6b5ad3d1798e88f127c9bf71095fcbdfcf8f4338afa385717f1564ebf5 not found: ID does not exist"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.987428 4580 scope.go:117] "RemoveContainer" containerID="fc26f2fe9c241fc3ede61426abd140792056fe45e03192531431303ac9669685"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.987730 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc26f2fe9c241fc3ede61426abd140792056fe45e03192531431303ac9669685"} err="failed to get container status \"fc26f2fe9c241fc3ede61426abd140792056fe45e03192531431303ac9669685\": rpc error: code = NotFound desc = could not find container \"fc26f2fe9c241fc3ede61426abd140792056fe45e03192531431303ac9669685\": container with ID starting with fc26f2fe9c241fc3ede61426abd140792056fe45e03192531431303ac9669685 not found: ID does not exist"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.987751 4580 scope.go:117] "RemoveContainer" containerID="381c313bb77deef21772fc32104aec4c0325e3493c641e2bf615bd897e58c71a"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.988047 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"381c313bb77deef21772fc32104aec4c0325e3493c641e2bf615bd897e58c71a"} err="failed to get container status \"381c313bb77deef21772fc32104aec4c0325e3493c641e2bf615bd897e58c71a\": rpc error: code = NotFound desc = could not find container \"381c313bb77deef21772fc32104aec4c0325e3493c641e2bf615bd897e58c71a\": container with ID starting with 381c313bb77deef21772fc32104aec4c0325e3493c641e2bf615bd897e58c71a not found: ID does not exist"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.988066 4580 scope.go:117] "RemoveContainer" containerID="57fdd89443f292661ae2a8f73016f4a7f2889c08ffebd55d67ada2590b4344db"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.988368 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57fdd89443f292661ae2a8f73016f4a7f2889c08ffebd55d67ada2590b4344db"} err="failed to get container status \"57fdd89443f292661ae2a8f73016f4a7f2889c08ffebd55d67ada2590b4344db\": rpc error: code = NotFound desc = could not find container \"57fdd89443f292661ae2a8f73016f4a7f2889c08ffebd55d67ada2590b4344db\": container with ID starting with 57fdd89443f292661ae2a8f73016f4a7f2889c08ffebd55d67ada2590b4344db not found: ID does not exist"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.988388 4580 scope.go:117] "RemoveContainer" containerID="4fac5585e690495e9f154b99e6a05f94dd617a57d0826867644b56df00697b9a"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.988585 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fac5585e690495e9f154b99e6a05f94dd617a57d0826867644b56df00697b9a"} err="failed to get container status \"4fac5585e690495e9f154b99e6a05f94dd617a57d0826867644b56df00697b9a\": rpc error: code = NotFound desc = could not find container \"4fac5585e690495e9f154b99e6a05f94dd617a57d0826867644b56df00697b9a\": container with ID starting with 4fac5585e690495e9f154b99e6a05f94dd617a57d0826867644b56df00697b9a not found: ID does not exist"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.988604 4580 scope.go:117] "RemoveContainer" containerID="34ac8df759fbebae467ffd8c178ca19221cefd5f3c1aa999cd23e5d1e53a6187"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.989556 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34ac8df759fbebae467ffd8c178ca19221cefd5f3c1aa999cd23e5d1e53a6187"} err="failed to get container status \"34ac8df759fbebae467ffd8c178ca19221cefd5f3c1aa999cd23e5d1e53a6187\": rpc error: code = NotFound desc = could not find container \"34ac8df759fbebae467ffd8c178ca19221cefd5f3c1aa999cd23e5d1e53a6187\": container with ID starting with 34ac8df759fbebae467ffd8c178ca19221cefd5f3c1aa999cd23e5d1e53a6187 not found: ID does not exist"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.989593 4580 scope.go:117] "RemoveContainer" containerID="18b37c3b2535deee762ef305825de0a884e9088e57a34910ad2fcdaeb2d49d9a"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.990442 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18b37c3b2535deee762ef305825de0a884e9088e57a34910ad2fcdaeb2d49d9a"} err="failed to get container status \"18b37c3b2535deee762ef305825de0a884e9088e57a34910ad2fcdaeb2d49d9a\": rpc error: code = NotFound desc = could not find container \"18b37c3b2535deee762ef305825de0a884e9088e57a34910ad2fcdaeb2d49d9a\": container with ID starting with 18b37c3b2535deee762ef305825de0a884e9088e57a34910ad2fcdaeb2d49d9a not found: ID does not exist"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.990465 4580 scope.go:117] "RemoveContainer" containerID="8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.990942 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b"} err="failed to get container status \"8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b\": rpc error: code = NotFound desc = could not find container \"8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b\": container with ID starting with 8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b not found: ID does not exist"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.990964 4580 scope.go:117] "RemoveContainer" containerID="06674059f95b3e6280ce8ca74d479316a4655ccb75db826a600f5cf78794eb06"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.991337 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06674059f95b3e6280ce8ca74d479316a4655ccb75db826a600f5cf78794eb06"} err="failed to get container status \"06674059f95b3e6280ce8ca74d479316a4655ccb75db826a600f5cf78794eb06\": rpc error: code = NotFound desc = could not find container \"06674059f95b3e6280ce8ca74d479316a4655ccb75db826a600f5cf78794eb06\": container with ID starting with 06674059f95b3e6280ce8ca74d479316a4655ccb75db826a600f5cf78794eb06 not found: ID does not exist"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.991387 4580 scope.go:117] "RemoveContainer" containerID="20f47854f29c7f82bcbae567770052204b7fa2c092168c57ef54e14218812b98"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.991791 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20f47854f29c7f82bcbae567770052204b7fa2c092168c57ef54e14218812b98"} err="failed to get container status \"20f47854f29c7f82bcbae567770052204b7fa2c092168c57ef54e14218812b98\": rpc error: code = NotFound desc = could not find container \"20f47854f29c7f82bcbae567770052204b7fa2c092168c57ef54e14218812b98\": container with ID starting with 20f47854f29c7f82bcbae567770052204b7fa2c092168c57ef54e14218812b98 not found: ID does not exist"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.991841 4580 scope.go:117] "RemoveContainer" containerID="00ff7f6b5ad3d1798e88f127c9bf71095fcbdfcf8f4338afa385717f1564ebf5"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.992263 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00ff7f6b5ad3d1798e88f127c9bf71095fcbdfcf8f4338afa385717f1564ebf5"} err="failed to get container status \"00ff7f6b5ad3d1798e88f127c9bf71095fcbdfcf8f4338afa385717f1564ebf5\": rpc error: code = NotFound desc = could not find container \"00ff7f6b5ad3d1798e88f127c9bf71095fcbdfcf8f4338afa385717f1564ebf5\": container with ID starting with 00ff7f6b5ad3d1798e88f127c9bf71095fcbdfcf8f4338afa385717f1564ebf5 not found: ID does not exist"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.992289 4580 scope.go:117] "RemoveContainer" containerID="fc26f2fe9c241fc3ede61426abd140792056fe45e03192531431303ac9669685"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.992730 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc26f2fe9c241fc3ede61426abd140792056fe45e03192531431303ac9669685"} err="failed to get container status \"fc26f2fe9c241fc3ede61426abd140792056fe45e03192531431303ac9669685\": rpc error: code = NotFound desc = could not find container \"fc26f2fe9c241fc3ede61426abd140792056fe45e03192531431303ac9669685\": container with ID starting with fc26f2fe9c241fc3ede61426abd140792056fe45e03192531431303ac9669685 not found: ID does not exist"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.992758 4580 scope.go:117] "RemoveContainer" containerID="381c313bb77deef21772fc32104aec4c0325e3493c641e2bf615bd897e58c71a"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.993055 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"381c313bb77deef21772fc32104aec4c0325e3493c641e2bf615bd897e58c71a"} err="failed to get container status \"381c313bb77deef21772fc32104aec4c0325e3493c641e2bf615bd897e58c71a\": rpc error: code = NotFound desc = could not find container \"381c313bb77deef21772fc32104aec4c0325e3493c641e2bf615bd897e58c71a\": container with ID starting with 381c313bb77deef21772fc32104aec4c0325e3493c641e2bf615bd897e58c71a not found: ID does not exist"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.993137 4580 scope.go:117] "RemoveContainer" containerID="57fdd89443f292661ae2a8f73016f4a7f2889c08ffebd55d67ada2590b4344db"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.993995 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57fdd89443f292661ae2a8f73016f4a7f2889c08ffebd55d67ada2590b4344db"} err="failed to get container status \"57fdd89443f292661ae2a8f73016f4a7f2889c08ffebd55d67ada2590b4344db\": rpc error: code = NotFound desc = could not find container \"57fdd89443f292661ae2a8f73016f4a7f2889c08ffebd55d67ada2590b4344db\": container with ID starting with 57fdd89443f292661ae2a8f73016f4a7f2889c08ffebd55d67ada2590b4344db not found: ID does not exist"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.994048 4580 scope.go:117] "RemoveContainer" containerID="4fac5585e690495e9f154b99e6a05f94dd617a57d0826867644b56df00697b9a"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.994468 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fac5585e690495e9f154b99e6a05f94dd617a57d0826867644b56df00697b9a"} err="failed to get container status \"4fac5585e690495e9f154b99e6a05f94dd617a57d0826867644b56df00697b9a\": rpc error: code = NotFound desc = could not find container \"4fac5585e690495e9f154b99e6a05f94dd617a57d0826867644b56df00697b9a\": container with ID starting with 4fac5585e690495e9f154b99e6a05f94dd617a57d0826867644b56df00697b9a not found: ID does not exist"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.994489 4580 scope.go:117] "RemoveContainer" containerID="34ac8df759fbebae467ffd8c178ca19221cefd5f3c1aa999cd23e5d1e53a6187"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.994890 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34ac8df759fbebae467ffd8c178ca19221cefd5f3c1aa999cd23e5d1e53a6187"} err="failed to get container status \"34ac8df759fbebae467ffd8c178ca19221cefd5f3c1aa999cd23e5d1e53a6187\": rpc error: code = NotFound desc = could not find container \"34ac8df759fbebae467ffd8c178ca19221cefd5f3c1aa999cd23e5d1e53a6187\": container with ID starting with 34ac8df759fbebae467ffd8c178ca19221cefd5f3c1aa999cd23e5d1e53a6187 not found: ID does not exist"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.994906 4580 scope.go:117] "RemoveContainer" containerID="18b37c3b2535deee762ef305825de0a884e9088e57a34910ad2fcdaeb2d49d9a"
Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.995355 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18b37c3b2535deee762ef305825de0a884e9088e57a34910ad2fcdaeb2d49d9a"} err="failed to get container status \"18b37c3b2535deee762ef305825de0a884e9088e57a34910ad2fcdaeb2d49d9a\": rpc error: code = NotFound desc = could not find container \"18b37c3b2535deee762ef305825de0a884e9088e57a34910ad2fcdaeb2d49d9a\": container with ID starting with 18b37c3b2535deee762ef305825de0a884e9088e57a34910ad2fcdaeb2d49d9a not found: ID does not
exist" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.995372 4580 scope.go:117] "RemoveContainer" containerID="8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b" Jan 12 13:15:26 crc kubenswrapper[4580]: I0112 13:15:26.995676 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b"} err="failed to get container status \"8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b\": rpc error: code = NotFound desc = could not find container \"8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b\": container with ID starting with 8ea8f8c492e0c30d171b9b05aa00966402c80f973de31557a1e13e16eb0c447b not found: ID does not exist" Jan 12 13:15:27 crc kubenswrapper[4580]: I0112 13:15:27.293215 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd4e0810-eddb-47f5-a7dc-beed7b545112" path="/var/lib/kubelet/pods/fd4e0810-eddb-47f5-a7dc-beed7b545112/volumes" Jan 12 13:15:27 crc kubenswrapper[4580]: I0112 13:15:27.818924 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nnz5s_c8f39bcc-5a25-4746-988b-2251fd1be8c9/kube-multus/2.log" Jan 12 13:15:27 crc kubenswrapper[4580]: I0112 13:15:27.823975 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" event={"ID":"e281c805-e75c-43ce-a686-638f1b681c9a","Type":"ContainerStarted","Data":"6fdb3d5b0c8dfdbf3af301ae431e8416c6d0c06b4d957d503c1c67e306eb2633"} Jan 12 13:15:27 crc kubenswrapper[4580]: I0112 13:15:27.824046 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" event={"ID":"e281c805-e75c-43ce-a686-638f1b681c9a","Type":"ContainerStarted","Data":"58f0dbd088a9b93ac2902501224cce1a7413e4fcd85815263216b8a6d09e9869"} Jan 12 13:15:27 crc kubenswrapper[4580]: I0112 13:15:27.824059 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" event={"ID":"e281c805-e75c-43ce-a686-638f1b681c9a","Type":"ContainerStarted","Data":"ca1f4baa1145d0f869d1f1da39b3dc3b2de2c90f9b21a0e842d90cb4e354cdc9"} Jan 12 13:15:27 crc kubenswrapper[4580]: I0112 13:15:27.824071 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" event={"ID":"e281c805-e75c-43ce-a686-638f1b681c9a","Type":"ContainerStarted","Data":"8046a006376692659f755badb2ce513a93c17ab2567eb619a359d1186a70c3bb"} Jan 12 13:15:27 crc kubenswrapper[4580]: I0112 13:15:27.824081 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" event={"ID":"e281c805-e75c-43ce-a686-638f1b681c9a","Type":"ContainerStarted","Data":"6b7c7649611d2a15266b64cfba9455c1591ef6e75e6811eadf5c2e21cc08e07d"} Jan 12 13:15:27 crc kubenswrapper[4580]: I0112 13:15:27.824095 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" event={"ID":"e281c805-e75c-43ce-a686-638f1b681c9a","Type":"ContainerStarted","Data":"d00eee39a222142c96c150270a1255d5a79612b221a12d5ada71c963aa703cc2"} Jan 12 13:15:29 crc kubenswrapper[4580]: I0112 13:15:29.838763 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" event={"ID":"e281c805-e75c-43ce-a686-638f1b681c9a","Type":"ContainerStarted","Data":"9d1f285fac1802f1a1a023c902359b24e4fa62f36573157e306fa720e3ba850f"} Jan 12 13:15:31 crc kubenswrapper[4580]: I0112 13:15:31.856436 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" event={"ID":"e281c805-e75c-43ce-a686-638f1b681c9a","Type":"ContainerStarted","Data":"9e6a20c92671fece106692aa2a2091a9c44f9c24bf8371db81850bcb6b1b4aae"} Jan 12 13:15:31 crc kubenswrapper[4580]: I0112 13:15:31.857199 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" Jan 12 13:15:31 
crc kubenswrapper[4580]: I0112 13:15:31.857218 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" Jan 12 13:15:31 crc kubenswrapper[4580]: I0112 13:15:31.857228 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" Jan 12 13:15:31 crc kubenswrapper[4580]: I0112 13:15:31.890745 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" podStartSLOduration=5.890729562 podStartE2EDuration="5.890729562s" podCreationTimestamp="2026-01-12 13:15:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:15:31.886720812 +0000 UTC m=+530.930939502" watchObservedRunningTime="2026-01-12 13:15:31.890729562 +0000 UTC m=+530.934948252" Jan 12 13:15:31 crc kubenswrapper[4580]: I0112 13:15:31.891728 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" Jan 12 13:15:31 crc kubenswrapper[4580]: I0112 13:15:31.893892 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" Jan 12 13:15:39 crc kubenswrapper[4580]: I0112 13:15:39.281926 4580 scope.go:117] "RemoveContainer" containerID="7e42cabcc8a0320fd9f67cb6f070b5827db98797bcde87f1d01d047fc0ed0086" Jan 12 13:15:39 crc kubenswrapper[4580]: E0112 13:15:39.282677 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-nnz5s_openshift-multus(c8f39bcc-5a25-4746-988b-2251fd1be8c9)\"" pod="openshift-multus/multus-nnz5s" podUID="c8f39bcc-5a25-4746-988b-2251fd1be8c9" Jan 12 13:15:51 crc kubenswrapper[4580]: I0112 13:15:51.283122 4580 scope.go:117] "RemoveContainer" 
containerID="7e42cabcc8a0320fd9f67cb6f070b5827db98797bcde87f1d01d047fc0ed0086" Jan 12 13:15:51 crc kubenswrapper[4580]: I0112 13:15:51.964652 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nnz5s_c8f39bcc-5a25-4746-988b-2251fd1be8c9/kube-multus/2.log" Jan 12 13:15:51 crc kubenswrapper[4580]: I0112 13:15:51.964999 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-nnz5s" event={"ID":"c8f39bcc-5a25-4746-988b-2251fd1be8c9","Type":"ContainerStarted","Data":"5dbbae0f18d933906d2c1f682c7e375fcac33b09421850b181082e4651d9af5d"} Jan 12 13:15:55 crc kubenswrapper[4580]: I0112 13:15:55.120350 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa82mfpx"] Jan 12 13:15:55 crc kubenswrapper[4580]: I0112 13:15:55.121666 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa82mfpx" Jan 12 13:15:55 crc kubenswrapper[4580]: I0112 13:15:55.124230 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 12 13:15:55 crc kubenswrapper[4580]: I0112 13:15:55.131948 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa82mfpx"] Jan 12 13:15:55 crc kubenswrapper[4580]: I0112 13:15:55.189558 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0089b37b-5f6c-4719-98a0-169570a8cfa6-util\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa82mfpx\" (UID: \"0089b37b-5f6c-4719-98a0-169570a8cfa6\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa82mfpx" Jan 12 13:15:55 crc kubenswrapper[4580]: I0112 13:15:55.189618 4580 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0089b37b-5f6c-4719-98a0-169570a8cfa6-bundle\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa82mfpx\" (UID: \"0089b37b-5f6c-4719-98a0-169570a8cfa6\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa82mfpx" Jan 12 13:15:55 crc kubenswrapper[4580]: I0112 13:15:55.189655 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds5vl\" (UniqueName: \"kubernetes.io/projected/0089b37b-5f6c-4719-98a0-169570a8cfa6-kube-api-access-ds5vl\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa82mfpx\" (UID: \"0089b37b-5f6c-4719-98a0-169570a8cfa6\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa82mfpx" Jan 12 13:15:55 crc kubenswrapper[4580]: I0112 13:15:55.290604 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0089b37b-5f6c-4719-98a0-169570a8cfa6-util\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa82mfpx\" (UID: \"0089b37b-5f6c-4719-98a0-169570a8cfa6\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa82mfpx" Jan 12 13:15:55 crc kubenswrapper[4580]: I0112 13:15:55.290652 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0089b37b-5f6c-4719-98a0-169570a8cfa6-bundle\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa82mfpx\" (UID: \"0089b37b-5f6c-4719-98a0-169570a8cfa6\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa82mfpx" Jan 12 13:15:55 crc kubenswrapper[4580]: I0112 13:15:55.290681 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ds5vl\" (UniqueName: 
\"kubernetes.io/projected/0089b37b-5f6c-4719-98a0-169570a8cfa6-kube-api-access-ds5vl\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa82mfpx\" (UID: \"0089b37b-5f6c-4719-98a0-169570a8cfa6\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa82mfpx" Jan 12 13:15:55 crc kubenswrapper[4580]: I0112 13:15:55.291323 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0089b37b-5f6c-4719-98a0-169570a8cfa6-util\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa82mfpx\" (UID: \"0089b37b-5f6c-4719-98a0-169570a8cfa6\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa82mfpx" Jan 12 13:15:55 crc kubenswrapper[4580]: I0112 13:15:55.291476 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0089b37b-5f6c-4719-98a0-169570a8cfa6-bundle\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa82mfpx\" (UID: \"0089b37b-5f6c-4719-98a0-169570a8cfa6\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa82mfpx" Jan 12 13:15:55 crc kubenswrapper[4580]: I0112 13:15:55.307322 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ds5vl\" (UniqueName: \"kubernetes.io/projected/0089b37b-5f6c-4719-98a0-169570a8cfa6-kube-api-access-ds5vl\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa82mfpx\" (UID: \"0089b37b-5f6c-4719-98a0-169570a8cfa6\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa82mfpx" Jan 12 13:15:55 crc kubenswrapper[4580]: I0112 13:15:55.437248 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa82mfpx" Jan 12 13:15:55 crc kubenswrapper[4580]: I0112 13:15:55.591293 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa82mfpx"] Jan 12 13:15:55 crc kubenswrapper[4580]: I0112 13:15:55.984076 4580 generic.go:334] "Generic (PLEG): container finished" podID="0089b37b-5f6c-4719-98a0-169570a8cfa6" containerID="71b397d29104e49894fa4fbb0a29e55b5d2d9b4b00f5b1841774abb05d4960ad" exitCode=0 Jan 12 13:15:55 crc kubenswrapper[4580]: I0112 13:15:55.984139 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa82mfpx" event={"ID":"0089b37b-5f6c-4719-98a0-169570a8cfa6","Type":"ContainerDied","Data":"71b397d29104e49894fa4fbb0a29e55b5d2d9b4b00f5b1841774abb05d4960ad"} Jan 12 13:15:55 crc kubenswrapper[4580]: I0112 13:15:55.984338 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa82mfpx" event={"ID":"0089b37b-5f6c-4719-98a0-169570a8cfa6","Type":"ContainerStarted","Data":"0cd205a99586c3447d75ccd5ec915c8882ee6fc2b67448dca75a8e59023a1417"} Jan 12 13:15:56 crc kubenswrapper[4580]: I0112 13:15:56.519446 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9pb6d" Jan 12 13:15:57 crc kubenswrapper[4580]: I0112 13:15:57.997625 4580 generic.go:334] "Generic (PLEG): container finished" podID="0089b37b-5f6c-4719-98a0-169570a8cfa6" containerID="39bcd6e39af445d2077bd2d2eb754c72e482c9b3141f7b0988cf92ac990b3746" exitCode=0 Jan 12 13:15:57 crc kubenswrapper[4580]: I0112 13:15:57.997694 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa82mfpx" 
event={"ID":"0089b37b-5f6c-4719-98a0-169570a8cfa6","Type":"ContainerDied","Data":"39bcd6e39af445d2077bd2d2eb754c72e482c9b3141f7b0988cf92ac990b3746"} Jan 12 13:15:59 crc kubenswrapper[4580]: I0112 13:15:59.005968 4580 generic.go:334] "Generic (PLEG): container finished" podID="0089b37b-5f6c-4719-98a0-169570a8cfa6" containerID="bd2b90697759d27b82900938aaec1ac93d73e6600b98ae8a19e4e3ae7cd5cad6" exitCode=0 Jan 12 13:15:59 crc kubenswrapper[4580]: I0112 13:15:59.006244 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa82mfpx" event={"ID":"0089b37b-5f6c-4719-98a0-169570a8cfa6","Type":"ContainerDied","Data":"bd2b90697759d27b82900938aaec1ac93d73e6600b98ae8a19e4e3ae7cd5cad6"} Jan 12 13:16:00 crc kubenswrapper[4580]: I0112 13:16:00.214333 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa82mfpx" Jan 12 13:16:00 crc kubenswrapper[4580]: I0112 13:16:00.244944 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ds5vl\" (UniqueName: \"kubernetes.io/projected/0089b37b-5f6c-4719-98a0-169570a8cfa6-kube-api-access-ds5vl\") pod \"0089b37b-5f6c-4719-98a0-169570a8cfa6\" (UID: \"0089b37b-5f6c-4719-98a0-169570a8cfa6\") " Jan 12 13:16:00 crc kubenswrapper[4580]: I0112 13:16:00.245001 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0089b37b-5f6c-4719-98a0-169570a8cfa6-bundle\") pod \"0089b37b-5f6c-4719-98a0-169570a8cfa6\" (UID: \"0089b37b-5f6c-4719-98a0-169570a8cfa6\") " Jan 12 13:16:00 crc kubenswrapper[4580]: I0112 13:16:00.245028 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0089b37b-5f6c-4719-98a0-169570a8cfa6-util\") pod \"0089b37b-5f6c-4719-98a0-169570a8cfa6\" (UID: 
\"0089b37b-5f6c-4719-98a0-169570a8cfa6\") " Jan 12 13:16:00 crc kubenswrapper[4580]: I0112 13:16:00.248081 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0089b37b-5f6c-4719-98a0-169570a8cfa6-bundle" (OuterVolumeSpecName: "bundle") pod "0089b37b-5f6c-4719-98a0-169570a8cfa6" (UID: "0089b37b-5f6c-4719-98a0-169570a8cfa6"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 12 13:16:00 crc kubenswrapper[4580]: I0112 13:16:00.249637 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0089b37b-5f6c-4719-98a0-169570a8cfa6-kube-api-access-ds5vl" (OuterVolumeSpecName: "kube-api-access-ds5vl") pod "0089b37b-5f6c-4719-98a0-169570a8cfa6" (UID: "0089b37b-5f6c-4719-98a0-169570a8cfa6"). InnerVolumeSpecName "kube-api-access-ds5vl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:16:00 crc kubenswrapper[4580]: I0112 13:16:00.255479 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0089b37b-5f6c-4719-98a0-169570a8cfa6-util" (OuterVolumeSpecName: "util") pod "0089b37b-5f6c-4719-98a0-169570a8cfa6" (UID: "0089b37b-5f6c-4719-98a0-169570a8cfa6"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 12 13:16:00 crc kubenswrapper[4580]: I0112 13:16:00.346124 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ds5vl\" (UniqueName: \"kubernetes.io/projected/0089b37b-5f6c-4719-98a0-169570a8cfa6-kube-api-access-ds5vl\") on node \"crc\" DevicePath \"\"" Jan 12 13:16:00 crc kubenswrapper[4580]: I0112 13:16:00.346168 4580 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0089b37b-5f6c-4719-98a0-169570a8cfa6-bundle\") on node \"crc\" DevicePath \"\"" Jan 12 13:16:00 crc kubenswrapper[4580]: I0112 13:16:00.346181 4580 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0089b37b-5f6c-4719-98a0-169570a8cfa6-util\") on node \"crc\" DevicePath \"\"" Jan 12 13:16:01 crc kubenswrapper[4580]: I0112 13:16:01.018540 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa82mfpx" event={"ID":"0089b37b-5f6c-4719-98a0-169570a8cfa6","Type":"ContainerDied","Data":"0cd205a99586c3447d75ccd5ec915c8882ee6fc2b67448dca75a8e59023a1417"} Jan 12 13:16:01 crc kubenswrapper[4580]: I0112 13:16:01.018838 4580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0cd205a99586c3447d75ccd5ec915c8882ee6fc2b67448dca75a8e59023a1417" Jan 12 13:16:01 crc kubenswrapper[4580]: I0112 13:16:01.018631 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa82mfpx" Jan 12 13:16:03 crc kubenswrapper[4580]: I0112 13:16:03.006032 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-6769fb99d-p62jb"] Jan 12 13:16:03 crc kubenswrapper[4580]: E0112 13:16:03.006282 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0089b37b-5f6c-4719-98a0-169570a8cfa6" containerName="util" Jan 12 13:16:03 crc kubenswrapper[4580]: I0112 13:16:03.006299 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="0089b37b-5f6c-4719-98a0-169570a8cfa6" containerName="util" Jan 12 13:16:03 crc kubenswrapper[4580]: E0112 13:16:03.006313 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0089b37b-5f6c-4719-98a0-169570a8cfa6" containerName="pull" Jan 12 13:16:03 crc kubenswrapper[4580]: I0112 13:16:03.006318 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="0089b37b-5f6c-4719-98a0-169570a8cfa6" containerName="pull" Jan 12 13:16:03 crc kubenswrapper[4580]: E0112 13:16:03.006326 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0089b37b-5f6c-4719-98a0-169570a8cfa6" containerName="extract" Jan 12 13:16:03 crc kubenswrapper[4580]: I0112 13:16:03.006333 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="0089b37b-5f6c-4719-98a0-169570a8cfa6" containerName="extract" Jan 12 13:16:03 crc kubenswrapper[4580]: I0112 13:16:03.006433 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="0089b37b-5f6c-4719-98a0-169570a8cfa6" containerName="extract" Jan 12 13:16:03 crc kubenswrapper[4580]: I0112 13:16:03.006829 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-6769fb99d-p62jb" Jan 12 13:16:03 crc kubenswrapper[4580]: W0112 13:16:03.008384 4580 reflector.go:561] object-"openshift-nmstate"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-nmstate": no relationship found between node 'crc' and this object Jan 12 13:16:03 crc kubenswrapper[4580]: E0112 13:16:03.008427 4580 reflector.go:158] "Unhandled Error" err="object-\"openshift-nmstate\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-nmstate\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 12 13:16:03 crc kubenswrapper[4580]: W0112 13:16:03.008499 4580 reflector.go:561] object-"openshift-nmstate"/"nmstate-operator-dockercfg-khqmk": failed to list *v1.Secret: secrets "nmstate-operator-dockercfg-khqmk" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-nmstate": no relationship found between node 'crc' and this object Jan 12 13:16:03 crc kubenswrapper[4580]: E0112 13:16:03.008582 4580 reflector.go:158] "Unhandled Error" err="object-\"openshift-nmstate\"/\"nmstate-operator-dockercfg-khqmk\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"nmstate-operator-dockercfg-khqmk\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-nmstate\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 12 13:16:03 crc kubenswrapper[4580]: I0112 13:16:03.008857 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 12 
13:16:03 crc kubenswrapper[4580]: I0112 13:16:03.020394 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-6769fb99d-p62jb"] Jan 12 13:16:03 crc kubenswrapper[4580]: I0112 13:16:03.074240 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzq6s\" (UniqueName: \"kubernetes.io/projected/49e54acb-8939-4c86-b9a4-42741a3356ac-kube-api-access-qzq6s\") pod \"nmstate-operator-6769fb99d-p62jb\" (UID: \"49e54acb-8939-4c86-b9a4-42741a3356ac\") " pod="openshift-nmstate/nmstate-operator-6769fb99d-p62jb" Jan 12 13:16:03 crc kubenswrapper[4580]: I0112 13:16:03.175516 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzq6s\" (UniqueName: \"kubernetes.io/projected/49e54acb-8939-4c86-b9a4-42741a3356ac-kube-api-access-qzq6s\") pod \"nmstate-operator-6769fb99d-p62jb\" (UID: \"49e54acb-8939-4c86-b9a4-42741a3356ac\") " pod="openshift-nmstate/nmstate-operator-6769fb99d-p62jb" Jan 12 13:16:03 crc kubenswrapper[4580]: I0112 13:16:03.861915 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 12 13:16:03 crc kubenswrapper[4580]: I0112 13:16:03.874349 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzq6s\" (UniqueName: \"kubernetes.io/projected/49e54acb-8939-4c86-b9a4-42741a3356ac-kube-api-access-qzq6s\") pod \"nmstate-operator-6769fb99d-p62jb\" (UID: \"49e54acb-8939-4c86-b9a4-42741a3356ac\") " pod="openshift-nmstate/nmstate-operator-6769fb99d-p62jb" Jan 12 13:16:04 crc kubenswrapper[4580]: I0112 13:16:04.155009 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-khqmk" Jan 12 13:16:04 crc kubenswrapper[4580]: I0112 13:16:04.162510 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-6769fb99d-p62jb" Jan 12 13:16:04 crc kubenswrapper[4580]: I0112 13:16:04.330395 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-6769fb99d-p62jb"] Jan 12 13:16:05 crc kubenswrapper[4580]: I0112 13:16:05.040000 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-6769fb99d-p62jb" event={"ID":"49e54acb-8939-4c86-b9a4-42741a3356ac","Type":"ContainerStarted","Data":"b36d8f0814970226c42bbd67f39bc80cf1cd40585c49bff7e8e9e5e47ca627b9"} Jan 12 13:16:07 crc kubenswrapper[4580]: I0112 13:16:07.049942 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-6769fb99d-p62jb" event={"ID":"49e54acb-8939-4c86-b9a4-42741a3356ac","Type":"ContainerStarted","Data":"078f1ec4b63eb4ee71cc945680953a9b4e2159dfa828c072bcff7746ed33e3c6"} Jan 12 13:16:07 crc kubenswrapper[4580]: I0112 13:16:07.064745 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-6769fb99d-p62jb" podStartSLOduration=2.48842761 podStartE2EDuration="5.06473167s" podCreationTimestamp="2026-01-12 13:16:02 +0000 UTC" firstStartedPulling="2026-01-12 13:16:04.329526855 +0000 UTC m=+563.373745545" lastFinishedPulling="2026-01-12 13:16:06.905830915 +0000 UTC m=+565.950049605" observedRunningTime="2026-01-12 13:16:07.061461969 +0000 UTC m=+566.105680659" watchObservedRunningTime="2026-01-12 13:16:07.06473167 +0000 UTC m=+566.108950360" Jan 12 13:16:07 crc kubenswrapper[4580]: I0112 13:16:07.826503 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-7f7f7578db-t97w5"] Jan 12 13:16:07 crc kubenswrapper[4580]: I0112 13:16:07.827714 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-t97w5" Jan 12 13:16:07 crc kubenswrapper[4580]: I0112 13:16:07.829318 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-28jg4" Jan 12 13:16:07 crc kubenswrapper[4580]: I0112 13:16:07.836227 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-f8fb84555-b4qbk"] Jan 12 13:16:07 crc kubenswrapper[4580]: I0112 13:16:07.837007 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-f8fb84555-b4qbk" Jan 12 13:16:07 crc kubenswrapper[4580]: I0112 13:16:07.838399 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 12 13:16:07 crc kubenswrapper[4580]: I0112 13:16:07.839484 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f7f7578db-t97w5"] Jan 12 13:16:07 crc kubenswrapper[4580]: I0112 13:16:07.862937 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-f8fb84555-b4qbk"] Jan 12 13:16:07 crc kubenswrapper[4580]: I0112 13:16:07.881707 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-66q7w"] Jan 12 13:16:07 crc kubenswrapper[4580]: I0112 13:16:07.883931 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-66q7w" Jan 12 13:16:07 crc kubenswrapper[4580]: I0112 13:16:07.931481 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpl8z\" (UniqueName: \"kubernetes.io/projected/a7c37982-a0fd-4f9d-950a-ec589bb9753c-kube-api-access-tpl8z\") pod \"nmstate-metrics-7f7f7578db-t97w5\" (UID: \"a7c37982-a0fd-4f9d-950a-ec589bb9753c\") " pod="openshift-nmstate/nmstate-metrics-7f7f7578db-t97w5" Jan 12 13:16:07 crc kubenswrapper[4580]: I0112 13:16:07.938165 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6ff7998486-sngpg"] Jan 12 13:16:07 crc kubenswrapper[4580]: I0112 13:16:07.938992 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-sngpg" Jan 12 13:16:07 crc kubenswrapper[4580]: I0112 13:16:07.944728 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6ff7998486-sngpg"] Jan 12 13:16:07 crc kubenswrapper[4580]: I0112 13:16:07.947525 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-w6vxw" Jan 12 13:16:07 crc kubenswrapper[4580]: I0112 13:16:07.947549 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 12 13:16:07 crc kubenswrapper[4580]: I0112 13:16:07.947719 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 12 13:16:08 crc kubenswrapper[4580]: I0112 13:16:08.033236 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpf99\" (UniqueName: \"kubernetes.io/projected/4ce8457b-77a4-4703-b3e8-2a929d02d38d-kube-api-access-mpf99\") pod \"nmstate-console-plugin-6ff7998486-sngpg\" (UID: \"4ce8457b-77a4-4703-b3e8-2a929d02d38d\") " 
pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-sngpg" Jan 12 13:16:08 crc kubenswrapper[4580]: I0112 13:16:08.033292 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95xhk\" (UniqueName: \"kubernetes.io/projected/714937bd-e28b-4368-8f23-c141e40ea81f-kube-api-access-95xhk\") pod \"nmstate-handler-66q7w\" (UID: \"714937bd-e28b-4368-8f23-c141e40ea81f\") " pod="openshift-nmstate/nmstate-handler-66q7w" Jan 12 13:16:08 crc kubenswrapper[4580]: I0112 13:16:08.033316 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/4ce8457b-77a4-4703-b3e8-2a929d02d38d-nginx-conf\") pod \"nmstate-console-plugin-6ff7998486-sngpg\" (UID: \"4ce8457b-77a4-4703-b3e8-2a929d02d38d\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-sngpg" Jan 12 13:16:08 crc kubenswrapper[4580]: I0112 13:16:08.033335 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/714937bd-e28b-4368-8f23-c141e40ea81f-ovs-socket\") pod \"nmstate-handler-66q7w\" (UID: \"714937bd-e28b-4368-8f23-c141e40ea81f\") " pod="openshift-nmstate/nmstate-handler-66q7w" Jan 12 13:16:08 crc kubenswrapper[4580]: I0112 13:16:08.033376 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/00b7df68-abb5-4b70-b6ef-1495cb7a4725-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-b4qbk\" (UID: \"00b7df68-abb5-4b70-b6ef-1495cb7a4725\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-b4qbk" Jan 12 13:16:08 crc kubenswrapper[4580]: I0112 13:16:08.033394 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/714937bd-e28b-4368-8f23-c141e40ea81f-dbus-socket\") pod 
\"nmstate-handler-66q7w\" (UID: \"714937bd-e28b-4368-8f23-c141e40ea81f\") " pod="openshift-nmstate/nmstate-handler-66q7w" Jan 12 13:16:08 crc kubenswrapper[4580]: I0112 13:16:08.033414 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/714937bd-e28b-4368-8f23-c141e40ea81f-nmstate-lock\") pod \"nmstate-handler-66q7w\" (UID: \"714937bd-e28b-4368-8f23-c141e40ea81f\") " pod="openshift-nmstate/nmstate-handler-66q7w" Jan 12 13:16:08 crc kubenswrapper[4580]: I0112 13:16:08.033445 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpl8z\" (UniqueName: \"kubernetes.io/projected/a7c37982-a0fd-4f9d-950a-ec589bb9753c-kube-api-access-tpl8z\") pod \"nmstate-metrics-7f7f7578db-t97w5\" (UID: \"a7c37982-a0fd-4f9d-950a-ec589bb9753c\") " pod="openshift-nmstate/nmstate-metrics-7f7f7578db-t97w5" Jan 12 13:16:08 crc kubenswrapper[4580]: I0112 13:16:08.033481 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/4ce8457b-77a4-4703-b3e8-2a929d02d38d-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-sngpg\" (UID: \"4ce8457b-77a4-4703-b3e8-2a929d02d38d\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-sngpg" Jan 12 13:16:08 crc kubenswrapper[4580]: I0112 13:16:08.033502 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mc8kt\" (UniqueName: \"kubernetes.io/projected/00b7df68-abb5-4b70-b6ef-1495cb7a4725-kube-api-access-mc8kt\") pod \"nmstate-webhook-f8fb84555-b4qbk\" (UID: \"00b7df68-abb5-4b70-b6ef-1495cb7a4725\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-b4qbk" Jan 12 13:16:08 crc kubenswrapper[4580]: I0112 13:16:08.054234 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpl8z\" (UniqueName: 
\"kubernetes.io/projected/a7c37982-a0fd-4f9d-950a-ec589bb9753c-kube-api-access-tpl8z\") pod \"nmstate-metrics-7f7f7578db-t97w5\" (UID: \"a7c37982-a0fd-4f9d-950a-ec589bb9753c\") " pod="openshift-nmstate/nmstate-metrics-7f7f7578db-t97w5" Jan 12 13:16:08 crc kubenswrapper[4580]: I0112 13:16:08.104895 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5c7595d455-fjm6w"] Jan 12 13:16:08 crc kubenswrapper[4580]: I0112 13:16:08.105558 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5c7595d455-fjm6w" Jan 12 13:16:08 crc kubenswrapper[4580]: I0112 13:16:08.120874 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5c7595d455-fjm6w"] Jan 12 13:16:08 crc kubenswrapper[4580]: I0112 13:16:08.134322 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpf99\" (UniqueName: \"kubernetes.io/projected/4ce8457b-77a4-4703-b3e8-2a929d02d38d-kube-api-access-mpf99\") pod \"nmstate-console-plugin-6ff7998486-sngpg\" (UID: \"4ce8457b-77a4-4703-b3e8-2a929d02d38d\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-sngpg" Jan 12 13:16:08 crc kubenswrapper[4580]: I0112 13:16:08.134477 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95xhk\" (UniqueName: \"kubernetes.io/projected/714937bd-e28b-4368-8f23-c141e40ea81f-kube-api-access-95xhk\") pod \"nmstate-handler-66q7w\" (UID: \"714937bd-e28b-4368-8f23-c141e40ea81f\") " pod="openshift-nmstate/nmstate-handler-66q7w" Jan 12 13:16:08 crc kubenswrapper[4580]: I0112 13:16:08.134507 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/4ce8457b-77a4-4703-b3e8-2a929d02d38d-nginx-conf\") pod \"nmstate-console-plugin-6ff7998486-sngpg\" (UID: \"4ce8457b-77a4-4703-b3e8-2a929d02d38d\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-sngpg" 
Jan 12 13:16:08 crc kubenswrapper[4580]: I0112 13:16:08.134527 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/714937bd-e28b-4368-8f23-c141e40ea81f-ovs-socket\") pod \"nmstate-handler-66q7w\" (UID: \"714937bd-e28b-4368-8f23-c141e40ea81f\") " pod="openshift-nmstate/nmstate-handler-66q7w" Jan 12 13:16:08 crc kubenswrapper[4580]: I0112 13:16:08.134553 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/00b7df68-abb5-4b70-b6ef-1495cb7a4725-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-b4qbk\" (UID: \"00b7df68-abb5-4b70-b6ef-1495cb7a4725\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-b4qbk" Jan 12 13:16:08 crc kubenswrapper[4580]: I0112 13:16:08.134571 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/714937bd-e28b-4368-8f23-c141e40ea81f-dbus-socket\") pod \"nmstate-handler-66q7w\" (UID: \"714937bd-e28b-4368-8f23-c141e40ea81f\") " pod="openshift-nmstate/nmstate-handler-66q7w" Jan 12 13:16:08 crc kubenswrapper[4580]: I0112 13:16:08.134588 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/714937bd-e28b-4368-8f23-c141e40ea81f-nmstate-lock\") pod \"nmstate-handler-66q7w\" (UID: \"714937bd-e28b-4368-8f23-c141e40ea81f\") " pod="openshift-nmstate/nmstate-handler-66q7w" Jan 12 13:16:08 crc kubenswrapper[4580]: I0112 13:16:08.134627 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/4ce8457b-77a4-4703-b3e8-2a929d02d38d-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-sngpg\" (UID: \"4ce8457b-77a4-4703-b3e8-2a929d02d38d\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-sngpg" Jan 12 13:16:08 crc kubenswrapper[4580]: I0112 
13:16:08.134655 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mc8kt\" (UniqueName: \"kubernetes.io/projected/00b7df68-abb5-4b70-b6ef-1495cb7a4725-kube-api-access-mc8kt\") pod \"nmstate-webhook-f8fb84555-b4qbk\" (UID: \"00b7df68-abb5-4b70-b6ef-1495cb7a4725\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-b4qbk" Jan 12 13:16:08 crc kubenswrapper[4580]: I0112 13:16:08.134807 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/714937bd-e28b-4368-8f23-c141e40ea81f-ovs-socket\") pod \"nmstate-handler-66q7w\" (UID: \"714937bd-e28b-4368-8f23-c141e40ea81f\") " pod="openshift-nmstate/nmstate-handler-66q7w" Jan 12 13:16:08 crc kubenswrapper[4580]: I0112 13:16:08.135082 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/714937bd-e28b-4368-8f23-c141e40ea81f-dbus-socket\") pod \"nmstate-handler-66q7w\" (UID: \"714937bd-e28b-4368-8f23-c141e40ea81f\") " pod="openshift-nmstate/nmstate-handler-66q7w" Jan 12 13:16:08 crc kubenswrapper[4580]: I0112 13:16:08.135211 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/714937bd-e28b-4368-8f23-c141e40ea81f-nmstate-lock\") pod \"nmstate-handler-66q7w\" (UID: \"714937bd-e28b-4368-8f23-c141e40ea81f\") " pod="openshift-nmstate/nmstate-handler-66q7w" Jan 12 13:16:08 crc kubenswrapper[4580]: I0112 13:16:08.135306 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/4ce8457b-77a4-4703-b3e8-2a929d02d38d-nginx-conf\") pod \"nmstate-console-plugin-6ff7998486-sngpg\" (UID: \"4ce8457b-77a4-4703-b3e8-2a929d02d38d\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-sngpg" Jan 12 13:16:08 crc kubenswrapper[4580]: I0112 13:16:08.137570 4580 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/4ce8457b-77a4-4703-b3e8-2a929d02d38d-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-sngpg\" (UID: \"4ce8457b-77a4-4703-b3e8-2a929d02d38d\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-sngpg" Jan 12 13:16:08 crc kubenswrapper[4580]: I0112 13:16:08.141609 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/00b7df68-abb5-4b70-b6ef-1495cb7a4725-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-b4qbk\" (UID: \"00b7df68-abb5-4b70-b6ef-1495cb7a4725\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-b4qbk" Jan 12 13:16:08 crc kubenswrapper[4580]: I0112 13:16:08.146147 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-t97w5" Jan 12 13:16:08 crc kubenswrapper[4580]: I0112 13:16:08.153632 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpf99\" (UniqueName: \"kubernetes.io/projected/4ce8457b-77a4-4703-b3e8-2a929d02d38d-kube-api-access-mpf99\") pod \"nmstate-console-plugin-6ff7998486-sngpg\" (UID: \"4ce8457b-77a4-4703-b3e8-2a929d02d38d\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-sngpg" Jan 12 13:16:08 crc kubenswrapper[4580]: I0112 13:16:08.154658 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95xhk\" (UniqueName: \"kubernetes.io/projected/714937bd-e28b-4368-8f23-c141e40ea81f-kube-api-access-95xhk\") pod \"nmstate-handler-66q7w\" (UID: \"714937bd-e28b-4368-8f23-c141e40ea81f\") " pod="openshift-nmstate/nmstate-handler-66q7w" Jan 12 13:16:08 crc kubenswrapper[4580]: I0112 13:16:08.159374 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mc8kt\" (UniqueName: \"kubernetes.io/projected/00b7df68-abb5-4b70-b6ef-1495cb7a4725-kube-api-access-mc8kt\") pod 
\"nmstate-webhook-f8fb84555-b4qbk\" (UID: \"00b7df68-abb5-4b70-b6ef-1495cb7a4725\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-b4qbk" Jan 12 13:16:08 crc kubenswrapper[4580]: I0112 13:16:08.201129 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-66q7w" Jan 12 13:16:08 crc kubenswrapper[4580]: W0112 13:16:08.226385 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod714937bd_e28b_4368_8f23_c141e40ea81f.slice/crio-c8621bd55657f7bf119ac246c4a5515b45c05a8cb9956eeb85920bced32d84f3 WatchSource:0}: Error finding container c8621bd55657f7bf119ac246c4a5515b45c05a8cb9956eeb85920bced32d84f3: Status 404 returned error can't find the container with id c8621bd55657f7bf119ac246c4a5515b45c05a8cb9956eeb85920bced32d84f3 Jan 12 13:16:08 crc kubenswrapper[4580]: I0112 13:16:08.237462 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bdc9fcdd-ddd4-4ef3-a82f-16eeab99921b-console-oauth-config\") pod \"console-5c7595d455-fjm6w\" (UID: \"bdc9fcdd-ddd4-4ef3-a82f-16eeab99921b\") " pod="openshift-console/console-5c7595d455-fjm6w" Jan 12 13:16:08 crc kubenswrapper[4580]: I0112 13:16:08.237534 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bdc9fcdd-ddd4-4ef3-a82f-16eeab99921b-trusted-ca-bundle\") pod \"console-5c7595d455-fjm6w\" (UID: \"bdc9fcdd-ddd4-4ef3-a82f-16eeab99921b\") " pod="openshift-console/console-5c7595d455-fjm6w" Jan 12 13:16:08 crc kubenswrapper[4580]: I0112 13:16:08.237616 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bdc9fcdd-ddd4-4ef3-a82f-16eeab99921b-console-serving-cert\") pod 
\"console-5c7595d455-fjm6w\" (UID: \"bdc9fcdd-ddd4-4ef3-a82f-16eeab99921b\") " pod="openshift-console/console-5c7595d455-fjm6w" Jan 12 13:16:08 crc kubenswrapper[4580]: I0112 13:16:08.237697 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcpwd\" (UniqueName: \"kubernetes.io/projected/bdc9fcdd-ddd4-4ef3-a82f-16eeab99921b-kube-api-access-dcpwd\") pod \"console-5c7595d455-fjm6w\" (UID: \"bdc9fcdd-ddd4-4ef3-a82f-16eeab99921b\") " pod="openshift-console/console-5c7595d455-fjm6w" Jan 12 13:16:08 crc kubenswrapper[4580]: I0112 13:16:08.237738 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bdc9fcdd-ddd4-4ef3-a82f-16eeab99921b-console-config\") pod \"console-5c7595d455-fjm6w\" (UID: \"bdc9fcdd-ddd4-4ef3-a82f-16eeab99921b\") " pod="openshift-console/console-5c7595d455-fjm6w" Jan 12 13:16:08 crc kubenswrapper[4580]: I0112 13:16:08.237785 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bdc9fcdd-ddd4-4ef3-a82f-16eeab99921b-oauth-serving-cert\") pod \"console-5c7595d455-fjm6w\" (UID: \"bdc9fcdd-ddd4-4ef3-a82f-16eeab99921b\") " pod="openshift-console/console-5c7595d455-fjm6w" Jan 12 13:16:08 crc kubenswrapper[4580]: I0112 13:16:08.237841 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bdc9fcdd-ddd4-4ef3-a82f-16eeab99921b-service-ca\") pod \"console-5c7595d455-fjm6w\" (UID: \"bdc9fcdd-ddd4-4ef3-a82f-16eeab99921b\") " pod="openshift-console/console-5c7595d455-fjm6w" Jan 12 13:16:08 crc kubenswrapper[4580]: I0112 13:16:08.250578 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-sngpg" Jan 12 13:16:08 crc kubenswrapper[4580]: I0112 13:16:08.333944 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f7f7578db-t97w5"] Jan 12 13:16:08 crc kubenswrapper[4580]: I0112 13:16:08.339686 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcpwd\" (UniqueName: \"kubernetes.io/projected/bdc9fcdd-ddd4-4ef3-a82f-16eeab99921b-kube-api-access-dcpwd\") pod \"console-5c7595d455-fjm6w\" (UID: \"bdc9fcdd-ddd4-4ef3-a82f-16eeab99921b\") " pod="openshift-console/console-5c7595d455-fjm6w" Jan 12 13:16:08 crc kubenswrapper[4580]: I0112 13:16:08.339733 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bdc9fcdd-ddd4-4ef3-a82f-16eeab99921b-console-config\") pod \"console-5c7595d455-fjm6w\" (UID: \"bdc9fcdd-ddd4-4ef3-a82f-16eeab99921b\") " pod="openshift-console/console-5c7595d455-fjm6w" Jan 12 13:16:08 crc kubenswrapper[4580]: I0112 13:16:08.339765 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bdc9fcdd-ddd4-4ef3-a82f-16eeab99921b-oauth-serving-cert\") pod \"console-5c7595d455-fjm6w\" (UID: \"bdc9fcdd-ddd4-4ef3-a82f-16eeab99921b\") " pod="openshift-console/console-5c7595d455-fjm6w" Jan 12 13:16:08 crc kubenswrapper[4580]: I0112 13:16:08.339791 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bdc9fcdd-ddd4-4ef3-a82f-16eeab99921b-service-ca\") pod \"console-5c7595d455-fjm6w\" (UID: \"bdc9fcdd-ddd4-4ef3-a82f-16eeab99921b\") " pod="openshift-console/console-5c7595d455-fjm6w" Jan 12 13:16:08 crc kubenswrapper[4580]: I0112 13:16:08.339813 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bdc9fcdd-ddd4-4ef3-a82f-16eeab99921b-console-oauth-config\") pod \"console-5c7595d455-fjm6w\" (UID: \"bdc9fcdd-ddd4-4ef3-a82f-16eeab99921b\") " pod="openshift-console/console-5c7595d455-fjm6w" Jan 12 13:16:08 crc kubenswrapper[4580]: I0112 13:16:08.339837 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bdc9fcdd-ddd4-4ef3-a82f-16eeab99921b-trusted-ca-bundle\") pod \"console-5c7595d455-fjm6w\" (UID: \"bdc9fcdd-ddd4-4ef3-a82f-16eeab99921b\") " pod="openshift-console/console-5c7595d455-fjm6w" Jan 12 13:16:08 crc kubenswrapper[4580]: I0112 13:16:08.339870 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bdc9fcdd-ddd4-4ef3-a82f-16eeab99921b-console-serving-cert\") pod \"console-5c7595d455-fjm6w\" (UID: \"bdc9fcdd-ddd4-4ef3-a82f-16eeab99921b\") " pod="openshift-console/console-5c7595d455-fjm6w" Jan 12 13:16:08 crc kubenswrapper[4580]: I0112 13:16:08.343404 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bdc9fcdd-ddd4-4ef3-a82f-16eeab99921b-console-config\") pod \"console-5c7595d455-fjm6w\" (UID: \"bdc9fcdd-ddd4-4ef3-a82f-16eeab99921b\") " pod="openshift-console/console-5c7595d455-fjm6w" Jan 12 13:16:08 crc kubenswrapper[4580]: I0112 13:16:08.344035 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bdc9fcdd-ddd4-4ef3-a82f-16eeab99921b-console-serving-cert\") pod \"console-5c7595d455-fjm6w\" (UID: \"bdc9fcdd-ddd4-4ef3-a82f-16eeab99921b\") " pod="openshift-console/console-5c7595d455-fjm6w" Jan 12 13:16:08 crc kubenswrapper[4580]: I0112 13:16:08.344338 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/bdc9fcdd-ddd4-4ef3-a82f-16eeab99921b-service-ca\") pod \"console-5c7595d455-fjm6w\" (UID: \"bdc9fcdd-ddd4-4ef3-a82f-16eeab99921b\") " pod="openshift-console/console-5c7595d455-fjm6w" Jan 12 13:16:08 crc kubenswrapper[4580]: I0112 13:16:08.346468 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bdc9fcdd-ddd4-4ef3-a82f-16eeab99921b-trusted-ca-bundle\") pod \"console-5c7595d455-fjm6w\" (UID: \"bdc9fcdd-ddd4-4ef3-a82f-16eeab99921b\") " pod="openshift-console/console-5c7595d455-fjm6w" Jan 12 13:16:08 crc kubenswrapper[4580]: I0112 13:16:08.346638 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bdc9fcdd-ddd4-4ef3-a82f-16eeab99921b-oauth-serving-cert\") pod \"console-5c7595d455-fjm6w\" (UID: \"bdc9fcdd-ddd4-4ef3-a82f-16eeab99921b\") " pod="openshift-console/console-5c7595d455-fjm6w" Jan 12 13:16:08 crc kubenswrapper[4580]: I0112 13:16:08.346811 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bdc9fcdd-ddd4-4ef3-a82f-16eeab99921b-console-oauth-config\") pod \"console-5c7595d455-fjm6w\" (UID: \"bdc9fcdd-ddd4-4ef3-a82f-16eeab99921b\") " pod="openshift-console/console-5c7595d455-fjm6w" Jan 12 13:16:08 crc kubenswrapper[4580]: I0112 13:16:08.358010 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcpwd\" (UniqueName: \"kubernetes.io/projected/bdc9fcdd-ddd4-4ef3-a82f-16eeab99921b-kube-api-access-dcpwd\") pod \"console-5c7595d455-fjm6w\" (UID: \"bdc9fcdd-ddd4-4ef3-a82f-16eeab99921b\") " pod="openshift-console/console-5c7595d455-fjm6w" Jan 12 13:16:08 crc kubenswrapper[4580]: I0112 13:16:08.416674 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5c7595d455-fjm6w" Jan 12 13:16:08 crc kubenswrapper[4580]: I0112 13:16:08.451851 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-f8fb84555-b4qbk" Jan 12 13:16:08 crc kubenswrapper[4580]: I0112 13:16:08.596220 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5c7595d455-fjm6w"] Jan 12 13:16:08 crc kubenswrapper[4580]: W0112 13:16:08.614180 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbdc9fcdd_ddd4_4ef3_a82f_16eeab99921b.slice/crio-47fee72b3540fbf54a6faffb1761c2284a6cc5b59bc86b60d13f2abb818a65bd WatchSource:0}: Error finding container 47fee72b3540fbf54a6faffb1761c2284a6cc5b59bc86b60d13f2abb818a65bd: Status 404 returned error can't find the container with id 47fee72b3540fbf54a6faffb1761c2284a6cc5b59bc86b60d13f2abb818a65bd Jan 12 13:16:08 crc kubenswrapper[4580]: I0112 13:16:08.622767 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6ff7998486-sngpg"] Jan 12 13:16:08 crc kubenswrapper[4580]: W0112 13:16:08.625360 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ce8457b_77a4_4703_b3e8_2a929d02d38d.slice/crio-41d5f6f83f869f143f0e1d1a1ab16c1804d4035d4d3a0b7b8c4b1ea57ef10314 WatchSource:0}: Error finding container 41d5f6f83f869f143f0e1d1a1ab16c1804d4035d4d3a0b7b8c4b1ea57ef10314: Status 404 returned error can't find the container with id 41d5f6f83f869f143f0e1d1a1ab16c1804d4035d4d3a0b7b8c4b1ea57ef10314 Jan 12 13:16:08 crc kubenswrapper[4580]: I0112 13:16:08.833242 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-f8fb84555-b4qbk"] Jan 12 13:16:08 crc kubenswrapper[4580]: W0112 13:16:08.836230 4580 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00b7df68_abb5_4b70_b6ef_1495cb7a4725.slice/crio-42dee1d572ea2db58d51437755a811dbd6cc1381cafb823529581f8a06bd31ec WatchSource:0}: Error finding container 42dee1d572ea2db58d51437755a811dbd6cc1381cafb823529581f8a06bd31ec: Status 404 returned error can't find the container with id 42dee1d572ea2db58d51437755a811dbd6cc1381cafb823529581f8a06bd31ec Jan 12 13:16:09 crc kubenswrapper[4580]: I0112 13:16:09.065240 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-f8fb84555-b4qbk" event={"ID":"00b7df68-abb5-4b70-b6ef-1495cb7a4725","Type":"ContainerStarted","Data":"42dee1d572ea2db58d51437755a811dbd6cc1381cafb823529581f8a06bd31ec"} Jan 12 13:16:09 crc kubenswrapper[4580]: I0112 13:16:09.066570 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-66q7w" event={"ID":"714937bd-e28b-4368-8f23-c141e40ea81f","Type":"ContainerStarted","Data":"c8621bd55657f7bf119ac246c4a5515b45c05a8cb9956eeb85920bced32d84f3"} Jan 12 13:16:09 crc kubenswrapper[4580]: I0112 13:16:09.068352 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-sngpg" event={"ID":"4ce8457b-77a4-4703-b3e8-2a929d02d38d","Type":"ContainerStarted","Data":"41d5f6f83f869f143f0e1d1a1ab16c1804d4035d4d3a0b7b8c4b1ea57ef10314"} Jan 12 13:16:09 crc kubenswrapper[4580]: I0112 13:16:09.070322 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5c7595d455-fjm6w" event={"ID":"bdc9fcdd-ddd4-4ef3-a82f-16eeab99921b","Type":"ContainerStarted","Data":"09a23ef45ef1cd6b0afb8160ee3915bce319b241a57076cd9ab78c4bcd433a45"} Jan 12 13:16:09 crc kubenswrapper[4580]: I0112 13:16:09.070357 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5c7595d455-fjm6w" 
event={"ID":"bdc9fcdd-ddd4-4ef3-a82f-16eeab99921b","Type":"ContainerStarted","Data":"47fee72b3540fbf54a6faffb1761c2284a6cc5b59bc86b60d13f2abb818a65bd"} Jan 12 13:16:09 crc kubenswrapper[4580]: I0112 13:16:09.072620 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-t97w5" event={"ID":"a7c37982-a0fd-4f9d-950a-ec589bb9753c","Type":"ContainerStarted","Data":"c623359982615d59e0b699407f80bbc6bf3b17c35e13febb86c54868f35089dd"} Jan 12 13:16:09 crc kubenswrapper[4580]: I0112 13:16:09.090125 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5c7595d455-fjm6w" podStartSLOduration=1.090077828 podStartE2EDuration="1.090077828s" podCreationTimestamp="2026-01-12 13:16:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:16:09.084014242 +0000 UTC m=+568.128232942" watchObservedRunningTime="2026-01-12 13:16:09.090077828 +0000 UTC m=+568.134296518" Jan 12 13:16:12 crc kubenswrapper[4580]: I0112 13:16:12.095796 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-f8fb84555-b4qbk" event={"ID":"00b7df68-abb5-4b70-b6ef-1495cb7a4725","Type":"ContainerStarted","Data":"9689f226494fce4df8e937a508fe986d29d16668ce8bbc870e47559274b971e9"} Jan 12 13:16:12 crc kubenswrapper[4580]: I0112 13:16:12.096277 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-f8fb84555-b4qbk" Jan 12 13:16:12 crc kubenswrapper[4580]: I0112 13:16:12.099015 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-66q7w" event={"ID":"714937bd-e28b-4368-8f23-c141e40ea81f","Type":"ContainerStarted","Data":"fd7494210d488875b3fcf64fc45d2ba1f7f20399aa103c822ca1f821071e07a6"} Jan 12 13:16:12 crc kubenswrapper[4580]: I0112 13:16:12.099168 4580 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-66q7w" Jan 12 13:16:12 crc kubenswrapper[4580]: I0112 13:16:12.100792 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-sngpg" event={"ID":"4ce8457b-77a4-4703-b3e8-2a929d02d38d","Type":"ContainerStarted","Data":"5549305b5debdfc7a8173c4b6eb539231329c60b32741dacc2fa574cf7b3ce4d"} Jan 12 13:16:12 crc kubenswrapper[4580]: I0112 13:16:12.102562 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-t97w5" event={"ID":"a7c37982-a0fd-4f9d-950a-ec589bb9753c","Type":"ContainerStarted","Data":"06a49b7831446b5c9ef89c3a32932a43b6e2027507b1ae0d8ae746f53d3cfca1"} Jan 12 13:16:12 crc kubenswrapper[4580]: I0112 13:16:12.114797 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-f8fb84555-b4qbk" podStartSLOduration=2.779311776 podStartE2EDuration="5.114784448s" podCreationTimestamp="2026-01-12 13:16:07 +0000 UTC" firstStartedPulling="2026-01-12 13:16:08.83812531 +0000 UTC m=+567.882343999" lastFinishedPulling="2026-01-12 13:16:11.173597981 +0000 UTC m=+570.217816671" observedRunningTime="2026-01-12 13:16:12.113191882 +0000 UTC m=+571.157410573" watchObservedRunningTime="2026-01-12 13:16:12.114784448 +0000 UTC m=+571.159003139" Jan 12 13:16:12 crc kubenswrapper[4580]: I0112 13:16:12.130120 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-66q7w" podStartSLOduration=2.184659461 podStartE2EDuration="5.130088457s" podCreationTimestamp="2026-01-12 13:16:07 +0000 UTC" firstStartedPulling="2026-01-12 13:16:08.231165511 +0000 UTC m=+567.275384201" lastFinishedPulling="2026-01-12 13:16:11.176594508 +0000 UTC m=+570.220813197" observedRunningTime="2026-01-12 13:16:12.126620593 +0000 UTC m=+571.170839283" watchObservedRunningTime="2026-01-12 13:16:12.130088457 +0000 UTC m=+571.174307137" Jan 12 
13:16:12 crc kubenswrapper[4580]: I0112 13:16:12.140505 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-sngpg" podStartSLOduration=2.594774137 podStartE2EDuration="5.140483803s" podCreationTimestamp="2026-01-12 13:16:07 +0000 UTC" firstStartedPulling="2026-01-12 13:16:08.633275497 +0000 UTC m=+567.677494188" lastFinishedPulling="2026-01-12 13:16:11.178985165 +0000 UTC m=+570.223203854" observedRunningTime="2026-01-12 13:16:12.137182522 +0000 UTC m=+571.181401212" watchObservedRunningTime="2026-01-12 13:16:12.140483803 +0000 UTC m=+571.184702492" Jan 12 13:16:14 crc kubenswrapper[4580]: I0112 13:16:14.116932 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-t97w5" event={"ID":"a7c37982-a0fd-4f9d-950a-ec589bb9753c","Type":"ContainerStarted","Data":"25fa445eb36a2be34032c74fe796379673a5c9100e0ebeb5770f790444562878"} Jan 12 13:16:14 crc kubenswrapper[4580]: I0112 13:16:14.131081 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-t97w5" podStartSLOduration=2.113184616 podStartE2EDuration="7.131059432s" podCreationTimestamp="2026-01-12 13:16:07 +0000 UTC" firstStartedPulling="2026-01-12 13:16:08.347327969 +0000 UTC m=+567.391546659" lastFinishedPulling="2026-01-12 13:16:13.365202785 +0000 UTC m=+572.409421475" observedRunningTime="2026-01-12 13:16:14.129302958 +0000 UTC m=+573.173521648" watchObservedRunningTime="2026-01-12 13:16:14.131059432 +0000 UTC m=+573.175278123" Jan 12 13:16:16 crc kubenswrapper[4580]: I0112 13:16:16.949767 4580 patch_prober.go:28] interesting pod/machine-config-daemon-hdz6l container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 12 13:16:16 crc kubenswrapper[4580]: I0112 
13:16:16.949853 4580 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 12 13:16:18 crc kubenswrapper[4580]: I0112 13:16:18.226296 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-66q7w"
Jan 12 13:16:18 crc kubenswrapper[4580]: I0112 13:16:18.417787 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5c7595d455-fjm6w"
Jan 12 13:16:18 crc kubenswrapper[4580]: I0112 13:16:18.417854 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5c7595d455-fjm6w"
Jan 12 13:16:18 crc kubenswrapper[4580]: I0112 13:16:18.422873 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5c7595d455-fjm6w"
Jan 12 13:16:19 crc kubenswrapper[4580]: I0112 13:16:19.146880 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5c7595d455-fjm6w"
Jan 12 13:16:19 crc kubenswrapper[4580]: I0112 13:16:19.183917 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-5tdwv"]
Jan 12 13:16:28 crc kubenswrapper[4580]: I0112 13:16:28.457189 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-f8fb84555-b4qbk"
Jan 12 13:16:40 crc kubenswrapper[4580]: I0112 13:16:40.071949 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4qz866"]
Jan 12 13:16:40 crc kubenswrapper[4580]: I0112 13:16:40.073426 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4qz866"
Jan 12 13:16:40 crc kubenswrapper[4580]: I0112 13:16:40.075003 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Jan 12 13:16:40 crc kubenswrapper[4580]: I0112 13:16:40.078311 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4qz866"]
Jan 12 13:16:40 crc kubenswrapper[4580]: I0112 13:16:40.112565 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3b6ceadd-6368-43ec-9666-7dff30d5ee95-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4qz866\" (UID: \"3b6ceadd-6368-43ec-9666-7dff30d5ee95\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4qz866"
Jan 12 13:16:40 crc kubenswrapper[4580]: I0112 13:16:40.112635 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3b6ceadd-6368-43ec-9666-7dff30d5ee95-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4qz866\" (UID: \"3b6ceadd-6368-43ec-9666-7dff30d5ee95\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4qz866"
Jan 12 13:16:40 crc kubenswrapper[4580]: I0112 13:16:40.112703 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvbhh\" (UniqueName: \"kubernetes.io/projected/3b6ceadd-6368-43ec-9666-7dff30d5ee95-kube-api-access-lvbhh\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4qz866\" (UID: \"3b6ceadd-6368-43ec-9666-7dff30d5ee95\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4qz866"
Jan 12 13:16:40 crc kubenswrapper[4580]: I0112 13:16:40.214318 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3b6ceadd-6368-43ec-9666-7dff30d5ee95-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4qz866\" (UID: \"3b6ceadd-6368-43ec-9666-7dff30d5ee95\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4qz866"
Jan 12 13:16:40 crc kubenswrapper[4580]: I0112 13:16:40.214359 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3b6ceadd-6368-43ec-9666-7dff30d5ee95-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4qz866\" (UID: \"3b6ceadd-6368-43ec-9666-7dff30d5ee95\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4qz866"
Jan 12 13:16:40 crc kubenswrapper[4580]: I0112 13:16:40.214398 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvbhh\" (UniqueName: \"kubernetes.io/projected/3b6ceadd-6368-43ec-9666-7dff30d5ee95-kube-api-access-lvbhh\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4qz866\" (UID: \"3b6ceadd-6368-43ec-9666-7dff30d5ee95\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4qz866"
Jan 12 13:16:40 crc kubenswrapper[4580]: I0112 13:16:40.214905 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3b6ceadd-6368-43ec-9666-7dff30d5ee95-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4qz866\" (UID: \"3b6ceadd-6368-43ec-9666-7dff30d5ee95\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4qz866"
Jan 12 13:16:40 crc kubenswrapper[4580]: I0112 13:16:40.214963 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3b6ceadd-6368-43ec-9666-7dff30d5ee95-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4qz866\" (UID: \"3b6ceadd-6368-43ec-9666-7dff30d5ee95\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4qz866"
Jan 12 13:16:40 crc kubenswrapper[4580]: I0112 13:16:40.231608 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvbhh\" (UniqueName: \"kubernetes.io/projected/3b6ceadd-6368-43ec-9666-7dff30d5ee95-kube-api-access-lvbhh\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4qz866\" (UID: \"3b6ceadd-6368-43ec-9666-7dff30d5ee95\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4qz866"
Jan 12 13:16:40 crc kubenswrapper[4580]: I0112 13:16:40.389017 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4qz866"
Jan 12 13:16:40 crc kubenswrapper[4580]: I0112 13:16:40.766554 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4qz866"]
Jan 12 13:16:41 crc kubenswrapper[4580]: I0112 13:16:41.281054 4580 generic.go:334] "Generic (PLEG): container finished" podID="3b6ceadd-6368-43ec-9666-7dff30d5ee95" containerID="972b1b9772782ef2b693ec91e3dc1e37a011e999c0c33891234fda7ee8bf5b85" exitCode=0
Jan 12 13:16:41 crc kubenswrapper[4580]: I0112 13:16:41.289810 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4qz866" event={"ID":"3b6ceadd-6368-43ec-9666-7dff30d5ee95","Type":"ContainerDied","Data":"972b1b9772782ef2b693ec91e3dc1e37a011e999c0c33891234fda7ee8bf5b85"}
Jan 12 13:16:41 crc kubenswrapper[4580]: I0112 13:16:41.289861 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4qz866" event={"ID":"3b6ceadd-6368-43ec-9666-7dff30d5ee95","Type":"ContainerStarted","Data":"ebc1125d078f6ba6c01a5c4ebfa20a4ee115a47e713b8d57d2b5c061c602ea17"}
Jan 12 13:16:43 crc kubenswrapper[4580]: I0112 13:16:43.292513 4580 generic.go:334] "Generic (PLEG): container finished" podID="3b6ceadd-6368-43ec-9666-7dff30d5ee95" containerID="787bc69724fa5ceb7a26f51b6125196eea970b40c6de7f81191b7ee31f1b8063" exitCode=0
Jan 12 13:16:43 crc kubenswrapper[4580]: I0112 13:16:43.292611 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4qz866" event={"ID":"3b6ceadd-6368-43ec-9666-7dff30d5ee95","Type":"ContainerDied","Data":"787bc69724fa5ceb7a26f51b6125196eea970b40c6de7f81191b7ee31f1b8063"}
Jan 12 13:16:44 crc kubenswrapper[4580]: I0112 13:16:44.214033 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-5tdwv" podUID="90150eba-9b4f-485f-97c3-89d410cb5851" containerName="console" containerID="cri-o://50659715f6d9224d99b87ca926d1de0a57ccf32b667a794e75c2abfe4ddde7bd" gracePeriod=15
Jan 12 13:16:44 crc kubenswrapper[4580]: I0112 13:16:44.299764 4580 generic.go:334] "Generic (PLEG): container finished" podID="3b6ceadd-6368-43ec-9666-7dff30d5ee95" containerID="1789f59fb9ff616a1748194ddfda61a66fd275927a2c5487953483aa63c03a60" exitCode=0
Jan 12 13:16:44 crc kubenswrapper[4580]: I0112 13:16:44.299828 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4qz866" event={"ID":"3b6ceadd-6368-43ec-9666-7dff30d5ee95","Type":"ContainerDied","Data":"1789f59fb9ff616a1748194ddfda61a66fd275927a2c5487953483aa63c03a60"}
Jan 12 13:16:44 crc kubenswrapper[4580]: I0112 13:16:44.494941 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-5tdwv_90150eba-9b4f-485f-97c3-89d410cb5851/console/0.log"
Jan 12 13:16:44 crc kubenswrapper[4580]: I0112 13:16:44.494994 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-5tdwv"
Jan 12 13:16:44 crc kubenswrapper[4580]: I0112 13:16:44.561165 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/90150eba-9b4f-485f-97c3-89d410cb5851-console-oauth-config\") pod \"90150eba-9b4f-485f-97c3-89d410cb5851\" (UID: \"90150eba-9b4f-485f-97c3-89d410cb5851\") "
Jan 12 13:16:44 crc kubenswrapper[4580]: I0112 13:16:44.561218 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/90150eba-9b4f-485f-97c3-89d410cb5851-console-serving-cert\") pod \"90150eba-9b4f-485f-97c3-89d410cb5851\" (UID: \"90150eba-9b4f-485f-97c3-89d410cb5851\") "
Jan 12 13:16:44 crc kubenswrapper[4580]: I0112 13:16:44.561242 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/90150eba-9b4f-485f-97c3-89d410cb5851-oauth-serving-cert\") pod \"90150eba-9b4f-485f-97c3-89d410cb5851\" (UID: \"90150eba-9b4f-485f-97c3-89d410cb5851\") "
Jan 12 13:16:44 crc kubenswrapper[4580]: I0112 13:16:44.561270 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/90150eba-9b4f-485f-97c3-89d410cb5851-service-ca\") pod \"90150eba-9b4f-485f-97c3-89d410cb5851\" (UID: \"90150eba-9b4f-485f-97c3-89d410cb5851\") "
Jan 12 13:16:44 crc kubenswrapper[4580]: I0112 13:16:44.561288 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/90150eba-9b4f-485f-97c3-89d410cb5851-console-config\") pod \"90150eba-9b4f-485f-97c3-89d410cb5851\" (UID: \"90150eba-9b4f-485f-97c3-89d410cb5851\") "
Jan 12 13:16:44 crc kubenswrapper[4580]: I0112 13:16:44.561303 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbflp\" (UniqueName: \"kubernetes.io/projected/90150eba-9b4f-485f-97c3-89d410cb5851-kube-api-access-xbflp\") pod \"90150eba-9b4f-485f-97c3-89d410cb5851\" (UID: \"90150eba-9b4f-485f-97c3-89d410cb5851\") "
Jan 12 13:16:44 crc kubenswrapper[4580]: I0112 13:16:44.561318 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90150eba-9b4f-485f-97c3-89d410cb5851-trusted-ca-bundle\") pod \"90150eba-9b4f-485f-97c3-89d410cb5851\" (UID: \"90150eba-9b4f-485f-97c3-89d410cb5851\") "
Jan 12 13:16:44 crc kubenswrapper[4580]: I0112 13:16:44.562129 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90150eba-9b4f-485f-97c3-89d410cb5851-service-ca" (OuterVolumeSpecName: "service-ca") pod "90150eba-9b4f-485f-97c3-89d410cb5851" (UID: "90150eba-9b4f-485f-97c3-89d410cb5851"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:16:44 crc kubenswrapper[4580]: I0112 13:16:44.562166 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90150eba-9b4f-485f-97c3-89d410cb5851-console-config" (OuterVolumeSpecName: "console-config") pod "90150eba-9b4f-485f-97c3-89d410cb5851" (UID: "90150eba-9b4f-485f-97c3-89d410cb5851"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:16:44 crc kubenswrapper[4580]: I0112 13:16:44.562180 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90150eba-9b4f-485f-97c3-89d410cb5851-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "90150eba-9b4f-485f-97c3-89d410cb5851" (UID: "90150eba-9b4f-485f-97c3-89d410cb5851"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:16:44 crc kubenswrapper[4580]: I0112 13:16:44.562569 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90150eba-9b4f-485f-97c3-89d410cb5851-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "90150eba-9b4f-485f-97c3-89d410cb5851" (UID: "90150eba-9b4f-485f-97c3-89d410cb5851"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:16:44 crc kubenswrapper[4580]: I0112 13:16:44.576655 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90150eba-9b4f-485f-97c3-89d410cb5851-kube-api-access-xbflp" (OuterVolumeSpecName: "kube-api-access-xbflp") pod "90150eba-9b4f-485f-97c3-89d410cb5851" (UID: "90150eba-9b4f-485f-97c3-89d410cb5851"). InnerVolumeSpecName "kube-api-access-xbflp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 13:16:44 crc kubenswrapper[4580]: I0112 13:16:44.577187 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90150eba-9b4f-485f-97c3-89d410cb5851-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "90150eba-9b4f-485f-97c3-89d410cb5851" (UID: "90150eba-9b4f-485f-97c3-89d410cb5851"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:16:44 crc kubenswrapper[4580]: I0112 13:16:44.577775 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90150eba-9b4f-485f-97c3-89d410cb5851-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "90150eba-9b4f-485f-97c3-89d410cb5851" (UID: "90150eba-9b4f-485f-97c3-89d410cb5851"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:16:44 crc kubenswrapper[4580]: I0112 13:16:44.662815 4580 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/90150eba-9b4f-485f-97c3-89d410cb5851-console-oauth-config\") on node \"crc\" DevicePath \"\""
Jan 12 13:16:44 crc kubenswrapper[4580]: I0112 13:16:44.662853 4580 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/90150eba-9b4f-485f-97c3-89d410cb5851-console-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 12 13:16:44 crc kubenswrapper[4580]: I0112 13:16:44.662863 4580 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/90150eba-9b4f-485f-97c3-89d410cb5851-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 12 13:16:44 crc kubenswrapper[4580]: I0112 13:16:44.662874 4580 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/90150eba-9b4f-485f-97c3-89d410cb5851-service-ca\") on node \"crc\" DevicePath \"\""
Jan 12 13:16:44 crc kubenswrapper[4580]: I0112 13:16:44.662883 4580 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/90150eba-9b4f-485f-97c3-89d410cb5851-console-config\") on node \"crc\" DevicePath \"\""
Jan 12 13:16:44 crc kubenswrapper[4580]: I0112 13:16:44.662892 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbflp\" (UniqueName: \"kubernetes.io/projected/90150eba-9b4f-485f-97c3-89d410cb5851-kube-api-access-xbflp\") on node \"crc\" DevicePath \"\""
Jan 12 13:16:44 crc kubenswrapper[4580]: I0112 13:16:44.662902 4580 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90150eba-9b4f-485f-97c3-89d410cb5851-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 12 13:16:45 crc kubenswrapper[4580]: I0112 13:16:45.304935 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-5tdwv_90150eba-9b4f-485f-97c3-89d410cb5851/console/0.log"
Jan 12 13:16:45 crc kubenswrapper[4580]: I0112 13:16:45.304984 4580 generic.go:334] "Generic (PLEG): container finished" podID="90150eba-9b4f-485f-97c3-89d410cb5851" containerID="50659715f6d9224d99b87ca926d1de0a57ccf32b667a794e75c2abfe4ddde7bd" exitCode=2
Jan 12 13:16:45 crc kubenswrapper[4580]: I0112 13:16:45.305026 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-5tdwv" event={"ID":"90150eba-9b4f-485f-97c3-89d410cb5851","Type":"ContainerDied","Data":"50659715f6d9224d99b87ca926d1de0a57ccf32b667a794e75c2abfe4ddde7bd"}
Jan 12 13:16:45 crc kubenswrapper[4580]: I0112 13:16:45.305068 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-5tdwv" event={"ID":"90150eba-9b4f-485f-97c3-89d410cb5851","Type":"ContainerDied","Data":"b584e2aab43e65783d6aa1710d4056697d3a1c077fd756dabc1d53565b6ef7f6"}
Jan 12 13:16:45 crc kubenswrapper[4580]: I0112 13:16:45.305054 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-5tdwv"
Jan 12 13:16:45 crc kubenswrapper[4580]: I0112 13:16:45.305087 4580 scope.go:117] "RemoveContainer" containerID="50659715f6d9224d99b87ca926d1de0a57ccf32b667a794e75c2abfe4ddde7bd"
Jan 12 13:16:45 crc kubenswrapper[4580]: I0112 13:16:45.322542 4580 scope.go:117] "RemoveContainer" containerID="50659715f6d9224d99b87ca926d1de0a57ccf32b667a794e75c2abfe4ddde7bd"
Jan 12 13:16:45 crc kubenswrapper[4580]: E0112 13:16:45.323221 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50659715f6d9224d99b87ca926d1de0a57ccf32b667a794e75c2abfe4ddde7bd\": container with ID starting with 50659715f6d9224d99b87ca926d1de0a57ccf32b667a794e75c2abfe4ddde7bd not found: ID does not exist" containerID="50659715f6d9224d99b87ca926d1de0a57ccf32b667a794e75c2abfe4ddde7bd"
Jan 12 13:16:45 crc kubenswrapper[4580]: I0112 13:16:45.323269 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50659715f6d9224d99b87ca926d1de0a57ccf32b667a794e75c2abfe4ddde7bd"} err="failed to get container status \"50659715f6d9224d99b87ca926d1de0a57ccf32b667a794e75c2abfe4ddde7bd\": rpc error: code = NotFound desc = could not find container \"50659715f6d9224d99b87ca926d1de0a57ccf32b667a794e75c2abfe4ddde7bd\": container with ID starting with 50659715f6d9224d99b87ca926d1de0a57ccf32b667a794e75c2abfe4ddde7bd not found: ID does not exist"
Jan 12 13:16:45 crc kubenswrapper[4580]: I0112 13:16:45.325227 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-5tdwv"]
Jan 12 13:16:45 crc kubenswrapper[4580]: I0112 13:16:45.328446 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-5tdwv"]
Jan 12 13:16:45 crc kubenswrapper[4580]: I0112 13:16:45.483903 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4qz866"
Jan 12 13:16:45 crc kubenswrapper[4580]: I0112 13:16:45.577128 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3b6ceadd-6368-43ec-9666-7dff30d5ee95-util\") pod \"3b6ceadd-6368-43ec-9666-7dff30d5ee95\" (UID: \"3b6ceadd-6368-43ec-9666-7dff30d5ee95\") "
Jan 12 13:16:45 crc kubenswrapper[4580]: I0112 13:16:45.577227 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3b6ceadd-6368-43ec-9666-7dff30d5ee95-bundle\") pod \"3b6ceadd-6368-43ec-9666-7dff30d5ee95\" (UID: \"3b6ceadd-6368-43ec-9666-7dff30d5ee95\") "
Jan 12 13:16:45 crc kubenswrapper[4580]: I0112 13:16:45.577259 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvbhh\" (UniqueName: \"kubernetes.io/projected/3b6ceadd-6368-43ec-9666-7dff30d5ee95-kube-api-access-lvbhh\") pod \"3b6ceadd-6368-43ec-9666-7dff30d5ee95\" (UID: \"3b6ceadd-6368-43ec-9666-7dff30d5ee95\") "
Jan 12 13:16:45 crc kubenswrapper[4580]: I0112 13:16:45.578443 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b6ceadd-6368-43ec-9666-7dff30d5ee95-bundle" (OuterVolumeSpecName: "bundle") pod "3b6ceadd-6368-43ec-9666-7dff30d5ee95" (UID: "3b6ceadd-6368-43ec-9666-7dff30d5ee95"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 12 13:16:45 crc kubenswrapper[4580]: I0112 13:16:45.582008 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b6ceadd-6368-43ec-9666-7dff30d5ee95-kube-api-access-lvbhh" (OuterVolumeSpecName: "kube-api-access-lvbhh") pod "3b6ceadd-6368-43ec-9666-7dff30d5ee95" (UID: "3b6ceadd-6368-43ec-9666-7dff30d5ee95"). InnerVolumeSpecName "kube-api-access-lvbhh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 13:16:45 crc kubenswrapper[4580]: I0112 13:16:45.587345 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b6ceadd-6368-43ec-9666-7dff30d5ee95-util" (OuterVolumeSpecName: "util") pod "3b6ceadd-6368-43ec-9666-7dff30d5ee95" (UID: "3b6ceadd-6368-43ec-9666-7dff30d5ee95"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 12 13:16:45 crc kubenswrapper[4580]: I0112 13:16:45.678354 4580 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3b6ceadd-6368-43ec-9666-7dff30d5ee95-util\") on node \"crc\" DevicePath \"\""
Jan 12 13:16:45 crc kubenswrapper[4580]: I0112 13:16:45.678385 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvbhh\" (UniqueName: \"kubernetes.io/projected/3b6ceadd-6368-43ec-9666-7dff30d5ee95-kube-api-access-lvbhh\") on node \"crc\" DevicePath \"\""
Jan 12 13:16:45 crc kubenswrapper[4580]: I0112 13:16:45.678397 4580 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3b6ceadd-6368-43ec-9666-7dff30d5ee95-bundle\") on node \"crc\" DevicePath \"\""
Jan 12 13:16:46 crc kubenswrapper[4580]: I0112 13:16:46.313413 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4qz866" event={"ID":"3b6ceadd-6368-43ec-9666-7dff30d5ee95","Type":"ContainerDied","Data":"ebc1125d078f6ba6c01a5c4ebfa20a4ee115a47e713b8d57d2b5c061c602ea17"}
Jan 12 13:16:46 crc kubenswrapper[4580]: I0112 13:16:46.313437 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4qz866"
Jan 12 13:16:46 crc kubenswrapper[4580]: I0112 13:16:46.313457 4580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ebc1125d078f6ba6c01a5c4ebfa20a4ee115a47e713b8d57d2b5c061c602ea17"
Jan 12 13:16:46 crc kubenswrapper[4580]: I0112 13:16:46.949944 4580 patch_prober.go:28] interesting pod/machine-config-daemon-hdz6l container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 12 13:16:46 crc kubenswrapper[4580]: I0112 13:16:46.950331 4580 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 12 13:16:47 crc kubenswrapper[4580]: I0112 13:16:47.287623 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90150eba-9b4f-485f-97c3-89d410cb5851" path="/var/lib/kubelet/pods/90150eba-9b4f-485f-97c3-89d410cb5851/volumes"
Jan 12 13:16:54 crc kubenswrapper[4580]: I0112 13:16:54.595881 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-797bb7bf75-nrxgs"]
Jan 12 13:16:54 crc kubenswrapper[4580]: E0112 13:16:54.596667 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b6ceadd-6368-43ec-9666-7dff30d5ee95" containerName="pull"
Jan 12 13:16:54 crc kubenswrapper[4580]: I0112 13:16:54.596682 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b6ceadd-6368-43ec-9666-7dff30d5ee95" containerName="pull"
Jan 12 13:16:54 crc kubenswrapper[4580]: E0112 13:16:54.596693 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b6ceadd-6368-43ec-9666-7dff30d5ee95" containerName="extract"
Jan 12 13:16:54 crc kubenswrapper[4580]: I0112 13:16:54.596699 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b6ceadd-6368-43ec-9666-7dff30d5ee95" containerName="extract"
Jan 12 13:16:54 crc kubenswrapper[4580]: E0112 13:16:54.596708 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b6ceadd-6368-43ec-9666-7dff30d5ee95" containerName="util"
Jan 12 13:16:54 crc kubenswrapper[4580]: I0112 13:16:54.596714 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b6ceadd-6368-43ec-9666-7dff30d5ee95" containerName="util"
Jan 12 13:16:54 crc kubenswrapper[4580]: E0112 13:16:54.596720 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90150eba-9b4f-485f-97c3-89d410cb5851" containerName="console"
Jan 12 13:16:54 crc kubenswrapper[4580]: I0112 13:16:54.596725 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="90150eba-9b4f-485f-97c3-89d410cb5851" containerName="console"
Jan 12 13:16:54 crc kubenswrapper[4580]: I0112 13:16:54.596824 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b6ceadd-6368-43ec-9666-7dff30d5ee95" containerName="extract"
Jan 12 13:16:54 crc kubenswrapper[4580]: I0112 13:16:54.596836 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="90150eba-9b4f-485f-97c3-89d410cb5851" containerName="console"
Jan 12 13:16:54 crc kubenswrapper[4580]: I0112 13:16:54.597229 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-797bb7bf75-nrxgs"
Jan 12 13:16:54 crc kubenswrapper[4580]: I0112 13:16:54.600819 4580 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Jan 12 13:16:54 crc kubenswrapper[4580]: I0112 13:16:54.601296 4580 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Jan 12 13:16:54 crc kubenswrapper[4580]: I0112 13:16:54.601445 4580 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-v56gj"
Jan 12 13:16:54 crc kubenswrapper[4580]: I0112 13:16:54.601862 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Jan 12 13:16:54 crc kubenswrapper[4580]: I0112 13:16:54.601974 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Jan 12 13:16:54 crc kubenswrapper[4580]: I0112 13:16:54.613993 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-797bb7bf75-nrxgs"]
Jan 12 13:16:54 crc kubenswrapper[4580]: I0112 13:16:54.691875 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9e077054-5ff2-4f6f-a2bc-12a4b78a0c6b-webhook-cert\") pod \"metallb-operator-controller-manager-797bb7bf75-nrxgs\" (UID: \"9e077054-5ff2-4f6f-a2bc-12a4b78a0c6b\") " pod="metallb-system/metallb-operator-controller-manager-797bb7bf75-nrxgs"
Jan 12 13:16:54 crc kubenswrapper[4580]: I0112 13:16:54.691945 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wz4z\" (UniqueName: \"kubernetes.io/projected/9e077054-5ff2-4f6f-a2bc-12a4b78a0c6b-kube-api-access-8wz4z\") pod \"metallb-operator-controller-manager-797bb7bf75-nrxgs\" (UID: \"9e077054-5ff2-4f6f-a2bc-12a4b78a0c6b\") " pod="metallb-system/metallb-operator-controller-manager-797bb7bf75-nrxgs"
Jan 12 13:16:54 crc kubenswrapper[4580]: I0112 13:16:54.691982 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9e077054-5ff2-4f6f-a2bc-12a4b78a0c6b-apiservice-cert\") pod \"metallb-operator-controller-manager-797bb7bf75-nrxgs\" (UID: \"9e077054-5ff2-4f6f-a2bc-12a4b78a0c6b\") " pod="metallb-system/metallb-operator-controller-manager-797bb7bf75-nrxgs"
Jan 12 13:16:54 crc kubenswrapper[4580]: I0112 13:16:54.792995 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wz4z\" (UniqueName: \"kubernetes.io/projected/9e077054-5ff2-4f6f-a2bc-12a4b78a0c6b-kube-api-access-8wz4z\") pod \"metallb-operator-controller-manager-797bb7bf75-nrxgs\" (UID: \"9e077054-5ff2-4f6f-a2bc-12a4b78a0c6b\") " pod="metallb-system/metallb-operator-controller-manager-797bb7bf75-nrxgs"
Jan 12 13:16:54 crc kubenswrapper[4580]: I0112 13:16:54.793057 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9e077054-5ff2-4f6f-a2bc-12a4b78a0c6b-apiservice-cert\") pod \"metallb-operator-controller-manager-797bb7bf75-nrxgs\" (UID: \"9e077054-5ff2-4f6f-a2bc-12a4b78a0c6b\") " pod="metallb-system/metallb-operator-controller-manager-797bb7bf75-nrxgs"
Jan 12 13:16:54 crc kubenswrapper[4580]: I0112 13:16:54.793091 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9e077054-5ff2-4f6f-a2bc-12a4b78a0c6b-webhook-cert\") pod \"metallb-operator-controller-manager-797bb7bf75-nrxgs\" (UID: \"9e077054-5ff2-4f6f-a2bc-12a4b78a0c6b\") " pod="metallb-system/metallb-operator-controller-manager-797bb7bf75-nrxgs"
Jan 12 13:16:54 crc kubenswrapper[4580]: I0112 13:16:54.798876 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9e077054-5ff2-4f6f-a2bc-12a4b78a0c6b-webhook-cert\") pod \"metallb-operator-controller-manager-797bb7bf75-nrxgs\" (UID: \"9e077054-5ff2-4f6f-a2bc-12a4b78a0c6b\") " pod="metallb-system/metallb-operator-controller-manager-797bb7bf75-nrxgs"
Jan 12 13:16:54 crc kubenswrapper[4580]: I0112 13:16:54.798893 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9e077054-5ff2-4f6f-a2bc-12a4b78a0c6b-apiservice-cert\") pod \"metallb-operator-controller-manager-797bb7bf75-nrxgs\" (UID: \"9e077054-5ff2-4f6f-a2bc-12a4b78a0c6b\") " pod="metallb-system/metallb-operator-controller-manager-797bb7bf75-nrxgs"
Jan 12 13:16:54 crc kubenswrapper[4580]: I0112 13:16:54.815617 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wz4z\" (UniqueName: \"kubernetes.io/projected/9e077054-5ff2-4f6f-a2bc-12a4b78a0c6b-kube-api-access-8wz4z\") pod \"metallb-operator-controller-manager-797bb7bf75-nrxgs\" (UID: \"9e077054-5ff2-4f6f-a2bc-12a4b78a0c6b\") " pod="metallb-system/metallb-operator-controller-manager-797bb7bf75-nrxgs"
Jan 12 13:16:54 crc kubenswrapper[4580]: I0112 13:16:54.915133 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-797bb7bf75-nrxgs"
Jan 12 13:16:55 crc kubenswrapper[4580]: I0112 13:16:55.035436 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-66dd5b5c84-2pdlb"]
Jan 12 13:16:55 crc kubenswrapper[4580]: I0112 13:16:55.036325 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-66dd5b5c84-2pdlb"
Jan 12 13:16:55 crc kubenswrapper[4580]: I0112 13:16:55.039796 4580 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Jan 12 13:16:55 crc kubenswrapper[4580]: I0112 13:16:55.039971 4580 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Jan 12 13:16:55 crc kubenswrapper[4580]: I0112 13:16:55.040131 4580 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-cpphz"
Jan 12 13:16:55 crc kubenswrapper[4580]: I0112 13:16:55.056530 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-66dd5b5c84-2pdlb"]
Jan 12 13:16:55 crc kubenswrapper[4580]: I0112 13:16:55.134883 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-797bb7bf75-nrxgs"]
Jan 12 13:16:55 crc kubenswrapper[4580]: I0112 13:16:55.197843 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cc9a79e4-90bc-4e70-afac-8b20ec13504f-webhook-cert\") pod \"metallb-operator-webhook-server-66dd5b5c84-2pdlb\" (UID: \"cc9a79e4-90bc-4e70-afac-8b20ec13504f\") " pod="metallb-system/metallb-operator-webhook-server-66dd5b5c84-2pdlb"
Jan 12 13:16:55 crc kubenswrapper[4580]: I0112 13:16:55.197937 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmnzc\" (UniqueName: \"kubernetes.io/projected/cc9a79e4-90bc-4e70-afac-8b20ec13504f-kube-api-access-qmnzc\") pod \"metallb-operator-webhook-server-66dd5b5c84-2pdlb\" (UID: \"cc9a79e4-90bc-4e70-afac-8b20ec13504f\") " pod="metallb-system/metallb-operator-webhook-server-66dd5b5c84-2pdlb"
Jan 12 13:16:55 crc kubenswrapper[4580]: I0112 13:16:55.197971 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cc9a79e4-90bc-4e70-afac-8b20ec13504f-apiservice-cert\") pod \"metallb-operator-webhook-server-66dd5b5c84-2pdlb\" (UID: \"cc9a79e4-90bc-4e70-afac-8b20ec13504f\") " pod="metallb-system/metallb-operator-webhook-server-66dd5b5c84-2pdlb"
Jan 12 13:16:55 crc kubenswrapper[4580]: I0112 13:16:55.299661 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmnzc\" (UniqueName: \"kubernetes.io/projected/cc9a79e4-90bc-4e70-afac-8b20ec13504f-kube-api-access-qmnzc\") pod \"metallb-operator-webhook-server-66dd5b5c84-2pdlb\" (UID: \"cc9a79e4-90bc-4e70-afac-8b20ec13504f\") " pod="metallb-system/metallb-operator-webhook-server-66dd5b5c84-2pdlb"
Jan 12 13:16:55 crc kubenswrapper[4580]: I0112 13:16:55.299782 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cc9a79e4-90bc-4e70-afac-8b20ec13504f-apiservice-cert\") pod \"metallb-operator-webhook-server-66dd5b5c84-2pdlb\" (UID: \"cc9a79e4-90bc-4e70-afac-8b20ec13504f\") " pod="metallb-system/metallb-operator-webhook-server-66dd5b5c84-2pdlb"
Jan 12 13:16:55 crc kubenswrapper[4580]: I0112 13:16:55.299891 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cc9a79e4-90bc-4e70-afac-8b20ec13504f-webhook-cert\") pod \"metallb-operator-webhook-server-66dd5b5c84-2pdlb\" (UID: \"cc9a79e4-90bc-4e70-afac-8b20ec13504f\") " pod="metallb-system/metallb-operator-webhook-server-66dd5b5c84-2pdlb"
Jan 12 13:16:55 crc kubenswrapper[4580]: I0112 13:16:55.305190 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cc9a79e4-90bc-4e70-afac-8b20ec13504f-webhook-cert\") pod \"metallb-operator-webhook-server-66dd5b5c84-2pdlb\" (UID: \"cc9a79e4-90bc-4e70-afac-8b20ec13504f\") " pod="metallb-system/metallb-operator-webhook-server-66dd5b5c84-2pdlb"
Jan 12 13:16:55 crc kubenswrapper[4580]: I0112 13:16:55.305429 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cc9a79e4-90bc-4e70-afac-8b20ec13504f-apiservice-cert\") pod \"metallb-operator-webhook-server-66dd5b5c84-2pdlb\" (UID: \"cc9a79e4-90bc-4e70-afac-8b20ec13504f\") " pod="metallb-system/metallb-operator-webhook-server-66dd5b5c84-2pdlb"
Jan 12 13:16:55 crc kubenswrapper[4580]: I0112 13:16:55.318730 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmnzc\" (UniqueName: \"kubernetes.io/projected/cc9a79e4-90bc-4e70-afac-8b20ec13504f-kube-api-access-qmnzc\") pod \"metallb-operator-webhook-server-66dd5b5c84-2pdlb\" (UID: \"cc9a79e4-90bc-4e70-afac-8b20ec13504f\") " pod="metallb-system/metallb-operator-webhook-server-66dd5b5c84-2pdlb"
Jan 12 13:16:55 crc kubenswrapper[4580]: I0112 13:16:55.356323 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-797bb7bf75-nrxgs" event={"ID":"9e077054-5ff2-4f6f-a2bc-12a4b78a0c6b","Type":"ContainerStarted","Data":"43552f0bb131234c16dc444b1f1ca0d9984a13d68add471718eb024738d733cc"}
Jan 12 13:16:55 crc kubenswrapper[4580]: I0112 13:16:55.361781 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-66dd5b5c84-2pdlb"
Jan 12 13:16:55 crc kubenswrapper[4580]: I0112 13:16:55.742468 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-66dd5b5c84-2pdlb"]
Jan 12 13:16:55 crc kubenswrapper[4580]: W0112 13:16:55.750197 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcc9a79e4_90bc_4e70_afac_8b20ec13504f.slice/crio-73da33ccf9de13dc774f5f920bece23050f9308e55ab0cda1adf15e1fd2286b6 WatchSource:0}: Error finding container 73da33ccf9de13dc774f5f920bece23050f9308e55ab0cda1adf15e1fd2286b6: Status 404 returned error can't find the container with id 73da33ccf9de13dc774f5f920bece23050f9308e55ab0cda1adf15e1fd2286b6
Jan 12 13:16:56 crc kubenswrapper[4580]: I0112 13:16:56.363289 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-66dd5b5c84-2pdlb" event={"ID":"cc9a79e4-90bc-4e70-afac-8b20ec13504f","Type":"ContainerStarted","Data":"73da33ccf9de13dc774f5f920bece23050f9308e55ab0cda1adf15e1fd2286b6"}
Jan 12 13:16:58 crc kubenswrapper[4580]: I0112 13:16:58.380219 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-797bb7bf75-nrxgs" event={"ID":"9e077054-5ff2-4f6f-a2bc-12a4b78a0c6b","Type":"ContainerStarted","Data":"63aa79b3b9fd0e12476002fd305090aa826c0cbd8157842fc9792ef43e0ca648"}
Jan 12 13:16:58 crc kubenswrapper[4580]: I0112 13:16:58.381554 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-797bb7bf75-nrxgs"
Jan 12 13:16:58 crc kubenswrapper[4580]: I0112 13:16:58.396016 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-797bb7bf75-nrxgs" podStartSLOduration=1.3744642759999999 podStartE2EDuration="4.396002264s"
podCreationTimestamp="2026-01-12 13:16:54 +0000 UTC" firstStartedPulling="2026-01-12 13:16:55.144483589 +0000 UTC m=+614.188702280" lastFinishedPulling="2026-01-12 13:16:58.166021578 +0000 UTC m=+617.210240268" observedRunningTime="2026-01-12 13:16:58.393394624 +0000 UTC m=+617.437613314" watchObservedRunningTime="2026-01-12 13:16:58.396002264 +0000 UTC m=+617.440220944" Jan 12 13:17:01 crc kubenswrapper[4580]: I0112 13:17:01.399435 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-66dd5b5c84-2pdlb" event={"ID":"cc9a79e4-90bc-4e70-afac-8b20ec13504f","Type":"ContainerStarted","Data":"df69543f235dbaedb9c76bae91e0b8b169f063ae39ce506d42ff540d4a014f53"} Jan 12 13:17:01 crc kubenswrapper[4580]: I0112 13:17:01.400028 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-66dd5b5c84-2pdlb" Jan 12 13:17:01 crc kubenswrapper[4580]: I0112 13:17:01.415458 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-66dd5b5c84-2pdlb" podStartSLOduration=1.80041398 podStartE2EDuration="6.415446016s" podCreationTimestamp="2026-01-12 13:16:55 +0000 UTC" firstStartedPulling="2026-01-12 13:16:55.752018977 +0000 UTC m=+614.796237667" lastFinishedPulling="2026-01-12 13:17:00.367051013 +0000 UTC m=+619.411269703" observedRunningTime="2026-01-12 13:17:01.412416554 +0000 UTC m=+620.456635244" watchObservedRunningTime="2026-01-12 13:17:01.415446016 +0000 UTC m=+620.459664707" Jan 12 13:17:15 crc kubenswrapper[4580]: I0112 13:17:15.366612 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-66dd5b5c84-2pdlb" Jan 12 13:17:16 crc kubenswrapper[4580]: I0112 13:17:16.949486 4580 patch_prober.go:28] interesting pod/machine-config-daemon-hdz6l container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 12 13:17:16 crc kubenswrapper[4580]: I0112 13:17:16.949871 4580 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 12 13:17:16 crc kubenswrapper[4580]: I0112 13:17:16.949912 4580 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" Jan 12 13:17:16 crc kubenswrapper[4580]: I0112 13:17:16.950390 4580 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4e7364093541422d6527d483a8a4570a7b048dfd23774d35d5dc7c8fcdefe657"} pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 12 13:17:16 crc kubenswrapper[4580]: I0112 13:17:16.950441 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" containerName="machine-config-daemon" containerID="cri-o://4e7364093541422d6527d483a8a4570a7b048dfd23774d35d5dc7c8fcdefe657" gracePeriod=600 Jan 12 13:17:17 crc kubenswrapper[4580]: I0112 13:17:17.508081 4580 generic.go:334] "Generic (PLEG): container finished" podID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" containerID="4e7364093541422d6527d483a8a4570a7b048dfd23774d35d5dc7c8fcdefe657" exitCode=0 Jan 12 13:17:17 crc kubenswrapper[4580]: I0112 13:17:17.508141 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" 
event={"ID":"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b","Type":"ContainerDied","Data":"4e7364093541422d6527d483a8a4570a7b048dfd23774d35d5dc7c8fcdefe657"} Jan 12 13:17:17 crc kubenswrapper[4580]: I0112 13:17:17.508389 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" event={"ID":"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b","Type":"ContainerStarted","Data":"7850824be06012c20b6bb245ac92cc464dbe596b5ce9364073d6add3fc0a822e"} Jan 12 13:17:17 crc kubenswrapper[4580]: I0112 13:17:17.508413 4580 scope.go:117] "RemoveContainer" containerID="1689fbe54ea63924eb5436687ff3624dfc8f05694ffc76352754b1bc5a4e1401" Jan 12 13:17:34 crc kubenswrapper[4580]: I0112 13:17:34.917601 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-797bb7bf75-nrxgs" Jan 12 13:17:35 crc kubenswrapper[4580]: I0112 13:17:35.460491 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-2n6tl"] Jan 12 13:17:35 crc kubenswrapper[4580]: I0112 13:17:35.462914 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-2n6tl" Jan 12 13:17:35 crc kubenswrapper[4580]: I0112 13:17:35.466624 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-pkj9q"] Jan 12 13:17:35 crc kubenswrapper[4580]: I0112 13:17:35.467316 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-pkj9q" Jan 12 13:17:35 crc kubenswrapper[4580]: I0112 13:17:35.467705 4580 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 12 13:17:35 crc kubenswrapper[4580]: I0112 13:17:35.467917 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 12 13:17:35 crc kubenswrapper[4580]: I0112 13:17:35.468274 4580 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 12 13:17:35 crc kubenswrapper[4580]: I0112 13:17:35.470911 4580 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-bt7bp" Jan 12 13:17:35 crc kubenswrapper[4580]: I0112 13:17:35.492379 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-pkj9q"] Jan 12 13:17:35 crc kubenswrapper[4580]: I0112 13:17:35.548807 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-qbcvq"] Jan 12 13:17:35 crc kubenswrapper[4580]: I0112 13:17:35.549862 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-qbcvq" Jan 12 13:17:35 crc kubenswrapper[4580]: I0112 13:17:35.551664 4580 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 12 13:17:35 crc kubenswrapper[4580]: I0112 13:17:35.552527 4580 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-h9zjw" Jan 12 13:17:35 crc kubenswrapper[4580]: I0112 13:17:35.553408 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 12 13:17:35 crc kubenswrapper[4580]: I0112 13:17:35.556448 4580 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 12 13:17:35 crc kubenswrapper[4580]: I0112 13:17:35.559508 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/83e9a47a-80d6-4d28-ae2d-da27e069932f-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-pkj9q\" (UID: \"83e9a47a-80d6-4d28-ae2d-da27e069932f\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-pkj9q" Jan 12 13:17:35 crc kubenswrapper[4580]: I0112 13:17:35.559567 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cth75\" (UniqueName: \"kubernetes.io/projected/83e9a47a-80d6-4d28-ae2d-da27e069932f-kube-api-access-cth75\") pod \"frr-k8s-webhook-server-7784b6fcf-pkj9q\" (UID: \"83e9a47a-80d6-4d28-ae2d-da27e069932f\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-pkj9q" Jan 12 13:17:35 crc kubenswrapper[4580]: I0112 13:17:35.559627 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb4zz\" (UniqueName: \"kubernetes.io/projected/6c632190-70af-4fb2-97f2-3ea2cddf0302-kube-api-access-vb4zz\") pod \"speaker-qbcvq\" (UID: \"6c632190-70af-4fb2-97f2-3ea2cddf0302\") " pod="metallb-system/speaker-qbcvq" Jan 
12 13:17:35 crc kubenswrapper[4580]: I0112 13:17:35.559680 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6c632190-70af-4fb2-97f2-3ea2cddf0302-metrics-certs\") pod \"speaker-qbcvq\" (UID: \"6c632190-70af-4fb2-97f2-3ea2cddf0302\") " pod="metallb-system/speaker-qbcvq" Jan 12 13:17:35 crc kubenswrapper[4580]: I0112 13:17:35.559701 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6c632190-70af-4fb2-97f2-3ea2cddf0302-memberlist\") pod \"speaker-qbcvq\" (UID: \"6c632190-70af-4fb2-97f2-3ea2cddf0302\") " pod="metallb-system/speaker-qbcvq" Jan 12 13:17:35 crc kubenswrapper[4580]: I0112 13:17:35.559845 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/6c632190-70af-4fb2-97f2-3ea2cddf0302-metallb-excludel2\") pod \"speaker-qbcvq\" (UID: \"6c632190-70af-4fb2-97f2-3ea2cddf0302\") " pod="metallb-system/speaker-qbcvq" Jan 12 13:17:35 crc kubenswrapper[4580]: I0112 13:17:35.575250 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-5bddd4b946-rxqqw"] Jan 12 13:17:35 crc kubenswrapper[4580]: I0112 13:17:35.576191 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-5bddd4b946-rxqqw" Jan 12 13:17:35 crc kubenswrapper[4580]: I0112 13:17:35.583291 4580 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 12 13:17:35 crc kubenswrapper[4580]: I0112 13:17:35.587162 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-5bddd4b946-rxqqw"] Jan 12 13:17:35 crc kubenswrapper[4580]: I0112 13:17:35.661843 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/68e356d2-e51a-494b-a2cc-2c8491e4d8c8-metrics-certs\") pod \"frr-k8s-2n6tl\" (UID: \"68e356d2-e51a-494b-a2cc-2c8491e4d8c8\") " pod="metallb-system/frr-k8s-2n6tl" Jan 12 13:17:35 crc kubenswrapper[4580]: I0112 13:17:35.662055 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/68e356d2-e51a-494b-a2cc-2c8491e4d8c8-metrics\") pod \"frr-k8s-2n6tl\" (UID: \"68e356d2-e51a-494b-a2cc-2c8491e4d8c8\") " pod="metallb-system/frr-k8s-2n6tl" Jan 12 13:17:35 crc kubenswrapper[4580]: I0112 13:17:35.662176 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/54899288-3291-42c0-969e-f22dab071c51-metrics-certs\") pod \"controller-5bddd4b946-rxqqw\" (UID: \"54899288-3291-42c0-969e-f22dab071c51\") " pod="metallb-system/controller-5bddd4b946-rxqqw" Jan 12 13:17:35 crc kubenswrapper[4580]: I0112 13:17:35.662269 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vb4zz\" (UniqueName: \"kubernetes.io/projected/6c632190-70af-4fb2-97f2-3ea2cddf0302-kube-api-access-vb4zz\") pod \"speaker-qbcvq\" (UID: \"6c632190-70af-4fb2-97f2-3ea2cddf0302\") " pod="metallb-system/speaker-qbcvq" Jan 12 13:17:35 crc kubenswrapper[4580]: I0112 
13:17:35.662359 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/68e356d2-e51a-494b-a2cc-2c8491e4d8c8-frr-sockets\") pod \"frr-k8s-2n6tl\" (UID: \"68e356d2-e51a-494b-a2cc-2c8491e4d8c8\") " pod="metallb-system/frr-k8s-2n6tl" Jan 12 13:17:35 crc kubenswrapper[4580]: I0112 13:17:35.662427 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/68e356d2-e51a-494b-a2cc-2c8491e4d8c8-reloader\") pod \"frr-k8s-2n6tl\" (UID: \"68e356d2-e51a-494b-a2cc-2c8491e4d8c8\") " pod="metallb-system/frr-k8s-2n6tl" Jan 12 13:17:35 crc kubenswrapper[4580]: I0112 13:17:35.662514 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6c632190-70af-4fb2-97f2-3ea2cddf0302-metrics-certs\") pod \"speaker-qbcvq\" (UID: \"6c632190-70af-4fb2-97f2-3ea2cddf0302\") " pod="metallb-system/speaker-qbcvq" Jan 12 13:17:35 crc kubenswrapper[4580]: I0112 13:17:35.662587 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/68e356d2-e51a-494b-a2cc-2c8491e4d8c8-frr-startup\") pod \"frr-k8s-2n6tl\" (UID: \"68e356d2-e51a-494b-a2cc-2c8491e4d8c8\") " pod="metallb-system/frr-k8s-2n6tl" Jan 12 13:17:35 crc kubenswrapper[4580]: I0112 13:17:35.662664 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6c632190-70af-4fb2-97f2-3ea2cddf0302-memberlist\") pod \"speaker-qbcvq\" (UID: \"6c632190-70af-4fb2-97f2-3ea2cddf0302\") " pod="metallb-system/speaker-qbcvq" Jan 12 13:17:35 crc kubenswrapper[4580]: I0112 13:17:35.662750 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvv5r\" (UniqueName: 
\"kubernetes.io/projected/54899288-3291-42c0-969e-f22dab071c51-kube-api-access-xvv5r\") pod \"controller-5bddd4b946-rxqqw\" (UID: \"54899288-3291-42c0-969e-f22dab071c51\") " pod="metallb-system/controller-5bddd4b946-rxqqw" Jan 12 13:17:35 crc kubenswrapper[4580]: I0112 13:17:35.662845 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/6c632190-70af-4fb2-97f2-3ea2cddf0302-metallb-excludel2\") pod \"speaker-qbcvq\" (UID: \"6c632190-70af-4fb2-97f2-3ea2cddf0302\") " pod="metallb-system/speaker-qbcvq" Jan 12 13:17:35 crc kubenswrapper[4580]: I0112 13:17:35.662953 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/54899288-3291-42c0-969e-f22dab071c51-cert\") pod \"controller-5bddd4b946-rxqqw\" (UID: \"54899288-3291-42c0-969e-f22dab071c51\") " pod="metallb-system/controller-5bddd4b946-rxqqw" Jan 12 13:17:35 crc kubenswrapper[4580]: E0112 13:17:35.662721 4580 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Jan 12 13:17:35 crc kubenswrapper[4580]: I0112 13:17:35.663027 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/83e9a47a-80d6-4d28-ae2d-da27e069932f-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-pkj9q\" (UID: \"83e9a47a-80d6-4d28-ae2d-da27e069932f\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-pkj9q" Jan 12 13:17:35 crc kubenswrapper[4580]: E0112 13:17:35.663009 4580 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 12 13:17:35 crc kubenswrapper[4580]: E0112 13:17:35.663131 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6c632190-70af-4fb2-97f2-3ea2cddf0302-metrics-certs podName:6c632190-70af-4fb2-97f2-3ea2cddf0302 nodeName:}" failed. 
No retries permitted until 2026-01-12 13:17:36.163083251 +0000 UTC m=+655.207301941 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6c632190-70af-4fb2-97f2-3ea2cddf0302-metrics-certs") pod "speaker-qbcvq" (UID: "6c632190-70af-4fb2-97f2-3ea2cddf0302") : secret "speaker-certs-secret" not found Jan 12 13:17:35 crc kubenswrapper[4580]: I0112 13:17:35.663176 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7wwl\" (UniqueName: \"kubernetes.io/projected/68e356d2-e51a-494b-a2cc-2c8491e4d8c8-kube-api-access-t7wwl\") pod \"frr-k8s-2n6tl\" (UID: \"68e356d2-e51a-494b-a2cc-2c8491e4d8c8\") " pod="metallb-system/frr-k8s-2n6tl" Jan 12 13:17:35 crc kubenswrapper[4580]: E0112 13:17:35.663195 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6c632190-70af-4fb2-97f2-3ea2cddf0302-memberlist podName:6c632190-70af-4fb2-97f2-3ea2cddf0302 nodeName:}" failed. No retries permitted until 2026-01-12 13:17:36.163157892 +0000 UTC m=+655.207376583 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/6c632190-70af-4fb2-97f2-3ea2cddf0302-memberlist") pod "speaker-qbcvq" (UID: "6c632190-70af-4fb2-97f2-3ea2cddf0302") : secret "metallb-memberlist" not found Jan 12 13:17:35 crc kubenswrapper[4580]: I0112 13:17:35.663242 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/68e356d2-e51a-494b-a2cc-2c8491e4d8c8-frr-conf\") pod \"frr-k8s-2n6tl\" (UID: \"68e356d2-e51a-494b-a2cc-2c8491e4d8c8\") " pod="metallb-system/frr-k8s-2n6tl" Jan 12 13:17:35 crc kubenswrapper[4580]: I0112 13:17:35.663386 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cth75\" (UniqueName: \"kubernetes.io/projected/83e9a47a-80d6-4d28-ae2d-da27e069932f-kube-api-access-cth75\") pod \"frr-k8s-webhook-server-7784b6fcf-pkj9q\" (UID: \"83e9a47a-80d6-4d28-ae2d-da27e069932f\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-pkj9q" Jan 12 13:17:35 crc kubenswrapper[4580]: I0112 13:17:35.663611 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/6c632190-70af-4fb2-97f2-3ea2cddf0302-metallb-excludel2\") pod \"speaker-qbcvq\" (UID: \"6c632190-70af-4fb2-97f2-3ea2cddf0302\") " pod="metallb-system/speaker-qbcvq" Jan 12 13:17:35 crc kubenswrapper[4580]: I0112 13:17:35.675499 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/83e9a47a-80d6-4d28-ae2d-da27e069932f-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-pkj9q\" (UID: \"83e9a47a-80d6-4d28-ae2d-da27e069932f\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-pkj9q" Jan 12 13:17:35 crc kubenswrapper[4580]: I0112 13:17:35.682664 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vb4zz\" (UniqueName: 
\"kubernetes.io/projected/6c632190-70af-4fb2-97f2-3ea2cddf0302-kube-api-access-vb4zz\") pod \"speaker-qbcvq\" (UID: \"6c632190-70af-4fb2-97f2-3ea2cddf0302\") " pod="metallb-system/speaker-qbcvq" Jan 12 13:17:35 crc kubenswrapper[4580]: I0112 13:17:35.682858 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cth75\" (UniqueName: \"kubernetes.io/projected/83e9a47a-80d6-4d28-ae2d-da27e069932f-kube-api-access-cth75\") pod \"frr-k8s-webhook-server-7784b6fcf-pkj9q\" (UID: \"83e9a47a-80d6-4d28-ae2d-da27e069932f\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-pkj9q" Jan 12 13:17:35 crc kubenswrapper[4580]: I0112 13:17:35.764853 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7wwl\" (UniqueName: \"kubernetes.io/projected/68e356d2-e51a-494b-a2cc-2c8491e4d8c8-kube-api-access-t7wwl\") pod \"frr-k8s-2n6tl\" (UID: \"68e356d2-e51a-494b-a2cc-2c8491e4d8c8\") " pod="metallb-system/frr-k8s-2n6tl" Jan 12 13:17:35 crc kubenswrapper[4580]: I0112 13:17:35.765144 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/68e356d2-e51a-494b-a2cc-2c8491e4d8c8-frr-conf\") pod \"frr-k8s-2n6tl\" (UID: \"68e356d2-e51a-494b-a2cc-2c8491e4d8c8\") " pod="metallb-system/frr-k8s-2n6tl" Jan 12 13:17:35 crc kubenswrapper[4580]: I0112 13:17:35.765199 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/68e356d2-e51a-494b-a2cc-2c8491e4d8c8-metrics-certs\") pod \"frr-k8s-2n6tl\" (UID: \"68e356d2-e51a-494b-a2cc-2c8491e4d8c8\") " pod="metallb-system/frr-k8s-2n6tl" Jan 12 13:17:35 crc kubenswrapper[4580]: I0112 13:17:35.765226 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/68e356d2-e51a-494b-a2cc-2c8491e4d8c8-metrics\") pod \"frr-k8s-2n6tl\" (UID: 
\"68e356d2-e51a-494b-a2cc-2c8491e4d8c8\") " pod="metallb-system/frr-k8s-2n6tl" Jan 12 13:17:35 crc kubenswrapper[4580]: I0112 13:17:35.765245 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/54899288-3291-42c0-969e-f22dab071c51-metrics-certs\") pod \"controller-5bddd4b946-rxqqw\" (UID: \"54899288-3291-42c0-969e-f22dab071c51\") " pod="metallb-system/controller-5bddd4b946-rxqqw" Jan 12 13:17:35 crc kubenswrapper[4580]: I0112 13:17:35.765274 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/68e356d2-e51a-494b-a2cc-2c8491e4d8c8-frr-sockets\") pod \"frr-k8s-2n6tl\" (UID: \"68e356d2-e51a-494b-a2cc-2c8491e4d8c8\") " pod="metallb-system/frr-k8s-2n6tl" Jan 12 13:17:35 crc kubenswrapper[4580]: I0112 13:17:35.765293 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/68e356d2-e51a-494b-a2cc-2c8491e4d8c8-reloader\") pod \"frr-k8s-2n6tl\" (UID: \"68e356d2-e51a-494b-a2cc-2c8491e4d8c8\") " pod="metallb-system/frr-k8s-2n6tl" Jan 12 13:17:35 crc kubenswrapper[4580]: I0112 13:17:35.765329 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/68e356d2-e51a-494b-a2cc-2c8491e4d8c8-frr-startup\") pod \"frr-k8s-2n6tl\" (UID: \"68e356d2-e51a-494b-a2cc-2c8491e4d8c8\") " pod="metallb-system/frr-k8s-2n6tl" Jan 12 13:17:35 crc kubenswrapper[4580]: I0112 13:17:35.765362 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvv5r\" (UniqueName: \"kubernetes.io/projected/54899288-3291-42c0-969e-f22dab071c51-kube-api-access-xvv5r\") pod \"controller-5bddd4b946-rxqqw\" (UID: \"54899288-3291-42c0-969e-f22dab071c51\") " pod="metallb-system/controller-5bddd4b946-rxqqw" Jan 12 13:17:35 crc kubenswrapper[4580]: I0112 
13:17:35.765409 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/54899288-3291-42c0-969e-f22dab071c51-cert\") pod \"controller-5bddd4b946-rxqqw\" (UID: \"54899288-3291-42c0-969e-f22dab071c51\") " pod="metallb-system/controller-5bddd4b946-rxqqw" Jan 12 13:17:35 crc kubenswrapper[4580]: I0112 13:17:35.765498 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/68e356d2-e51a-494b-a2cc-2c8491e4d8c8-frr-conf\") pod \"frr-k8s-2n6tl\" (UID: \"68e356d2-e51a-494b-a2cc-2c8491e4d8c8\") " pod="metallb-system/frr-k8s-2n6tl" Jan 12 13:17:35 crc kubenswrapper[4580]: I0112 13:17:35.766160 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/68e356d2-e51a-494b-a2cc-2c8491e4d8c8-reloader\") pod \"frr-k8s-2n6tl\" (UID: \"68e356d2-e51a-494b-a2cc-2c8491e4d8c8\") " pod="metallb-system/frr-k8s-2n6tl" Jan 12 13:17:35 crc kubenswrapper[4580]: I0112 13:17:35.766328 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/68e356d2-e51a-494b-a2cc-2c8491e4d8c8-frr-startup\") pod \"frr-k8s-2n6tl\" (UID: \"68e356d2-e51a-494b-a2cc-2c8491e4d8c8\") " pod="metallb-system/frr-k8s-2n6tl" Jan 12 13:17:35 crc kubenswrapper[4580]: I0112 13:17:35.766328 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/68e356d2-e51a-494b-a2cc-2c8491e4d8c8-metrics\") pod \"frr-k8s-2n6tl\" (UID: \"68e356d2-e51a-494b-a2cc-2c8491e4d8c8\") " pod="metallb-system/frr-k8s-2n6tl" Jan 12 13:17:35 crc kubenswrapper[4580]: I0112 13:17:35.766543 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/68e356d2-e51a-494b-a2cc-2c8491e4d8c8-frr-sockets\") pod \"frr-k8s-2n6tl\" (UID: 
\"68e356d2-e51a-494b-a2cc-2c8491e4d8c8\") " pod="metallb-system/frr-k8s-2n6tl" Jan 12 13:17:35 crc kubenswrapper[4580]: I0112 13:17:35.768511 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/54899288-3291-42c0-969e-f22dab071c51-cert\") pod \"controller-5bddd4b946-rxqqw\" (UID: \"54899288-3291-42c0-969e-f22dab071c51\") " pod="metallb-system/controller-5bddd4b946-rxqqw" Jan 12 13:17:35 crc kubenswrapper[4580]: I0112 13:17:35.775560 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/68e356d2-e51a-494b-a2cc-2c8491e4d8c8-metrics-certs\") pod \"frr-k8s-2n6tl\" (UID: \"68e356d2-e51a-494b-a2cc-2c8491e4d8c8\") " pod="metallb-system/frr-k8s-2n6tl" Jan 12 13:17:35 crc kubenswrapper[4580]: I0112 13:17:35.780554 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/54899288-3291-42c0-969e-f22dab071c51-metrics-certs\") pod \"controller-5bddd4b946-rxqqw\" (UID: \"54899288-3291-42c0-969e-f22dab071c51\") " pod="metallb-system/controller-5bddd4b946-rxqqw" Jan 12 13:17:35 crc kubenswrapper[4580]: I0112 13:17:35.782575 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7wwl\" (UniqueName: \"kubernetes.io/projected/68e356d2-e51a-494b-a2cc-2c8491e4d8c8-kube-api-access-t7wwl\") pod \"frr-k8s-2n6tl\" (UID: \"68e356d2-e51a-494b-a2cc-2c8491e4d8c8\") " pod="metallb-system/frr-k8s-2n6tl" Jan 12 13:17:35 crc kubenswrapper[4580]: I0112 13:17:35.786654 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvv5r\" (UniqueName: \"kubernetes.io/projected/54899288-3291-42c0-969e-f22dab071c51-kube-api-access-xvv5r\") pod \"controller-5bddd4b946-rxqqw\" (UID: \"54899288-3291-42c0-969e-f22dab071c51\") " pod="metallb-system/controller-5bddd4b946-rxqqw" Jan 12 13:17:35 crc kubenswrapper[4580]: I0112 
13:17:35.787919 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-pkj9q" Jan 12 13:17:35 crc kubenswrapper[4580]: I0112 13:17:35.895678 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-5bddd4b946-rxqqw" Jan 12 13:17:36 crc kubenswrapper[4580]: I0112 13:17:36.051534 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-5bddd4b946-rxqqw"] Jan 12 13:17:36 crc kubenswrapper[4580]: I0112 13:17:36.083526 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-2n6tl" Jan 12 13:17:36 crc kubenswrapper[4580]: I0112 13:17:36.176337 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-pkj9q"] Jan 12 13:17:36 crc kubenswrapper[4580]: I0112 13:17:36.185794 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6c632190-70af-4fb2-97f2-3ea2cddf0302-metrics-certs\") pod \"speaker-qbcvq\" (UID: \"6c632190-70af-4fb2-97f2-3ea2cddf0302\") " pod="metallb-system/speaker-qbcvq" Jan 12 13:17:36 crc kubenswrapper[4580]: I0112 13:17:36.185903 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6c632190-70af-4fb2-97f2-3ea2cddf0302-memberlist\") pod \"speaker-qbcvq\" (UID: \"6c632190-70af-4fb2-97f2-3ea2cddf0302\") " pod="metallb-system/speaker-qbcvq" Jan 12 13:17:36 crc kubenswrapper[4580]: E0112 13:17:36.186215 4580 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 12 13:17:36 crc kubenswrapper[4580]: E0112 13:17:36.186301 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6c632190-70af-4fb2-97f2-3ea2cddf0302-memberlist podName:6c632190-70af-4fb2-97f2-3ea2cddf0302 
nodeName:}" failed. No retries permitted until 2026-01-12 13:17:37.186270238 +0000 UTC m=+656.230488927 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/6c632190-70af-4fb2-97f2-3ea2cddf0302-memberlist") pod "speaker-qbcvq" (UID: "6c632190-70af-4fb2-97f2-3ea2cddf0302") : secret "metallb-memberlist" not found Jan 12 13:17:36 crc kubenswrapper[4580]: I0112 13:17:36.189936 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6c632190-70af-4fb2-97f2-3ea2cddf0302-metrics-certs\") pod \"speaker-qbcvq\" (UID: \"6c632190-70af-4fb2-97f2-3ea2cddf0302\") " pod="metallb-system/speaker-qbcvq" Jan 12 13:17:36 crc kubenswrapper[4580]: I0112 13:17:36.623401 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2n6tl" event={"ID":"68e356d2-e51a-494b-a2cc-2c8491e4d8c8","Type":"ContainerStarted","Data":"50cc2f449d4c886dedc715a6b14b0a0c24c363d5c7ae132bedfb3e554ed726b2"} Jan 12 13:17:36 crc kubenswrapper[4580]: I0112 13:17:36.626079 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-rxqqw" event={"ID":"54899288-3291-42c0-969e-f22dab071c51","Type":"ContainerStarted","Data":"815de68c4b1ca0cfd74aa53bc58f6451e953433e133f7b3a0353270cf434bde4"} Jan 12 13:17:36 crc kubenswrapper[4580]: I0112 13:17:36.626164 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-rxqqw" event={"ID":"54899288-3291-42c0-969e-f22dab071c51","Type":"ContainerStarted","Data":"18ca2da2576b91e8bb2ada774e874b0959e99e4de9d995b3d661db0643b6a142"} Jan 12 13:17:36 crc kubenswrapper[4580]: I0112 13:17:36.626179 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-rxqqw" event={"ID":"54899288-3291-42c0-969e-f22dab071c51","Type":"ContainerStarted","Data":"86a2e72c31151d5b01c4fb52f3b3d7a95febf9229f241cb06d2d64cf7b65f11a"} Jan 12 
13:17:36 crc kubenswrapper[4580]: I0112 13:17:36.626290 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-5bddd4b946-rxqqw" Jan 12 13:17:36 crc kubenswrapper[4580]: I0112 13:17:36.627523 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-pkj9q" event={"ID":"83e9a47a-80d6-4d28-ae2d-da27e069932f","Type":"ContainerStarted","Data":"483249d44ff4517230726830896d0e7dba0f3060d7225dbc894a8c13f9b64e22"} Jan 12 13:17:36 crc kubenswrapper[4580]: I0112 13:17:36.644886 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-5bddd4b946-rxqqw" podStartSLOduration=1.644852233 podStartE2EDuration="1.644852233s" podCreationTimestamp="2026-01-12 13:17:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:17:36.643313441 +0000 UTC m=+655.687532131" watchObservedRunningTime="2026-01-12 13:17:36.644852233 +0000 UTC m=+655.689070924" Jan 12 13:17:37 crc kubenswrapper[4580]: I0112 13:17:37.203458 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6c632190-70af-4fb2-97f2-3ea2cddf0302-memberlist\") pod \"speaker-qbcvq\" (UID: \"6c632190-70af-4fb2-97f2-3ea2cddf0302\") " pod="metallb-system/speaker-qbcvq" Jan 12 13:17:37 crc kubenswrapper[4580]: I0112 13:17:37.211171 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6c632190-70af-4fb2-97f2-3ea2cddf0302-memberlist\") pod \"speaker-qbcvq\" (UID: \"6c632190-70af-4fb2-97f2-3ea2cddf0302\") " pod="metallb-system/speaker-qbcvq" Jan 12 13:17:37 crc kubenswrapper[4580]: I0112 13:17:37.362127 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-qbcvq" Jan 12 13:17:37 crc kubenswrapper[4580]: I0112 13:17:37.637730 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-qbcvq" event={"ID":"6c632190-70af-4fb2-97f2-3ea2cddf0302","Type":"ContainerStarted","Data":"119f52a3a8d0bf738d4f9239753101e25a014e37aa77cfa71b14f542ed883994"} Jan 12 13:17:37 crc kubenswrapper[4580]: I0112 13:17:37.638009 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-qbcvq" event={"ID":"6c632190-70af-4fb2-97f2-3ea2cddf0302","Type":"ContainerStarted","Data":"3578545670bb4e94193a487da55e190983c897d585c6ea6a41747b05d647bfd2"} Jan 12 13:17:38 crc kubenswrapper[4580]: I0112 13:17:38.644987 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-qbcvq" event={"ID":"6c632190-70af-4fb2-97f2-3ea2cddf0302","Type":"ContainerStarted","Data":"3c25de384672463435968f4f098c587facb33e93a83e49de5b0ecd8204524292"} Jan 12 13:17:38 crc kubenswrapper[4580]: I0112 13:17:38.645380 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-qbcvq" Jan 12 13:17:38 crc kubenswrapper[4580]: I0112 13:17:38.661673 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-qbcvq" podStartSLOduration=3.661658599 podStartE2EDuration="3.661658599s" podCreationTimestamp="2026-01-12 13:17:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:17:38.659374285 +0000 UTC m=+657.703592975" watchObservedRunningTime="2026-01-12 13:17:38.661658599 +0000 UTC m=+657.705877289" Jan 12 13:17:42 crc kubenswrapper[4580]: I0112 13:17:42.672340 4580 generic.go:334] "Generic (PLEG): container finished" podID="68e356d2-e51a-494b-a2cc-2c8491e4d8c8" containerID="d0020c9012e49376e1a63f3db60e83891f136cba95db8c50b2ce4e5a77c130a2" exitCode=0 Jan 12 13:17:42 crc kubenswrapper[4580]: 
I0112 13:17:42.672464 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2n6tl" event={"ID":"68e356d2-e51a-494b-a2cc-2c8491e4d8c8","Type":"ContainerDied","Data":"d0020c9012e49376e1a63f3db60e83891f136cba95db8c50b2ce4e5a77c130a2"} Jan 12 13:17:42 crc kubenswrapper[4580]: I0112 13:17:42.675309 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-pkj9q" event={"ID":"83e9a47a-80d6-4d28-ae2d-da27e069932f","Type":"ContainerStarted","Data":"d73d9b49ba4a1a6b1e99d7844a2ae8ab4db730353d54cef8f52138b4fafb68a7"} Jan 12 13:17:42 crc kubenswrapper[4580]: I0112 13:17:42.675477 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-pkj9q" Jan 12 13:17:42 crc kubenswrapper[4580]: I0112 13:17:42.714375 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-pkj9q" podStartSLOduration=1.551682829 podStartE2EDuration="7.714346787s" podCreationTimestamp="2026-01-12 13:17:35 +0000 UTC" firstStartedPulling="2026-01-12 13:17:36.190209272 +0000 UTC m=+655.234427963" lastFinishedPulling="2026-01-12 13:17:42.35287323 +0000 UTC m=+661.397091921" observedRunningTime="2026-01-12 13:17:42.713122326 +0000 UTC m=+661.757341016" watchObservedRunningTime="2026-01-12 13:17:42.714346787 +0000 UTC m=+661.758565477" Jan 12 13:17:43 crc kubenswrapper[4580]: I0112 13:17:43.681287 4580 generic.go:334] "Generic (PLEG): container finished" podID="68e356d2-e51a-494b-a2cc-2c8491e4d8c8" containerID="46ff9bdaa94cb1af0f27e93a58a897bfd756a4db32e5a2b613d2f850ab1e7bd0" exitCode=0 Jan 12 13:17:43 crc kubenswrapper[4580]: I0112 13:17:43.681349 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2n6tl" event={"ID":"68e356d2-e51a-494b-a2cc-2c8491e4d8c8","Type":"ContainerDied","Data":"46ff9bdaa94cb1af0f27e93a58a897bfd756a4db32e5a2b613d2f850ab1e7bd0"} Jan 12 13:17:44 crc 
kubenswrapper[4580]: I0112 13:17:44.694840 4580 generic.go:334] "Generic (PLEG): container finished" podID="68e356d2-e51a-494b-a2cc-2c8491e4d8c8" containerID="47b3b8cf001f739fba5eac1f9f118d04f5fb6de2ba6eaf7940a33336129e8dc9" exitCode=0 Jan 12 13:17:44 crc kubenswrapper[4580]: I0112 13:17:44.695197 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2n6tl" event={"ID":"68e356d2-e51a-494b-a2cc-2c8491e4d8c8","Type":"ContainerDied","Data":"47b3b8cf001f739fba5eac1f9f118d04f5fb6de2ba6eaf7940a33336129e8dc9"} Jan 12 13:17:45 crc kubenswrapper[4580]: I0112 13:17:45.705407 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2n6tl" event={"ID":"68e356d2-e51a-494b-a2cc-2c8491e4d8c8","Type":"ContainerStarted","Data":"e8326f4bf7beb2ae347e590b4e3b0587c93d6715e4eae20db2de26a2096624fa"} Jan 12 13:17:45 crc kubenswrapper[4580]: I0112 13:17:45.705460 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2n6tl" event={"ID":"68e356d2-e51a-494b-a2cc-2c8491e4d8c8","Type":"ContainerStarted","Data":"4ff3273e46365844d6ba00a3565455649623c30579e2195655d66b5b3cc05f7a"} Jan 12 13:17:45 crc kubenswrapper[4580]: I0112 13:17:45.705471 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2n6tl" event={"ID":"68e356d2-e51a-494b-a2cc-2c8491e4d8c8","Type":"ContainerStarted","Data":"2e2f521e714a7122d0479aa71a718153bd0ff4aaf8e1b4e9a8e8edbe8a6a7c18"} Jan 12 13:17:45 crc kubenswrapper[4580]: I0112 13:17:45.705481 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2n6tl" event={"ID":"68e356d2-e51a-494b-a2cc-2c8491e4d8c8","Type":"ContainerStarted","Data":"ca4600b9af8b531e23930f2f8779a96a6e6c39dde711e6f8266af94af06f33a5"} Jan 12 13:17:45 crc kubenswrapper[4580]: I0112 13:17:45.705490 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2n6tl" 
event={"ID":"68e356d2-e51a-494b-a2cc-2c8491e4d8c8","Type":"ContainerStarted","Data":"90ec4dc0fc9dff3d8667544efe42d7bbfedeb39ef02c2fae8da6898f2ad779cc"} Jan 12 13:17:45 crc kubenswrapper[4580]: I0112 13:17:45.705498 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2n6tl" event={"ID":"68e356d2-e51a-494b-a2cc-2c8491e4d8c8","Type":"ContainerStarted","Data":"6fbee0c858a80d026b574ec60e97b973b8e4335da6551227be8467ae40ed79b8"} Jan 12 13:17:45 crc kubenswrapper[4580]: I0112 13:17:45.705606 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-2n6tl" Jan 12 13:17:45 crc kubenswrapper[4580]: I0112 13:17:45.724083 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-2n6tl" podStartSLOduration=4.554100781 podStartE2EDuration="10.724065822s" podCreationTimestamp="2026-01-12 13:17:35 +0000 UTC" firstStartedPulling="2026-01-12 13:17:36.236315119 +0000 UTC m=+655.280533810" lastFinishedPulling="2026-01-12 13:17:42.406280161 +0000 UTC m=+661.450498851" observedRunningTime="2026-01-12 13:17:45.720298991 +0000 UTC m=+664.764517681" watchObservedRunningTime="2026-01-12 13:17:45.724065822 +0000 UTC m=+664.768284512" Jan 12 13:17:46 crc kubenswrapper[4580]: I0112 13:17:46.084949 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-2n6tl" Jan 12 13:17:46 crc kubenswrapper[4580]: I0112 13:17:46.127189 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-2n6tl" Jan 12 13:17:47 crc kubenswrapper[4580]: I0112 13:17:47.365441 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-qbcvq" Jan 12 13:17:49 crc kubenswrapper[4580]: I0112 13:17:49.540536 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-f9ht7"] Jan 12 13:17:49 crc kubenswrapper[4580]: I0112 13:17:49.541407 4580 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-f9ht7" Jan 12 13:17:49 crc kubenswrapper[4580]: I0112 13:17:49.542702 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-nrhvf" Jan 12 13:17:49 crc kubenswrapper[4580]: I0112 13:17:49.543343 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 12 13:17:49 crc kubenswrapper[4580]: I0112 13:17:49.543442 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 12 13:17:49 crc kubenswrapper[4580]: I0112 13:17:49.549535 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-f9ht7"] Jan 12 13:17:49 crc kubenswrapper[4580]: I0112 13:17:49.590972 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcn5z\" (UniqueName: \"kubernetes.io/projected/f41a0cc3-5665-4c39-862f-0705b9e0136d-kube-api-access-xcn5z\") pod \"openstack-operator-index-f9ht7\" (UID: \"f41a0cc3-5665-4c39-862f-0705b9e0136d\") " pod="openstack-operators/openstack-operator-index-f9ht7" Jan 12 13:17:49 crc kubenswrapper[4580]: I0112 13:17:49.692544 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcn5z\" (UniqueName: \"kubernetes.io/projected/f41a0cc3-5665-4c39-862f-0705b9e0136d-kube-api-access-xcn5z\") pod \"openstack-operator-index-f9ht7\" (UID: \"f41a0cc3-5665-4c39-862f-0705b9e0136d\") " pod="openstack-operators/openstack-operator-index-f9ht7" Jan 12 13:17:49 crc kubenswrapper[4580]: I0112 13:17:49.712522 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcn5z\" (UniqueName: \"kubernetes.io/projected/f41a0cc3-5665-4c39-862f-0705b9e0136d-kube-api-access-xcn5z\") pod 
\"openstack-operator-index-f9ht7\" (UID: \"f41a0cc3-5665-4c39-862f-0705b9e0136d\") " pod="openstack-operators/openstack-operator-index-f9ht7" Jan 12 13:17:49 crc kubenswrapper[4580]: I0112 13:17:49.859075 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-f9ht7" Jan 12 13:17:50 crc kubenswrapper[4580]: I0112 13:17:50.228234 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-f9ht7"] Jan 12 13:17:50 crc kubenswrapper[4580]: I0112 13:17:50.750061 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-f9ht7" event={"ID":"f41a0cc3-5665-4c39-862f-0705b9e0136d","Type":"ContainerStarted","Data":"fa8d09896576ecba0a9df8b15c5c4a7925ebcae7a9b55cbbc56844976ce02422"} Jan 12 13:17:52 crc kubenswrapper[4580]: I0112 13:17:52.762925 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-f9ht7" event={"ID":"f41a0cc3-5665-4c39-862f-0705b9e0136d","Type":"ContainerStarted","Data":"df9949ab7fac70632b6121f127f86456070726068b5aa0e0c47ef69fbdeb6d04"} Jan 12 13:17:52 crc kubenswrapper[4580]: I0112 13:17:52.778991 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-f9ht7" podStartSLOduration=2.314051001 podStartE2EDuration="3.778963768s" podCreationTimestamp="2026-01-12 13:17:49 +0000 UTC" firstStartedPulling="2026-01-12 13:17:50.233751687 +0000 UTC m=+669.277970378" lastFinishedPulling="2026-01-12 13:17:51.698664455 +0000 UTC m=+670.742883145" observedRunningTime="2026-01-12 13:17:52.774404689 +0000 UTC m=+671.818623379" watchObservedRunningTime="2026-01-12 13:17:52.778963768 +0000 UTC m=+671.823182459" Jan 12 13:17:52 crc kubenswrapper[4580]: I0112 13:17:52.927429 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-f9ht7"] Jan 12 13:17:53 crc 
kubenswrapper[4580]: I0112 13:17:53.532841 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-248h5"] Jan 12 13:17:53 crc kubenswrapper[4580]: I0112 13:17:53.534337 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-248h5" Jan 12 13:17:53 crc kubenswrapper[4580]: I0112 13:17:53.542285 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-248h5"] Jan 12 13:17:53 crc kubenswrapper[4580]: I0112 13:17:53.548317 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-png5t\" (UniqueName: \"kubernetes.io/projected/c0f0a657-34a2-4619-b992-64ab017e6ecb-kube-api-access-png5t\") pod \"openstack-operator-index-248h5\" (UID: \"c0f0a657-34a2-4619-b992-64ab017e6ecb\") " pod="openstack-operators/openstack-operator-index-248h5" Jan 12 13:17:53 crc kubenswrapper[4580]: I0112 13:17:53.649817 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-png5t\" (UniqueName: \"kubernetes.io/projected/c0f0a657-34a2-4619-b992-64ab017e6ecb-kube-api-access-png5t\") pod \"openstack-operator-index-248h5\" (UID: \"c0f0a657-34a2-4619-b992-64ab017e6ecb\") " pod="openstack-operators/openstack-operator-index-248h5" Jan 12 13:17:53 crc kubenswrapper[4580]: I0112 13:17:53.670063 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-png5t\" (UniqueName: \"kubernetes.io/projected/c0f0a657-34a2-4619-b992-64ab017e6ecb-kube-api-access-png5t\") pod \"openstack-operator-index-248h5\" (UID: \"c0f0a657-34a2-4619-b992-64ab017e6ecb\") " pod="openstack-operators/openstack-operator-index-248h5" Jan 12 13:17:53 crc kubenswrapper[4580]: I0112 13:17:53.854507 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-248h5" Jan 12 13:17:54 crc kubenswrapper[4580]: I0112 13:17:54.294864 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-248h5"] Jan 12 13:17:54 crc kubenswrapper[4580]: I0112 13:17:54.778465 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-248h5" event={"ID":"c0f0a657-34a2-4619-b992-64ab017e6ecb","Type":"ContainerStarted","Data":"dcb6df3774ac3d09b87b463d4cc7698a0404e552b91a768f47b5b22d2782da9b"} Jan 12 13:17:54 crc kubenswrapper[4580]: I0112 13:17:54.778604 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-f9ht7" podUID="f41a0cc3-5665-4c39-862f-0705b9e0136d" containerName="registry-server" containerID="cri-o://df9949ab7fac70632b6121f127f86456070726068b5aa0e0c47ef69fbdeb6d04" gracePeriod=2 Jan 12 13:17:55 crc kubenswrapper[4580]: I0112 13:17:55.095764 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-f9ht7" Jan 12 13:17:55 crc kubenswrapper[4580]: I0112 13:17:55.270516 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcn5z\" (UniqueName: \"kubernetes.io/projected/f41a0cc3-5665-4c39-862f-0705b9e0136d-kube-api-access-xcn5z\") pod \"f41a0cc3-5665-4c39-862f-0705b9e0136d\" (UID: \"f41a0cc3-5665-4c39-862f-0705b9e0136d\") " Jan 12 13:17:55 crc kubenswrapper[4580]: I0112 13:17:55.276378 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f41a0cc3-5665-4c39-862f-0705b9e0136d-kube-api-access-xcn5z" (OuterVolumeSpecName: "kube-api-access-xcn5z") pod "f41a0cc3-5665-4c39-862f-0705b9e0136d" (UID: "f41a0cc3-5665-4c39-862f-0705b9e0136d"). InnerVolumeSpecName "kube-api-access-xcn5z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:17:55 crc kubenswrapper[4580]: I0112 13:17:55.372779 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcn5z\" (UniqueName: \"kubernetes.io/projected/f41a0cc3-5665-4c39-862f-0705b9e0136d-kube-api-access-xcn5z\") on node \"crc\" DevicePath \"\"" Jan 12 13:17:55 crc kubenswrapper[4580]: I0112 13:17:55.794295 4580 generic.go:334] "Generic (PLEG): container finished" podID="f41a0cc3-5665-4c39-862f-0705b9e0136d" containerID="df9949ab7fac70632b6121f127f86456070726068b5aa0e0c47ef69fbdeb6d04" exitCode=0 Jan 12 13:17:55 crc kubenswrapper[4580]: I0112 13:17:55.794370 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-f9ht7" Jan 12 13:17:55 crc kubenswrapper[4580]: I0112 13:17:55.794389 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-f9ht7" event={"ID":"f41a0cc3-5665-4c39-862f-0705b9e0136d","Type":"ContainerDied","Data":"df9949ab7fac70632b6121f127f86456070726068b5aa0e0c47ef69fbdeb6d04"} Jan 12 13:17:55 crc kubenswrapper[4580]: I0112 13:17:55.795059 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-f9ht7" event={"ID":"f41a0cc3-5665-4c39-862f-0705b9e0136d","Type":"ContainerDied","Data":"fa8d09896576ecba0a9df8b15c5c4a7925ebcae7a9b55cbbc56844976ce02422"} Jan 12 13:17:55 crc kubenswrapper[4580]: I0112 13:17:55.795193 4580 scope.go:117] "RemoveContainer" containerID="df9949ab7fac70632b6121f127f86456070726068b5aa0e0c47ef69fbdeb6d04" Jan 12 13:17:55 crc kubenswrapper[4580]: I0112 13:17:55.796665 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-pkj9q" Jan 12 13:17:55 crc kubenswrapper[4580]: I0112 13:17:55.797908 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-248h5" 
event={"ID":"c0f0a657-34a2-4619-b992-64ab017e6ecb","Type":"ContainerStarted","Data":"5d074a729b18ead16a5f0cd100f82e7f70a932f93b3019cab4d0e1842e1f8f3d"} Jan 12 13:17:55 crc kubenswrapper[4580]: I0112 13:17:55.814301 4580 scope.go:117] "RemoveContainer" containerID="df9949ab7fac70632b6121f127f86456070726068b5aa0e0c47ef69fbdeb6d04" Jan 12 13:17:55 crc kubenswrapper[4580]: E0112 13:17:55.814799 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df9949ab7fac70632b6121f127f86456070726068b5aa0e0c47ef69fbdeb6d04\": container with ID starting with df9949ab7fac70632b6121f127f86456070726068b5aa0e0c47ef69fbdeb6d04 not found: ID does not exist" containerID="df9949ab7fac70632b6121f127f86456070726068b5aa0e0c47ef69fbdeb6d04" Jan 12 13:17:55 crc kubenswrapper[4580]: I0112 13:17:55.814856 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df9949ab7fac70632b6121f127f86456070726068b5aa0e0c47ef69fbdeb6d04"} err="failed to get container status \"df9949ab7fac70632b6121f127f86456070726068b5aa0e0c47ef69fbdeb6d04\": rpc error: code = NotFound desc = could not find container \"df9949ab7fac70632b6121f127f86456070726068b5aa0e0c47ef69fbdeb6d04\": container with ID starting with df9949ab7fac70632b6121f127f86456070726068b5aa0e0c47ef69fbdeb6d04 not found: ID does not exist" Jan 12 13:17:55 crc kubenswrapper[4580]: I0112 13:17:55.815390 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-f9ht7"] Jan 12 13:17:55 crc kubenswrapper[4580]: I0112 13:17:55.819632 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-f9ht7"] Jan 12 13:17:55 crc kubenswrapper[4580]: I0112 13:17:55.839992 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-248h5" podStartSLOduration=2.318684949 
podStartE2EDuration="2.839968507s" podCreationTimestamp="2026-01-12 13:17:53 +0000 UTC" firstStartedPulling="2026-01-12 13:17:54.30978612 +0000 UTC m=+673.354004810" lastFinishedPulling="2026-01-12 13:17:54.831069678 +0000 UTC m=+673.875288368" observedRunningTime="2026-01-12 13:17:55.838356628 +0000 UTC m=+674.882575318" watchObservedRunningTime="2026-01-12 13:17:55.839968507 +0000 UTC m=+674.884187198" Jan 12 13:17:55 crc kubenswrapper[4580]: I0112 13:17:55.898609 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-5bddd4b946-rxqqw" Jan 12 13:17:56 crc kubenswrapper[4580]: I0112 13:17:56.086833 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-2n6tl" Jan 12 13:17:57 crc kubenswrapper[4580]: I0112 13:17:57.287750 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f41a0cc3-5665-4c39-862f-0705b9e0136d" path="/var/lib/kubelet/pods/f41a0cc3-5665-4c39-862f-0705b9e0136d/volumes" Jan 12 13:18:03 crc kubenswrapper[4580]: I0112 13:18:03.855355 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-248h5" Jan 12 13:18:03 crc kubenswrapper[4580]: I0112 13:18:03.856076 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-248h5" Jan 12 13:18:03 crc kubenswrapper[4580]: I0112 13:18:03.883127 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-248h5" Jan 12 13:18:04 crc kubenswrapper[4580]: I0112 13:18:04.884833 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-248h5" Jan 12 13:18:10 crc kubenswrapper[4580]: I0112 13:18:10.567773 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/e5432efd0c40cfd67b5b87e56150fca567dbd15dd757d120066b94ee44pwmns"] Jan 
12 13:18:10 crc kubenswrapper[4580]: E0112 13:18:10.568519 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f41a0cc3-5665-4c39-862f-0705b9e0136d" containerName="registry-server" Jan 12 13:18:10 crc kubenswrapper[4580]: I0112 13:18:10.568535 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="f41a0cc3-5665-4c39-862f-0705b9e0136d" containerName="registry-server" Jan 12 13:18:10 crc kubenswrapper[4580]: I0112 13:18:10.568659 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="f41a0cc3-5665-4c39-862f-0705b9e0136d" containerName="registry-server" Jan 12 13:18:10 crc kubenswrapper[4580]: I0112 13:18:10.569566 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/e5432efd0c40cfd67b5b87e56150fca567dbd15dd757d120066b94ee44pwmns" Jan 12 13:18:10 crc kubenswrapper[4580]: I0112 13:18:10.571140 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-hrqf8" Jan 12 13:18:10 crc kubenswrapper[4580]: I0112 13:18:10.576081 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/e5432efd0c40cfd67b5b87e56150fca567dbd15dd757d120066b94ee44pwmns"] Jan 12 13:18:10 crc kubenswrapper[4580]: I0112 13:18:10.677789 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d27f45e7-f85b-4b23-b849-8e1778cfe3df-util\") pod \"e5432efd0c40cfd67b5b87e56150fca567dbd15dd757d120066b94ee44pwmns\" (UID: \"d27f45e7-f85b-4b23-b849-8e1778cfe3df\") " pod="openstack-operators/e5432efd0c40cfd67b5b87e56150fca567dbd15dd757d120066b94ee44pwmns" Jan 12 13:18:10 crc kubenswrapper[4580]: I0112 13:18:10.677877 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch7fd\" (UniqueName: \"kubernetes.io/projected/d27f45e7-f85b-4b23-b849-8e1778cfe3df-kube-api-access-ch7fd\") pod 
\"e5432efd0c40cfd67b5b87e56150fca567dbd15dd757d120066b94ee44pwmns\" (UID: \"d27f45e7-f85b-4b23-b849-8e1778cfe3df\") " pod="openstack-operators/e5432efd0c40cfd67b5b87e56150fca567dbd15dd757d120066b94ee44pwmns" Jan 12 13:18:10 crc kubenswrapper[4580]: I0112 13:18:10.677923 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d27f45e7-f85b-4b23-b849-8e1778cfe3df-bundle\") pod \"e5432efd0c40cfd67b5b87e56150fca567dbd15dd757d120066b94ee44pwmns\" (UID: \"d27f45e7-f85b-4b23-b849-8e1778cfe3df\") " pod="openstack-operators/e5432efd0c40cfd67b5b87e56150fca567dbd15dd757d120066b94ee44pwmns" Jan 12 13:18:10 crc kubenswrapper[4580]: I0112 13:18:10.778967 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ch7fd\" (UniqueName: \"kubernetes.io/projected/d27f45e7-f85b-4b23-b849-8e1778cfe3df-kube-api-access-ch7fd\") pod \"e5432efd0c40cfd67b5b87e56150fca567dbd15dd757d120066b94ee44pwmns\" (UID: \"d27f45e7-f85b-4b23-b849-8e1778cfe3df\") " pod="openstack-operators/e5432efd0c40cfd67b5b87e56150fca567dbd15dd757d120066b94ee44pwmns" Jan 12 13:18:10 crc kubenswrapper[4580]: I0112 13:18:10.779054 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d27f45e7-f85b-4b23-b849-8e1778cfe3df-bundle\") pod \"e5432efd0c40cfd67b5b87e56150fca567dbd15dd757d120066b94ee44pwmns\" (UID: \"d27f45e7-f85b-4b23-b849-8e1778cfe3df\") " pod="openstack-operators/e5432efd0c40cfd67b5b87e56150fca567dbd15dd757d120066b94ee44pwmns" Jan 12 13:18:10 crc kubenswrapper[4580]: I0112 13:18:10.779131 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d27f45e7-f85b-4b23-b849-8e1778cfe3df-util\") pod \"e5432efd0c40cfd67b5b87e56150fca567dbd15dd757d120066b94ee44pwmns\" (UID: \"d27f45e7-f85b-4b23-b849-8e1778cfe3df\") " 
pod="openstack-operators/e5432efd0c40cfd67b5b87e56150fca567dbd15dd757d120066b94ee44pwmns"
Jan 12 13:18:10 crc kubenswrapper[4580]: I0112 13:18:10.779650 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d27f45e7-f85b-4b23-b849-8e1778cfe3df-bundle\") pod \"e5432efd0c40cfd67b5b87e56150fca567dbd15dd757d120066b94ee44pwmns\" (UID: \"d27f45e7-f85b-4b23-b849-8e1778cfe3df\") " pod="openstack-operators/e5432efd0c40cfd67b5b87e56150fca567dbd15dd757d120066b94ee44pwmns"
Jan 12 13:18:10 crc kubenswrapper[4580]: I0112 13:18:10.779694 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d27f45e7-f85b-4b23-b849-8e1778cfe3df-util\") pod \"e5432efd0c40cfd67b5b87e56150fca567dbd15dd757d120066b94ee44pwmns\" (UID: \"d27f45e7-f85b-4b23-b849-8e1778cfe3df\") " pod="openstack-operators/e5432efd0c40cfd67b5b87e56150fca567dbd15dd757d120066b94ee44pwmns"
Jan 12 13:18:10 crc kubenswrapper[4580]: I0112 13:18:10.797426 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ch7fd\" (UniqueName: \"kubernetes.io/projected/d27f45e7-f85b-4b23-b849-8e1778cfe3df-kube-api-access-ch7fd\") pod \"e5432efd0c40cfd67b5b87e56150fca567dbd15dd757d120066b94ee44pwmns\" (UID: \"d27f45e7-f85b-4b23-b849-8e1778cfe3df\") " pod="openstack-operators/e5432efd0c40cfd67b5b87e56150fca567dbd15dd757d120066b94ee44pwmns"
Jan 12 13:18:10 crc kubenswrapper[4580]: I0112 13:18:10.884821 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/e5432efd0c40cfd67b5b87e56150fca567dbd15dd757d120066b94ee44pwmns"
Jan 12 13:18:11 crc kubenswrapper[4580]: I0112 13:18:11.251516 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/e5432efd0c40cfd67b5b87e56150fca567dbd15dd757d120066b94ee44pwmns"]
Jan 12 13:18:11 crc kubenswrapper[4580]: W0112 13:18:11.255690 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd27f45e7_f85b_4b23_b849_8e1778cfe3df.slice/crio-cce8e9b51c66c2d8eac52afbb0a089af82150fa0dd54c9db444ef4b7e79e24ff WatchSource:0}: Error finding container cce8e9b51c66c2d8eac52afbb0a089af82150fa0dd54c9db444ef4b7e79e24ff: Status 404 returned error can't find the container with id cce8e9b51c66c2d8eac52afbb0a089af82150fa0dd54c9db444ef4b7e79e24ff
Jan 12 13:18:11 crc kubenswrapper[4580]: I0112 13:18:11.906665 4580 generic.go:334] "Generic (PLEG): container finished" podID="d27f45e7-f85b-4b23-b849-8e1778cfe3df" containerID="d68dd96316562b8cff43ac0e24add2b8653448b6c962b9d67f510c7cec75a299" exitCode=0
Jan 12 13:18:11 crc kubenswrapper[4580]: I0112 13:18:11.906724 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e5432efd0c40cfd67b5b87e56150fca567dbd15dd757d120066b94ee44pwmns" event={"ID":"d27f45e7-f85b-4b23-b849-8e1778cfe3df","Type":"ContainerDied","Data":"d68dd96316562b8cff43ac0e24add2b8653448b6c962b9d67f510c7cec75a299"}
Jan 12 13:18:11 crc kubenswrapper[4580]: I0112 13:18:11.906770 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e5432efd0c40cfd67b5b87e56150fca567dbd15dd757d120066b94ee44pwmns" event={"ID":"d27f45e7-f85b-4b23-b849-8e1778cfe3df","Type":"ContainerStarted","Data":"cce8e9b51c66c2d8eac52afbb0a089af82150fa0dd54c9db444ef4b7e79e24ff"}
Jan 12 13:18:13 crc kubenswrapper[4580]: I0112 13:18:13.919780 4580 generic.go:334] "Generic (PLEG): container finished" podID="d27f45e7-f85b-4b23-b849-8e1778cfe3df" containerID="9c4a3e35467808e3a9bf130e38e45c664083392a3b3a03bb85be0bfcb597a5a1" exitCode=0
Jan 12 13:18:13 crc kubenswrapper[4580]: I0112 13:18:13.919996 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e5432efd0c40cfd67b5b87e56150fca567dbd15dd757d120066b94ee44pwmns" event={"ID":"d27f45e7-f85b-4b23-b849-8e1778cfe3df","Type":"ContainerDied","Data":"9c4a3e35467808e3a9bf130e38e45c664083392a3b3a03bb85be0bfcb597a5a1"}
Jan 12 13:18:14 crc kubenswrapper[4580]: I0112 13:18:14.931873 4580 generic.go:334] "Generic (PLEG): container finished" podID="d27f45e7-f85b-4b23-b849-8e1778cfe3df" containerID="82be94ff562979987fa783c7b570899e871306e016dc98de6999d47288661730" exitCode=0
Jan 12 13:18:14 crc kubenswrapper[4580]: I0112 13:18:14.931979 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e5432efd0c40cfd67b5b87e56150fca567dbd15dd757d120066b94ee44pwmns" event={"ID":"d27f45e7-f85b-4b23-b849-8e1778cfe3df","Type":"ContainerDied","Data":"82be94ff562979987fa783c7b570899e871306e016dc98de6999d47288661730"}
Jan 12 13:18:16 crc kubenswrapper[4580]: I0112 13:18:16.145126 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/e5432efd0c40cfd67b5b87e56150fca567dbd15dd757d120066b94ee44pwmns"
Jan 12 13:18:16 crc kubenswrapper[4580]: I0112 13:18:16.155562 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d27f45e7-f85b-4b23-b849-8e1778cfe3df-bundle\") pod \"d27f45e7-f85b-4b23-b849-8e1778cfe3df\" (UID: \"d27f45e7-f85b-4b23-b849-8e1778cfe3df\") "
Jan 12 13:18:16 crc kubenswrapper[4580]: I0112 13:18:16.155697 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d27f45e7-f85b-4b23-b849-8e1778cfe3df-util\") pod \"d27f45e7-f85b-4b23-b849-8e1778cfe3df\" (UID: \"d27f45e7-f85b-4b23-b849-8e1778cfe3df\") "
Jan 12 13:18:16 crc kubenswrapper[4580]: I0112 13:18:16.155754 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ch7fd\" (UniqueName: \"kubernetes.io/projected/d27f45e7-f85b-4b23-b849-8e1778cfe3df-kube-api-access-ch7fd\") pod \"d27f45e7-f85b-4b23-b849-8e1778cfe3df\" (UID: \"d27f45e7-f85b-4b23-b849-8e1778cfe3df\") "
Jan 12 13:18:16 crc kubenswrapper[4580]: I0112 13:18:16.156377 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d27f45e7-f85b-4b23-b849-8e1778cfe3df-bundle" (OuterVolumeSpecName: "bundle") pod "d27f45e7-f85b-4b23-b849-8e1778cfe3df" (UID: "d27f45e7-f85b-4b23-b849-8e1778cfe3df"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 12 13:18:16 crc kubenswrapper[4580]: I0112 13:18:16.162842 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d27f45e7-f85b-4b23-b849-8e1778cfe3df-kube-api-access-ch7fd" (OuterVolumeSpecName: "kube-api-access-ch7fd") pod "d27f45e7-f85b-4b23-b849-8e1778cfe3df" (UID: "d27f45e7-f85b-4b23-b849-8e1778cfe3df"). InnerVolumeSpecName "kube-api-access-ch7fd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 13:18:16 crc kubenswrapper[4580]: I0112 13:18:16.169819 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d27f45e7-f85b-4b23-b849-8e1778cfe3df-util" (OuterVolumeSpecName: "util") pod "d27f45e7-f85b-4b23-b849-8e1778cfe3df" (UID: "d27f45e7-f85b-4b23-b849-8e1778cfe3df"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 12 13:18:16 crc kubenswrapper[4580]: I0112 13:18:16.257525 4580 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d27f45e7-f85b-4b23-b849-8e1778cfe3df-bundle\") on node \"crc\" DevicePath \"\""
Jan 12 13:18:16 crc kubenswrapper[4580]: I0112 13:18:16.257556 4580 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d27f45e7-f85b-4b23-b849-8e1778cfe3df-util\") on node \"crc\" DevicePath \"\""
Jan 12 13:18:16 crc kubenswrapper[4580]: I0112 13:18:16.257682 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ch7fd\" (UniqueName: \"kubernetes.io/projected/d27f45e7-f85b-4b23-b849-8e1778cfe3df-kube-api-access-ch7fd\") on node \"crc\" DevicePath \"\""
Jan 12 13:18:16 crc kubenswrapper[4580]: I0112 13:18:16.945291 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e5432efd0c40cfd67b5b87e56150fca567dbd15dd757d120066b94ee44pwmns" event={"ID":"d27f45e7-f85b-4b23-b849-8e1778cfe3df","Type":"ContainerDied","Data":"cce8e9b51c66c2d8eac52afbb0a089af82150fa0dd54c9db444ef4b7e79e24ff"}
Jan 12 13:18:16 crc kubenswrapper[4580]: I0112 13:18:16.945329 4580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cce8e9b51c66c2d8eac52afbb0a089af82150fa0dd54c9db444ef4b7e79e24ff"
Jan 12 13:18:16 crc kubenswrapper[4580]: I0112 13:18:16.945385 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/e5432efd0c40cfd67b5b87e56150fca567dbd15dd757d120066b94ee44pwmns"
Jan 12 13:18:22 crc kubenswrapper[4580]: I0112 13:18:22.534194 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-5bf8b477cb-hwd8t"]
Jan 12 13:18:22 crc kubenswrapper[4580]: E0112 13:18:22.534993 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d27f45e7-f85b-4b23-b849-8e1778cfe3df" containerName="pull"
Jan 12 13:18:22 crc kubenswrapper[4580]: I0112 13:18:22.535008 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="d27f45e7-f85b-4b23-b849-8e1778cfe3df" containerName="pull"
Jan 12 13:18:22 crc kubenswrapper[4580]: E0112 13:18:22.535033 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d27f45e7-f85b-4b23-b849-8e1778cfe3df" containerName="extract"
Jan 12 13:18:22 crc kubenswrapper[4580]: I0112 13:18:22.535039 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="d27f45e7-f85b-4b23-b849-8e1778cfe3df" containerName="extract"
Jan 12 13:18:22 crc kubenswrapper[4580]: E0112 13:18:22.535053 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d27f45e7-f85b-4b23-b849-8e1778cfe3df" containerName="util"
Jan 12 13:18:22 crc kubenswrapper[4580]: I0112 13:18:22.535059 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="d27f45e7-f85b-4b23-b849-8e1778cfe3df" containerName="util"
Jan 12 13:18:22 crc kubenswrapper[4580]: I0112 13:18:22.535184 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="d27f45e7-f85b-4b23-b849-8e1778cfe3df" containerName="extract"
Jan 12 13:18:22 crc kubenswrapper[4580]: I0112 13:18:22.535610 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-5bf8b477cb-hwd8t"
Jan 12 13:18:22 crc kubenswrapper[4580]: I0112 13:18:22.537217 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-lrqkv"
Jan 12 13:18:22 crc kubenswrapper[4580]: I0112 13:18:22.547943 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-5bf8b477cb-hwd8t"]
Jan 12 13:18:22 crc kubenswrapper[4580]: I0112 13:18:22.635280 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxwld\" (UniqueName: \"kubernetes.io/projected/bca66d95-723d-4cd6-bc4c-1a0c564606f3-kube-api-access-xxwld\") pod \"openstack-operator-controller-operator-5bf8b477cb-hwd8t\" (UID: \"bca66d95-723d-4cd6-bc4c-1a0c564606f3\") " pod="openstack-operators/openstack-operator-controller-operator-5bf8b477cb-hwd8t"
Jan 12 13:18:22 crc kubenswrapper[4580]: I0112 13:18:22.737055 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxwld\" (UniqueName: \"kubernetes.io/projected/bca66d95-723d-4cd6-bc4c-1a0c564606f3-kube-api-access-xxwld\") pod \"openstack-operator-controller-operator-5bf8b477cb-hwd8t\" (UID: \"bca66d95-723d-4cd6-bc4c-1a0c564606f3\") " pod="openstack-operators/openstack-operator-controller-operator-5bf8b477cb-hwd8t"
Jan 12 13:18:22 crc kubenswrapper[4580]: I0112 13:18:22.755902 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxwld\" (UniqueName: \"kubernetes.io/projected/bca66d95-723d-4cd6-bc4c-1a0c564606f3-kube-api-access-xxwld\") pod \"openstack-operator-controller-operator-5bf8b477cb-hwd8t\" (UID: \"bca66d95-723d-4cd6-bc4c-1a0c564606f3\") " pod="openstack-operators/openstack-operator-controller-operator-5bf8b477cb-hwd8t"
Jan 12 13:18:22 crc kubenswrapper[4580]: I0112 13:18:22.851121 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-5bf8b477cb-hwd8t"
Jan 12 13:18:23 crc kubenswrapper[4580]: I0112 13:18:23.222416 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-5bf8b477cb-hwd8t"]
Jan 12 13:18:23 crc kubenswrapper[4580]: I0112 13:18:23.981551 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-5bf8b477cb-hwd8t" event={"ID":"bca66d95-723d-4cd6-bc4c-1a0c564606f3","Type":"ContainerStarted","Data":"6bdc7f7f240cfca85a99b79ec535b843675c18bb288bc8dc63212ca18d758cb4"}
Jan 12 13:18:30 crc kubenswrapper[4580]: I0112 13:18:30.019597 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-5bf8b477cb-hwd8t" event={"ID":"bca66d95-723d-4cd6-bc4c-1a0c564606f3","Type":"ContainerStarted","Data":"3fba561216abe6c7ce72718983ff442749ca122a8710f7a4d15a0f8e7b476f43"}
Jan 12 13:18:30 crc kubenswrapper[4580]: I0112 13:18:30.020120 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-5bf8b477cb-hwd8t"
Jan 12 13:18:30 crc kubenswrapper[4580]: I0112 13:18:30.044551 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-5bf8b477cb-hwd8t" podStartSLOduration=1.491257394 podStartE2EDuration="8.044533622s" podCreationTimestamp="2026-01-12 13:18:22 +0000 UTC" firstStartedPulling="2026-01-12 13:18:23.230324658 +0000 UTC m=+702.274543348" lastFinishedPulling="2026-01-12 13:18:29.783600886 +0000 UTC m=+708.827819576" observedRunningTime="2026-01-12 13:18:30.041535867 +0000 UTC m=+709.085754557" watchObservedRunningTime="2026-01-12 13:18:30.044533622 +0000 UTC m=+709.088752312"
Jan 12 13:18:42 crc kubenswrapper[4580]: I0112 13:18:42.853908 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-5bf8b477cb-hwd8t"
Jan 12 13:19:09 crc kubenswrapper[4580]: I0112 13:19:09.494783 4580 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.048559 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-9b68f5989-4b7c9"]
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.049464 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-4b7c9"
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.056199 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-tzbck"
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.056805 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6c697f55f8-69mz9"]
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.057753 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-6c697f55f8-69mz9"
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.061317 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-k5ltt"
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.066175 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6c697f55f8-69mz9"]
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.072167 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-9b68f5989-4b7c9"]
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.076293 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-9f958b845-plhvp"]
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.082762 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-75b858dccc-nr2g4"]
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.085423 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-9f958b845-plhvp"
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.089987 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-75b858dccc-nr2g4"
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.091187 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-cq2r6"
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.095857 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-p86r4"
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.136727 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-9f958b845-plhvp"]
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.149872 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s6b4\" (UniqueName: \"kubernetes.io/projected/cbfff7ce-c184-4dee-94d5-c6ee41fc2b75-kube-api-access-5s6b4\") pod \"designate-operator-controller-manager-9f958b845-plhvp\" (UID: \"cbfff7ce-c184-4dee-94d5-c6ee41fc2b75\") " pod="openstack-operators/designate-operator-controller-manager-9f958b845-plhvp"
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.149922 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xh2q\" (UniqueName: \"kubernetes.io/projected/63a3c1f8-84b5-4648-9a74-bc1e980d5a57-kube-api-access-8xh2q\") pod \"cinder-operator-controller-manager-9b68f5989-4b7c9\" (UID: \"63a3c1f8-84b5-4648-9a74-bc1e980d5a57\") " pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-4b7c9"
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.150093 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwvw8\" (UniqueName: \"kubernetes.io/projected/7ed21cbb-5825-4538-bfb6-74f895189d83-kube-api-access-kwvw8\") pod \"barbican-operator-controller-manager-6c697f55f8-69mz9\" (UID: \"7ed21cbb-5825-4538-bfb6-74f895189d83\") " pod="openstack-operators/barbican-operator-controller-manager-6c697f55f8-69mz9"
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.150163 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7wxf\" (UniqueName: \"kubernetes.io/projected/b3716289-2aa2-4e39-b8db-7980564c976e-kube-api-access-d7wxf\") pod \"glance-operator-controller-manager-75b858dccc-nr2g4\" (UID: \"b3716289-2aa2-4e39-b8db-7980564c976e\") " pod="openstack-operators/glance-operator-controller-manager-75b858dccc-nr2g4"
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.157538 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-75b858dccc-nr2g4"]
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.168046 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-6cd7bcb4bf-nvbml"]
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.169232 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-6cd7bcb4bf-nvbml"
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.171191 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-75cb9467dc-r22fp"]
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.172140 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-75cb9467dc-r22fp"
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.172534 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-w9g4z"
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.176159 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-hvp84"
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.177784 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-6cd7bcb4bf-nvbml"]
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.184998 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-75cb9467dc-r22fp"]
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.188338 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-77c48c7859-2sg8z"]
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.189291 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-2sg8z"
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.191715 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-kcmjq"
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.191872 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert"
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.206331 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-77c48c7859-2sg8z"]
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.212269 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-78757b4889-plnl2"]
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.213181 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-plnl2"
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.216252 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-qjfx8"
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.239402 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-78757b4889-plnl2"]
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.244735 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-767fdc4f47-fckr8"]
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.245517 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-fckr8"
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.249670 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-4gjff"
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.251339 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwvw8\" (UniqueName: \"kubernetes.io/projected/7ed21cbb-5825-4538-bfb6-74f895189d83-kube-api-access-kwvw8\") pod \"barbican-operator-controller-manager-6c697f55f8-69mz9\" (UID: \"7ed21cbb-5825-4538-bfb6-74f895189d83\") " pod="openstack-operators/barbican-operator-controller-manager-6c697f55f8-69mz9"
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.251390 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7wxf\" (UniqueName: \"kubernetes.io/projected/b3716289-2aa2-4e39-b8db-7980564c976e-kube-api-access-d7wxf\") pod \"glance-operator-controller-manager-75b858dccc-nr2g4\" (UID: \"b3716289-2aa2-4e39-b8db-7980564c976e\") " pod="openstack-operators/glance-operator-controller-manager-75b858dccc-nr2g4"
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.251424 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47j45\" (UniqueName: \"kubernetes.io/projected/0286f995-6c82-4417-8a67-91b5e261a211-kube-api-access-47j45\") pod \"ironic-operator-controller-manager-78757b4889-plnl2\" (UID: \"0286f995-6c82-4417-8a67-91b5e261a211\") " pod="openstack-operators/ironic-operator-controller-manager-78757b4889-plnl2"
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.251458 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85n4n\" (UniqueName: \"kubernetes.io/projected/218c7ab4-85b0-4609-87e6-35d51283e5e0-kube-api-access-85n4n\") pod \"horizon-operator-controller-manager-75cb9467dc-r22fp\" (UID: \"218c7ab4-85b0-4609-87e6-35d51283e5e0\") " pod="openstack-operators/horizon-operator-controller-manager-75cb9467dc-r22fp"
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.251485 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1135f51b-1f4e-4866-bb7d-728be53f5be7-cert\") pod \"infra-operator-controller-manager-77c48c7859-2sg8z\" (UID: \"1135f51b-1f4e-4866-bb7d-728be53f5be7\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-2sg8z"
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.251552 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5s6b4\" (UniqueName: \"kubernetes.io/projected/cbfff7ce-c184-4dee-94d5-c6ee41fc2b75-kube-api-access-5s6b4\") pod \"designate-operator-controller-manager-9f958b845-plhvp\" (UID: \"cbfff7ce-c184-4dee-94d5-c6ee41fc2b75\") " pod="openstack-operators/designate-operator-controller-manager-9f958b845-plhvp"
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.251575 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsmc7\" (UniqueName: \"kubernetes.io/projected/8cf46bb8-ed1f-491d-90e3-1ef5ebbdfb01-kube-api-access-tsmc7\") pod \"heat-operator-controller-manager-6cd7bcb4bf-nvbml\" (UID: \"8cf46bb8-ed1f-491d-90e3-1ef5ebbdfb01\") " pod="openstack-operators/heat-operator-controller-manager-6cd7bcb4bf-nvbml"
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.251593 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpqmj\" (UniqueName: \"kubernetes.io/projected/1135f51b-1f4e-4866-bb7d-728be53f5be7-kube-api-access-qpqmj\") pod \"infra-operator-controller-manager-77c48c7859-2sg8z\" (UID: \"1135f51b-1f4e-4866-bb7d-728be53f5be7\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-2sg8z"
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.251613 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xh2q\" (UniqueName: \"kubernetes.io/projected/63a3c1f8-84b5-4648-9a74-bc1e980d5a57-kube-api-access-8xh2q\") pod \"cinder-operator-controller-manager-9b68f5989-4b7c9\" (UID: \"63a3c1f8-84b5-4648-9a74-bc1e980d5a57\") " pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-4b7c9"
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.264361 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-767fdc4f47-fckr8"]
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.270501 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-6684f856f9-w2xhg"]
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.272190 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-6684f856f9-w2xhg"
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.275051 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-zcwp9"
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.281468 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-8fxxj"]
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.286344 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xh2q\" (UniqueName: \"kubernetes.io/projected/63a3c1f8-84b5-4648-9a74-bc1e980d5a57-kube-api-access-8xh2q\") pod \"cinder-operator-controller-manager-9b68f5989-4b7c9\" (UID: \"63a3c1f8-84b5-4648-9a74-bc1e980d5a57\") " pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-4b7c9"
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.286822 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7wxf\" (UniqueName: \"kubernetes.io/projected/b3716289-2aa2-4e39-b8db-7980564c976e-kube-api-access-d7wxf\") pod \"glance-operator-controller-manager-75b858dccc-nr2g4\" (UID: \"b3716289-2aa2-4e39-b8db-7980564c976e\") " pod="openstack-operators/glance-operator-controller-manager-75b858dccc-nr2g4"
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.289166 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-8fxxj"
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.292195 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-qs5rp"
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.298329 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-8fxxj"]
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.300327 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwvw8\" (UniqueName: \"kubernetes.io/projected/7ed21cbb-5825-4538-bfb6-74f895189d83-kube-api-access-kwvw8\") pod \"barbican-operator-controller-manager-6c697f55f8-69mz9\" (UID: \"7ed21cbb-5825-4538-bfb6-74f895189d83\") " pod="openstack-operators/barbican-operator-controller-manager-6c697f55f8-69mz9"
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.309312 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-cb4666565-ckfhs"]
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.310212 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-6684f856f9-w2xhg"]
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.310317 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-ckfhs"
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.310679 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5s6b4\" (UniqueName: \"kubernetes.io/projected/cbfff7ce-c184-4dee-94d5-c6ee41fc2b75-kube-api-access-5s6b4\") pod \"designate-operator-controller-manager-9f958b845-plhvp\" (UID: \"cbfff7ce-c184-4dee-94d5-c6ee41fc2b75\") " pod="openstack-operators/designate-operator-controller-manager-9f958b845-plhvp"
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.317245 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-5977959f9c-sgg8q"]
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.317603 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-nxw2d"
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.317901 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5977959f9c-sgg8q"
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.319770 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-sl6z5"
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.322394 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-cb4666565-ckfhs"]
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.327310 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-nbzhm"]
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.328084 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-nbzhm"
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.331719 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-vc2xs"
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.337795 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5977959f9c-sgg8q"]
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.352532 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm4k8\" (UniqueName: \"kubernetes.io/projected/87188751-ba97-4f25-ba2c-70514594cb4a-kube-api-access-bm4k8\") pod \"mariadb-operator-controller-manager-c87fff755-8fxxj\" (UID: \"87188751-ba97-4f25-ba2c-70514594cb4a\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-8fxxj"
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.352567 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l89r7\" (UniqueName: \"kubernetes.io/projected/bf14de2d-3f35-4c32-905c-0a133a4fbafe-kube-api-access-l89r7\") pod \"manila-operator-controller-manager-6684f856f9-w2xhg\" (UID: \"bf14de2d-3f35-4c32-905c-0a133a4fbafe\") " pod="openstack-operators/manila-operator-controller-manager-6684f856f9-w2xhg"
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.352600 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcdwq\" (UniqueName: \"kubernetes.io/projected/eb01c7cd-f8d5-414f-a9f1-cf75a7a6ac1b-kube-api-access-fcdwq\") pod \"nova-operator-controller-manager-5977959f9c-sgg8q\" (UID: \"eb01c7cd-f8d5-414f-a9f1-cf75a7a6ac1b\") " pod="openstack-operators/nova-operator-controller-manager-5977959f9c-sgg8q"
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.352632 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkbcr\" (UniqueName: \"kubernetes.io/projected/2d0f98f6-67ec-4253-a344-8aa185679126-kube-api-access-rkbcr\") pod \"neutron-operator-controller-manager-cb4666565-ckfhs\" (UID: \"2d0f98f6-67ec-4253-a344-8aa185679126\") " pod="openstack-operators/neutron-operator-controller-manager-cb4666565-ckfhs"
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.352656 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47j45\" (UniqueName: \"kubernetes.io/projected/0286f995-6c82-4417-8a67-91b5e261a211-kube-api-access-47j45\") pod \"ironic-operator-controller-manager-78757b4889-plnl2\" (UID: \"0286f995-6c82-4417-8a67-91b5e261a211\") " pod="openstack-operators/ironic-operator-controller-manager-78757b4889-plnl2"
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.352682 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85n4n\" (UniqueName: \"kubernetes.io/projected/218c7ab4-85b0-4609-87e6-35d51283e5e0-kube-api-access-85n4n\") pod \"horizon-operator-controller-manager-75cb9467dc-r22fp\" (UID: \"218c7ab4-85b0-4609-87e6-35d51283e5e0\") " pod="openstack-operators/horizon-operator-controller-manager-75cb9467dc-r22fp"
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.352736 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1135f51b-1f4e-4866-bb7d-728be53f5be7-cert\") pod \"infra-operator-controller-manager-77c48c7859-2sg8z\" (UID: \"1135f51b-1f4e-4866-bb7d-728be53f5be7\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-2sg8z"
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.352766 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55lmg\" (UniqueName: \"kubernetes.io/projected/726a74db-a499-4c38-8258-b711bc0dc30b-kube-api-access-55lmg\") pod \"keystone-operator-controller-manager-767fdc4f47-fckr8\" (UID: \"726a74db-a499-4c38-8258-b711bc0dc30b\") " pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-fckr8"
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.352816 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsmc7\" (UniqueName: \"kubernetes.io/projected/8cf46bb8-ed1f-491d-90e3-1ef5ebbdfb01-kube-api-access-tsmc7\") pod \"heat-operator-controller-manager-6cd7bcb4bf-nvbml\" (UID: \"8cf46bb8-ed1f-491d-90e3-1ef5ebbdfb01\") " pod="openstack-operators/heat-operator-controller-manager-6cd7bcb4bf-nvbml"
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.352832 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccnvt\" (UniqueName: \"kubernetes.io/projected/7209eb4d-53dc-4c30-9b80-8863acbea5a6-kube-api-access-ccnvt\") pod \"octavia-operator-controller-manager-7fc9b76cf6-nbzhm\" (UID: \"7209eb4d-53dc-4c30-9b80-8863acbea5a6\") " pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-nbzhm"
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.352858 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpqmj\" (UniqueName: \"kubernetes.io/projected/1135f51b-1f4e-4866-bb7d-728be53f5be7-kube-api-access-qpqmj\") pod \"infra-operator-controller-manager-77c48c7859-2sg8z\" (UID: \"1135f51b-1f4e-4866-bb7d-728be53f5be7\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-2sg8z"
Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.353944 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-nbzhm"]
Jan 12 13:19:10 crc kubenswrapper[4580]: E0112 13:19:10.355639 4580 secret.go:188] Couldn't get secret
openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 12 13:19:10 crc kubenswrapper[4580]: E0112 13:19:10.355684 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1135f51b-1f4e-4866-bb7d-728be53f5be7-cert podName:1135f51b-1f4e-4866-bb7d-728be53f5be7 nodeName:}" failed. No retries permitted until 2026-01-12 13:19:10.855667356 +0000 UTC m=+749.899886047 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1135f51b-1f4e-4866-bb7d-728be53f5be7-cert") pod "infra-operator-controller-manager-77c48c7859-2sg8z" (UID: "1135f51b-1f4e-4866-bb7d-728be53f5be7") : secret "infra-operator-webhook-server-cert" not found Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.361392 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-cf664874d-vznwd"] Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.362141 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-cf664874d-vznwd" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.367464 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-4b7c9" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.368267 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-654686dcb9z5ths"] Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.368627 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-g5vzf" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.370495 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-654686dcb9z5ths" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.372342 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85n4n\" (UniqueName: \"kubernetes.io/projected/218c7ab4-85b0-4609-87e6-35d51283e5e0-kube-api-access-85n4n\") pod \"horizon-operator-controller-manager-75cb9467dc-r22fp\" (UID: \"218c7ab4-85b0-4609-87e6-35d51283e5e0\") " pod="openstack-operators/horizon-operator-controller-manager-75cb9467dc-r22fp" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.372774 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.373031 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsmc7\" (UniqueName: \"kubernetes.io/projected/8cf46bb8-ed1f-491d-90e3-1ef5ebbdfb01-kube-api-access-tsmc7\") pod \"heat-operator-controller-manager-6cd7bcb4bf-nvbml\" (UID: \"8cf46bb8-ed1f-491d-90e3-1ef5ebbdfb01\") " pod="openstack-operators/heat-operator-controller-manager-6cd7bcb4bf-nvbml" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.373096 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpqmj\" (UniqueName: \"kubernetes.io/projected/1135f51b-1f4e-4866-bb7d-728be53f5be7-kube-api-access-qpqmj\") pod \"infra-operator-controller-manager-77c48c7859-2sg8z\" (UID: \"1135f51b-1f4e-4866-bb7d-728be53f5be7\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-2sg8z" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.373387 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-d8c85" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.375008 4580 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-47j45\" (UniqueName: \"kubernetes.io/projected/0286f995-6c82-4417-8a67-91b5e261a211-kube-api-access-47j45\") pod \"ironic-operator-controller-manager-78757b4889-plnl2\" (UID: \"0286f995-6c82-4417-8a67-91b5e261a211\") " pod="openstack-operators/ironic-operator-controller-manager-78757b4889-plnl2" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.397552 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-cf664874d-vznwd"] Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.398740 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-6c697f55f8-69mz9" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.408723 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-78c6bccb56-mggmh"] Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.409967 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78c6bccb56-mggmh" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.411594 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-grv77" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.414956 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-654686dcb9z5ths"] Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.419816 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78c6bccb56-mggmh"] Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.430953 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-6469d85bcb-smn7v"] Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.434759 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-9f958b845-plhvp" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.436948 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6469d85bcb-smn7v"] Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.437056 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-6469d85bcb-smn7v" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.439371 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-2kzzl" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.444486 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-75b858dccc-nr2g4" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.453714 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55lmg\" (UniqueName: \"kubernetes.io/projected/726a74db-a499-4c38-8258-b711bc0dc30b-kube-api-access-55lmg\") pod \"keystone-operator-controller-manager-767fdc4f47-fckr8\" (UID: \"726a74db-a499-4c38-8258-b711bc0dc30b\") " pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-fckr8" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.453765 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ccb61890-3cf7-45aa-974c-693f0d14c14a-cert\") pod \"openstack-baremetal-operator-controller-manager-654686dcb9z5ths\" (UID: \"ccb61890-3cf7-45aa-974c-693f0d14c14a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-654686dcb9z5ths" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.453814 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clvl7\" (UniqueName: \"kubernetes.io/projected/742c889f-d87d-4d61-82f8-2fa3ffc3d6b2-kube-api-access-clvl7\") pod \"ovn-operator-controller-manager-cf664874d-vznwd\" (UID: \"742c889f-d87d-4d61-82f8-2fa3ffc3d6b2\") " pod="openstack-operators/ovn-operator-controller-manager-cf664874d-vznwd" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.453846 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccnvt\" (UniqueName: \"kubernetes.io/projected/7209eb4d-53dc-4c30-9b80-8863acbea5a6-kube-api-access-ccnvt\") pod \"octavia-operator-controller-manager-7fc9b76cf6-nbzhm\" (UID: \"7209eb4d-53dc-4c30-9b80-8863acbea5a6\") " pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-nbzhm" Jan 12 13:19:10 crc 
kubenswrapper[4580]: I0112 13:19:10.453885 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bm4k8\" (UniqueName: \"kubernetes.io/projected/87188751-ba97-4f25-ba2c-70514594cb4a-kube-api-access-bm4k8\") pod \"mariadb-operator-controller-manager-c87fff755-8fxxj\" (UID: \"87188751-ba97-4f25-ba2c-70514594cb4a\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-8fxxj" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.453906 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l89r7\" (UniqueName: \"kubernetes.io/projected/bf14de2d-3f35-4c32-905c-0a133a4fbafe-kube-api-access-l89r7\") pod \"manila-operator-controller-manager-6684f856f9-w2xhg\" (UID: \"bf14de2d-3f35-4c32-905c-0a133a4fbafe\") " pod="openstack-operators/manila-operator-controller-manager-6684f856f9-w2xhg" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.453938 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcdwq\" (UniqueName: \"kubernetes.io/projected/eb01c7cd-f8d5-414f-a9f1-cf75a7a6ac1b-kube-api-access-fcdwq\") pod \"nova-operator-controller-manager-5977959f9c-sgg8q\" (UID: \"eb01c7cd-f8d5-414f-a9f1-cf75a7a6ac1b\") " pod="openstack-operators/nova-operator-controller-manager-5977959f9c-sgg8q" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.453963 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkbcr\" (UniqueName: \"kubernetes.io/projected/2d0f98f6-67ec-4253-a344-8aa185679126-kube-api-access-rkbcr\") pod \"neutron-operator-controller-manager-cb4666565-ckfhs\" (UID: \"2d0f98f6-67ec-4253-a344-8aa185679126\") " pod="openstack-operators/neutron-operator-controller-manager-cb4666565-ckfhs" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.453999 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-smqsv\" (UniqueName: \"kubernetes.io/projected/f50c1909-7ba3-4d92-9e4e-2cbd2602e340-kube-api-access-smqsv\") pod \"placement-operator-controller-manager-78c6bccb56-mggmh\" (UID: \"f50c1909-7ba3-4d92-9e4e-2cbd2602e340\") " pod="openstack-operators/placement-operator-controller-manager-78c6bccb56-mggmh" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.454022 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcfkx\" (UniqueName: \"kubernetes.io/projected/ccb61890-3cf7-45aa-974c-693f0d14c14a-kube-api-access-zcfkx\") pod \"openstack-baremetal-operator-controller-manager-654686dcb9z5ths\" (UID: \"ccb61890-3cf7-45aa-974c-693f0d14c14a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-654686dcb9z5ths" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.454042 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w4rc\" (UniqueName: \"kubernetes.io/projected/6a4af572-980a-4c9b-8d01-df30e894dcda-kube-api-access-2w4rc\") pod \"swift-operator-controller-manager-6469d85bcb-smn7v\" (UID: \"6a4af572-980a-4c9b-8d01-df30e894dcda\") " pod="openstack-operators/swift-operator-controller-manager-6469d85bcb-smn7v" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.475833 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l89r7\" (UniqueName: \"kubernetes.io/projected/bf14de2d-3f35-4c32-905c-0a133a4fbafe-kube-api-access-l89r7\") pod \"manila-operator-controller-manager-6684f856f9-w2xhg\" (UID: \"bf14de2d-3f35-4c32-905c-0a133a4fbafe\") " pod="openstack-operators/manila-operator-controller-manager-6684f856f9-w2xhg" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.485229 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-74bd5457c5-95bcj"] Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 
13:19:10.486344 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-74bd5457c5-95bcj" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.487359 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcdwq\" (UniqueName: \"kubernetes.io/projected/eb01c7cd-f8d5-414f-a9f1-cf75a7a6ac1b-kube-api-access-fcdwq\") pod \"nova-operator-controller-manager-5977959f9c-sgg8q\" (UID: \"eb01c7cd-f8d5-414f-a9f1-cf75a7a6ac1b\") " pod="openstack-operators/nova-operator-controller-manager-5977959f9c-sgg8q" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.488781 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55lmg\" (UniqueName: \"kubernetes.io/projected/726a74db-a499-4c38-8258-b711bc0dc30b-kube-api-access-55lmg\") pod \"keystone-operator-controller-manager-767fdc4f47-fckr8\" (UID: \"726a74db-a499-4c38-8258-b711bc0dc30b\") " pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-fckr8" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.489253 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-74bd5457c5-95bcj"] Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.489787 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bm4k8\" (UniqueName: \"kubernetes.io/projected/87188751-ba97-4f25-ba2c-70514594cb4a-kube-api-access-bm4k8\") pod \"mariadb-operator-controller-manager-c87fff755-8fxxj\" (UID: \"87188751-ba97-4f25-ba2c-70514594cb4a\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-8fxxj" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.490701 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-nw6hd" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.491275 
4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccnvt\" (UniqueName: \"kubernetes.io/projected/7209eb4d-53dc-4c30-9b80-8863acbea5a6-kube-api-access-ccnvt\") pod \"octavia-operator-controller-manager-7fc9b76cf6-nbzhm\" (UID: \"7209eb4d-53dc-4c30-9b80-8863acbea5a6\") " pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-nbzhm" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.491630 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkbcr\" (UniqueName: \"kubernetes.io/projected/2d0f98f6-67ec-4253-a344-8aa185679126-kube-api-access-rkbcr\") pod \"neutron-operator-controller-manager-cb4666565-ckfhs\" (UID: \"2d0f98f6-67ec-4253-a344-8aa185679126\") " pod="openstack-operators/neutron-operator-controller-manager-cb4666565-ckfhs" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.501575 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-6cd7bcb4bf-nvbml" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.526985 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-75cb9467dc-r22fp" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.540181 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-plnl2" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.556114 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smqsv\" (UniqueName: \"kubernetes.io/projected/f50c1909-7ba3-4d92-9e4e-2cbd2602e340-kube-api-access-smqsv\") pod \"placement-operator-controller-manager-78c6bccb56-mggmh\" (UID: \"f50c1909-7ba3-4d92-9e4e-2cbd2602e340\") " pod="openstack-operators/placement-operator-controller-manager-78c6bccb56-mggmh" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.556325 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ll6rb\" (UniqueName: \"kubernetes.io/projected/ed127163-4a57-4b95-9dd7-4c856bd3d126-kube-api-access-ll6rb\") pod \"telemetry-operator-controller-manager-74bd5457c5-95bcj\" (UID: \"ed127163-4a57-4b95-9dd7-4c856bd3d126\") " pod="openstack-operators/telemetry-operator-controller-manager-74bd5457c5-95bcj" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.556415 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcfkx\" (UniqueName: \"kubernetes.io/projected/ccb61890-3cf7-45aa-974c-693f0d14c14a-kube-api-access-zcfkx\") pod \"openstack-baremetal-operator-controller-manager-654686dcb9z5ths\" (UID: \"ccb61890-3cf7-45aa-974c-693f0d14c14a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-654686dcb9z5ths" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.556521 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2w4rc\" (UniqueName: \"kubernetes.io/projected/6a4af572-980a-4c9b-8d01-df30e894dcda-kube-api-access-2w4rc\") pod \"swift-operator-controller-manager-6469d85bcb-smn7v\" (UID: \"6a4af572-980a-4c9b-8d01-df30e894dcda\") " pod="openstack-operators/swift-operator-controller-manager-6469d85bcb-smn7v" Jan 12 
13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.556665 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ccb61890-3cf7-45aa-974c-693f0d14c14a-cert\") pod \"openstack-baremetal-operator-controller-manager-654686dcb9z5ths\" (UID: \"ccb61890-3cf7-45aa-974c-693f0d14c14a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-654686dcb9z5ths" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.556852 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clvl7\" (UniqueName: \"kubernetes.io/projected/742c889f-d87d-4d61-82f8-2fa3ffc3d6b2-kube-api-access-clvl7\") pod \"ovn-operator-controller-manager-cf664874d-vznwd\" (UID: \"742c889f-d87d-4d61-82f8-2fa3ffc3d6b2\") " pod="openstack-operators/ovn-operator-controller-manager-cf664874d-vznwd" Jan 12 13:19:10 crc kubenswrapper[4580]: E0112 13:19:10.558248 4580 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 12 13:19:10 crc kubenswrapper[4580]: E0112 13:19:10.558395 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ccb61890-3cf7-45aa-974c-693f0d14c14a-cert podName:ccb61890-3cf7-45aa-974c-693f0d14c14a nodeName:}" failed. No retries permitted until 2026-01-12 13:19:11.058377706 +0000 UTC m=+750.102596397 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ccb61890-3cf7-45aa-974c-693f0d14c14a-cert") pod "openstack-baremetal-operator-controller-manager-654686dcb9z5ths" (UID: "ccb61890-3cf7-45aa-974c-693f0d14c14a") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.558634 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-698b874cb5-4v5jb"] Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.559850 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-698b874cb5-4v5jb" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.563523 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-fckr8" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.564627 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-jbspq" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.568710 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-698b874cb5-4v5jb"] Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.575901 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clvl7\" (UniqueName: \"kubernetes.io/projected/742c889f-d87d-4d61-82f8-2fa3ffc3d6b2-kube-api-access-clvl7\") pod \"ovn-operator-controller-manager-cf664874d-vznwd\" (UID: \"742c889f-d87d-4d61-82f8-2fa3ffc3d6b2\") " pod="openstack-operators/ovn-operator-controller-manager-cf664874d-vznwd" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.576543 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smqsv\" (UniqueName: 
\"kubernetes.io/projected/f50c1909-7ba3-4d92-9e4e-2cbd2602e340-kube-api-access-smqsv\") pod \"placement-operator-controller-manager-78c6bccb56-mggmh\" (UID: \"f50c1909-7ba3-4d92-9e4e-2cbd2602e340\") " pod="openstack-operators/placement-operator-controller-manager-78c6bccb56-mggmh" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.579484 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcfkx\" (UniqueName: \"kubernetes.io/projected/ccb61890-3cf7-45aa-974c-693f0d14c14a-kube-api-access-zcfkx\") pod \"openstack-baremetal-operator-controller-manager-654686dcb9z5ths\" (UID: \"ccb61890-3cf7-45aa-974c-693f0d14c14a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-654686dcb9z5ths" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.583421 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2w4rc\" (UniqueName: \"kubernetes.io/projected/6a4af572-980a-4c9b-8d01-df30e894dcda-kube-api-access-2w4rc\") pod \"swift-operator-controller-manager-6469d85bcb-smn7v\" (UID: \"6a4af572-980a-4c9b-8d01-df30e894dcda\") " pod="openstack-operators/swift-operator-controller-manager-6469d85bcb-smn7v" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.647990 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-6684f856f9-w2xhg" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.652276 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-8fxxj" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.655763 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-ckfhs" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.657598 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-64cd966744-2z5v7"] Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.658359 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-2z5v7" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.658805 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8mcq\" (UniqueName: \"kubernetes.io/projected/00ccc719-ee01-4ff4-934b-6e6fbadaa57c-kube-api-access-m8mcq\") pod \"test-operator-controller-manager-698b874cb5-4v5jb\" (UID: \"00ccc719-ee01-4ff4-934b-6e6fbadaa57c\") " pod="openstack-operators/test-operator-controller-manager-698b874cb5-4v5jb" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.658853 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ll6rb\" (UniqueName: \"kubernetes.io/projected/ed127163-4a57-4b95-9dd7-4c856bd3d126-kube-api-access-ll6rb\") pod \"telemetry-operator-controller-manager-74bd5457c5-95bcj\" (UID: \"ed127163-4a57-4b95-9dd7-4c856bd3d126\") " pod="openstack-operators/telemetry-operator-controller-manager-74bd5457c5-95bcj" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.661082 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-bkdj2" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.668778 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5977959f9c-sgg8q" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.682056 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-64cd966744-2z5v7"] Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.687657 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ll6rb\" (UniqueName: \"kubernetes.io/projected/ed127163-4a57-4b95-9dd7-4c856bd3d126-kube-api-access-ll6rb\") pod \"telemetry-operator-controller-manager-74bd5457c5-95bcj\" (UID: \"ed127163-4a57-4b95-9dd7-4c856bd3d126\") " pod="openstack-operators/telemetry-operator-controller-manager-74bd5457c5-95bcj" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.717816 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-nbzhm" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.724676 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-cf664874d-vznwd" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.754006 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78c6bccb56-mggmh" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.760965 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfdst\" (UniqueName: \"kubernetes.io/projected/56a7e345-fce1-44a5-aab4-8d82293bd5ee-kube-api-access-mfdst\") pod \"watcher-operator-controller-manager-64cd966744-2z5v7\" (UID: \"56a7e345-fce1-44a5-aab4-8d82293bd5ee\") " pod="openstack-operators/watcher-operator-controller-manager-64cd966744-2z5v7" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.761068 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8mcq\" (UniqueName: \"kubernetes.io/projected/00ccc719-ee01-4ff4-934b-6e6fbadaa57c-kube-api-access-m8mcq\") pod \"test-operator-controller-manager-698b874cb5-4v5jb\" (UID: \"00ccc719-ee01-4ff4-934b-6e6fbadaa57c\") " pod="openstack-operators/test-operator-controller-manager-698b874cb5-4v5jb" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.762064 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-6469d85bcb-smn7v" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.800388 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8mcq\" (UniqueName: \"kubernetes.io/projected/00ccc719-ee01-4ff4-934b-6e6fbadaa57c-kube-api-access-m8mcq\") pod \"test-operator-controller-manager-698b874cb5-4v5jb\" (UID: \"00ccc719-ee01-4ff4-934b-6e6fbadaa57c\") " pod="openstack-operators/test-operator-controller-manager-698b874cb5-4v5jb" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.813487 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-74bd5457c5-95bcj" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.840177 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6659c7dc85-4p8jr"] Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.841330 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6659c7dc85-4p8jr" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.850334 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6659c7dc85-4p8jr"] Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.852556 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.852811 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-cgjlj" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.852949 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.864287 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcd7v\" (UniqueName: \"kubernetes.io/projected/87237fc1-15cd-4dd9-bcfe-5a334d366896-kube-api-access-dcd7v\") pod \"openstack-operator-controller-manager-6659c7dc85-4p8jr\" (UID: \"87237fc1-15cd-4dd9-bcfe-5a334d366896\") " pod="openstack-operators/openstack-operator-controller-manager-6659c7dc85-4p8jr" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.864338 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfdst\" (UniqueName: 
\"kubernetes.io/projected/56a7e345-fce1-44a5-aab4-8d82293bd5ee-kube-api-access-mfdst\") pod \"watcher-operator-controller-manager-64cd966744-2z5v7\" (UID: \"56a7e345-fce1-44a5-aab4-8d82293bd5ee\") " pod="openstack-operators/watcher-operator-controller-manager-64cd966744-2z5v7" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.864387 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/87237fc1-15cd-4dd9-bcfe-5a334d366896-metrics-certs\") pod \"openstack-operator-controller-manager-6659c7dc85-4p8jr\" (UID: \"87237fc1-15cd-4dd9-bcfe-5a334d366896\") " pod="openstack-operators/openstack-operator-controller-manager-6659c7dc85-4p8jr" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.864402 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/87237fc1-15cd-4dd9-bcfe-5a334d366896-webhook-certs\") pod \"openstack-operator-controller-manager-6659c7dc85-4p8jr\" (UID: \"87237fc1-15cd-4dd9-bcfe-5a334d366896\") " pod="openstack-operators/openstack-operator-controller-manager-6659c7dc85-4p8jr" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.864487 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1135f51b-1f4e-4866-bb7d-728be53f5be7-cert\") pod \"infra-operator-controller-manager-77c48c7859-2sg8z\" (UID: \"1135f51b-1f4e-4866-bb7d-728be53f5be7\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-2sg8z" Jan 12 13:19:10 crc kubenswrapper[4580]: E0112 13:19:10.864592 4580 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 12 13:19:10 crc kubenswrapper[4580]: E0112 13:19:10.864631 4580 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/1135f51b-1f4e-4866-bb7d-728be53f5be7-cert podName:1135f51b-1f4e-4866-bb7d-728be53f5be7 nodeName:}" failed. No retries permitted until 2026-01-12 13:19:11.864619219 +0000 UTC m=+750.908837910 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1135f51b-1f4e-4866-bb7d-728be53f5be7-cert") pod "infra-operator-controller-manager-77c48c7859-2sg8z" (UID: "1135f51b-1f4e-4866-bb7d-728be53f5be7") : secret "infra-operator-webhook-server-cert" not found Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.885195 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6c697f55f8-69mz9"] Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.890340 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-698b874cb5-4v5jb" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.933821 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfdst\" (UniqueName: \"kubernetes.io/projected/56a7e345-fce1-44a5-aab4-8d82293bd5ee-kube-api-access-mfdst\") pod \"watcher-operator-controller-manager-64cd966744-2z5v7\" (UID: \"56a7e345-fce1-44a5-aab4-8d82293bd5ee\") " pod="openstack-operators/watcher-operator-controller-manager-64cd966744-2z5v7" Jan 12 13:19:10 crc kubenswrapper[4580]: W0112 13:19:10.940125 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ed21cbb_5825_4538_bfb6_74f895189d83.slice/crio-1fecfbed106d64a7bebb28bbbd0c30173ed112e61dcca79b2bb9ffb956957f2c WatchSource:0}: Error finding container 1fecfbed106d64a7bebb28bbbd0c30173ed112e61dcca79b2bb9ffb956957f2c: Status 404 returned error can't find the container with id 1fecfbed106d64a7bebb28bbbd0c30173ed112e61dcca79b2bb9ffb956957f2c Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 
13:19:10.967604 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcd7v\" (UniqueName: \"kubernetes.io/projected/87237fc1-15cd-4dd9-bcfe-5a334d366896-kube-api-access-dcd7v\") pod \"openstack-operator-controller-manager-6659c7dc85-4p8jr\" (UID: \"87237fc1-15cd-4dd9-bcfe-5a334d366896\") " pod="openstack-operators/openstack-operator-controller-manager-6659c7dc85-4p8jr" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.967702 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/87237fc1-15cd-4dd9-bcfe-5a334d366896-webhook-certs\") pod \"openstack-operator-controller-manager-6659c7dc85-4p8jr\" (UID: \"87237fc1-15cd-4dd9-bcfe-5a334d366896\") " pod="openstack-operators/openstack-operator-controller-manager-6659c7dc85-4p8jr" Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.967725 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/87237fc1-15cd-4dd9-bcfe-5a334d366896-metrics-certs\") pod \"openstack-operator-controller-manager-6659c7dc85-4p8jr\" (UID: \"87237fc1-15cd-4dd9-bcfe-5a334d366896\") " pod="openstack-operators/openstack-operator-controller-manager-6659c7dc85-4p8jr" Jan 12 13:19:10 crc kubenswrapper[4580]: E0112 13:19:10.967910 4580 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 12 13:19:10 crc kubenswrapper[4580]: E0112 13:19:10.967979 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87237fc1-15cd-4dd9-bcfe-5a334d366896-metrics-certs podName:87237fc1-15cd-4dd9-bcfe-5a334d366896 nodeName:}" failed. No retries permitted until 2026-01-12 13:19:11.467959424 +0000 UTC m=+750.512178104 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/87237fc1-15cd-4dd9-bcfe-5a334d366896-metrics-certs") pod "openstack-operator-controller-manager-6659c7dc85-4p8jr" (UID: "87237fc1-15cd-4dd9-bcfe-5a334d366896") : secret "metrics-server-cert" not found Jan 12 13:19:10 crc kubenswrapper[4580]: E0112 13:19:10.968338 4580 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 12 13:19:10 crc kubenswrapper[4580]: E0112 13:19:10.968401 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87237fc1-15cd-4dd9-bcfe-5a334d366896-webhook-certs podName:87237fc1-15cd-4dd9-bcfe-5a334d366896 nodeName:}" failed. No retries permitted until 2026-01-12 13:19:11.468376768 +0000 UTC m=+750.512595458 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/87237fc1-15cd-4dd9-bcfe-5a334d366896-webhook-certs") pod "openstack-operator-controller-manager-6659c7dc85-4p8jr" (UID: "87237fc1-15cd-4dd9-bcfe-5a334d366896") : secret "webhook-server-cert" not found Jan 12 13:19:10 crc kubenswrapper[4580]: I0112 13:19:10.988263 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-9b68f5989-4b7c9"] Jan 12 13:19:11 crc kubenswrapper[4580]: I0112 13:19:10.995356 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-p4m8m"] Jan 12 13:19:11 crc kubenswrapper[4580]: I0112 13:19:10.996090 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-p4m8m" Jan 12 13:19:11 crc kubenswrapper[4580]: I0112 13:19:10.999657 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-p4m8m"] Jan 12 13:19:11 crc kubenswrapper[4580]: I0112 13:19:11.000525 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-vz6wx" Jan 12 13:19:11 crc kubenswrapper[4580]: I0112 13:19:11.004234 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcd7v\" (UniqueName: \"kubernetes.io/projected/87237fc1-15cd-4dd9-bcfe-5a334d366896-kube-api-access-dcd7v\") pod \"openstack-operator-controller-manager-6659c7dc85-4p8jr\" (UID: \"87237fc1-15cd-4dd9-bcfe-5a334d366896\") " pod="openstack-operators/openstack-operator-controller-manager-6659c7dc85-4p8jr" Jan 12 13:19:11 crc kubenswrapper[4580]: I0112 13:19:11.055916 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-2z5v7" Jan 12 13:19:11 crc kubenswrapper[4580]: I0112 13:19:11.069715 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89k4p\" (UniqueName: \"kubernetes.io/projected/520c9385-c952-45a9-b1ce-2ad913758239-kube-api-access-89k4p\") pod \"rabbitmq-cluster-operator-manager-668c99d594-p4m8m\" (UID: \"520c9385-c952-45a9-b1ce-2ad913758239\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-p4m8m" Jan 12 13:19:11 crc kubenswrapper[4580]: I0112 13:19:11.069760 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ccb61890-3cf7-45aa-974c-693f0d14c14a-cert\") pod \"openstack-baremetal-operator-controller-manager-654686dcb9z5ths\" (UID: \"ccb61890-3cf7-45aa-974c-693f0d14c14a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-654686dcb9z5ths" Jan 12 13:19:11 crc kubenswrapper[4580]: E0112 13:19:11.069864 4580 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 12 13:19:11 crc kubenswrapper[4580]: E0112 13:19:11.069902 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ccb61890-3cf7-45aa-974c-693f0d14c14a-cert podName:ccb61890-3cf7-45aa-974c-693f0d14c14a nodeName:}" failed. No retries permitted until 2026-01-12 13:19:12.069888667 +0000 UTC m=+751.114107357 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ccb61890-3cf7-45aa-974c-693f0d14c14a-cert") pod "openstack-baremetal-operator-controller-manager-654686dcb9z5ths" (UID: "ccb61890-3cf7-45aa-974c-693f0d14c14a") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 12 13:19:11 crc kubenswrapper[4580]: I0112 13:19:11.074878 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-9f958b845-plhvp"] Jan 12 13:19:11 crc kubenswrapper[4580]: W0112 13:19:11.089980 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbfff7ce_c184_4dee_94d5_c6ee41fc2b75.slice/crio-fa33657cbcf7e0932f47f72842c7b7dd9c6ed92d956ba9aba5117a78d2f00bdb WatchSource:0}: Error finding container fa33657cbcf7e0932f47f72842c7b7dd9c6ed92d956ba9aba5117a78d2f00bdb: Status 404 returned error can't find the container with id fa33657cbcf7e0932f47f72842c7b7dd9c6ed92d956ba9aba5117a78d2f00bdb Jan 12 13:19:11 crc kubenswrapper[4580]: I0112 13:19:11.171430 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89k4p\" (UniqueName: \"kubernetes.io/projected/520c9385-c952-45a9-b1ce-2ad913758239-kube-api-access-89k4p\") pod \"rabbitmq-cluster-operator-manager-668c99d594-p4m8m\" (UID: \"520c9385-c952-45a9-b1ce-2ad913758239\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-p4m8m" Jan 12 13:19:11 crc kubenswrapper[4580]: I0112 13:19:11.188555 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89k4p\" (UniqueName: \"kubernetes.io/projected/520c9385-c952-45a9-b1ce-2ad913758239-kube-api-access-89k4p\") pod \"rabbitmq-cluster-operator-manager-668c99d594-p4m8m\" (UID: \"520c9385-c952-45a9-b1ce-2ad913758239\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-p4m8m" Jan 12 13:19:11 crc kubenswrapper[4580]: 
I0112 13:19:11.247574 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-75cb9467dc-r22fp"] Jan 12 13:19:11 crc kubenswrapper[4580]: I0112 13:19:11.259048 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-75b858dccc-nr2g4"] Jan 12 13:19:11 crc kubenswrapper[4580]: W0112 13:19:11.289317 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3716289_2aa2_4e39_b8db_7980564c976e.slice/crio-14dcaacbd2789649575ca5158b4176fcf5e6c74350681a068a72a63275e5841b WatchSource:0}: Error finding container 14dcaacbd2789649575ca5158b4176fcf5e6c74350681a068a72a63275e5841b: Status 404 returned error can't find the container with id 14dcaacbd2789649575ca5158b4176fcf5e6c74350681a068a72a63275e5841b Jan 12 13:19:11 crc kubenswrapper[4580]: I0112 13:19:11.296731 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-4b7c9" event={"ID":"63a3c1f8-84b5-4648-9a74-bc1e980d5a57","Type":"ContainerStarted","Data":"3a046aeab83409dc91f8a4cd28b41b284974dd350561dab9316e37912eb50963"} Jan 12 13:19:11 crc kubenswrapper[4580]: I0112 13:19:11.296778 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-9f958b845-plhvp" event={"ID":"cbfff7ce-c184-4dee-94d5-c6ee41fc2b75","Type":"ContainerStarted","Data":"fa33657cbcf7e0932f47f72842c7b7dd9c6ed92d956ba9aba5117a78d2f00bdb"} Jan 12 13:19:11 crc kubenswrapper[4580]: I0112 13:19:11.301567 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-6c697f55f8-69mz9" event={"ID":"7ed21cbb-5825-4538-bfb6-74f895189d83","Type":"ContainerStarted","Data":"1fecfbed106d64a7bebb28bbbd0c30173ed112e61dcca79b2bb9ffb956957f2c"} Jan 12 13:19:11 crc kubenswrapper[4580]: I0112 
13:19:11.391223 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-p4m8m" Jan 12 13:19:11 crc kubenswrapper[4580]: I0112 13:19:11.417576 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-78757b4889-plnl2"] Jan 12 13:19:11 crc kubenswrapper[4580]: I0112 13:19:11.438724 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-6cd7bcb4bf-nvbml"] Jan 12 13:19:11 crc kubenswrapper[4580]: I0112 13:19:11.444860 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-767fdc4f47-fckr8"] Jan 12 13:19:11 crc kubenswrapper[4580]: I0112 13:19:11.477373 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/87237fc1-15cd-4dd9-bcfe-5a334d366896-metrics-certs\") pod \"openstack-operator-controller-manager-6659c7dc85-4p8jr\" (UID: \"87237fc1-15cd-4dd9-bcfe-5a334d366896\") " pod="openstack-operators/openstack-operator-controller-manager-6659c7dc85-4p8jr" Jan 12 13:19:11 crc kubenswrapper[4580]: I0112 13:19:11.477435 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/87237fc1-15cd-4dd9-bcfe-5a334d366896-webhook-certs\") pod \"openstack-operator-controller-manager-6659c7dc85-4p8jr\" (UID: \"87237fc1-15cd-4dd9-bcfe-5a334d366896\") " pod="openstack-operators/openstack-operator-controller-manager-6659c7dc85-4p8jr" Jan 12 13:19:11 crc kubenswrapper[4580]: E0112 13:19:11.477523 4580 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 12 13:19:11 crc kubenswrapper[4580]: E0112 13:19:11.477560 4580 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret 
"webhook-server-cert" not found Jan 12 13:19:11 crc kubenswrapper[4580]: E0112 13:19:11.477583 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87237fc1-15cd-4dd9-bcfe-5a334d366896-metrics-certs podName:87237fc1-15cd-4dd9-bcfe-5a334d366896 nodeName:}" failed. No retries permitted until 2026-01-12 13:19:12.477567139 +0000 UTC m=+751.521785829 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/87237fc1-15cd-4dd9-bcfe-5a334d366896-metrics-certs") pod "openstack-operator-controller-manager-6659c7dc85-4p8jr" (UID: "87237fc1-15cd-4dd9-bcfe-5a334d366896") : secret "metrics-server-cert" not found Jan 12 13:19:11 crc kubenswrapper[4580]: E0112 13:19:11.477627 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87237fc1-15cd-4dd9-bcfe-5a334d366896-webhook-certs podName:87237fc1-15cd-4dd9-bcfe-5a334d366896 nodeName:}" failed. No retries permitted until 2026-01-12 13:19:12.477614478 +0000 UTC m=+751.521833168 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/87237fc1-15cd-4dd9-bcfe-5a334d366896-webhook-certs") pod "openstack-operator-controller-manager-6659c7dc85-4p8jr" (UID: "87237fc1-15cd-4dd9-bcfe-5a334d366896") : secret "webhook-server-cert" not found Jan 12 13:19:11 crc kubenswrapper[4580]: I0112 13:19:11.546997 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-6684f856f9-w2xhg"] Jan 12 13:19:11 crc kubenswrapper[4580]: I0112 13:19:11.551593 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5977959f9c-sgg8q"] Jan 12 13:19:11 crc kubenswrapper[4580]: W0112 13:19:11.553251 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf14de2d_3f35_4c32_905c_0a133a4fbafe.slice/crio-e259561d89feb07aba6624f074b5a5c1bcef70b238e2f7dfded89597c57177dd WatchSource:0}: Error finding container e259561d89feb07aba6624f074b5a5c1bcef70b238e2f7dfded89597c57177dd: Status 404 returned error can't find the container with id e259561d89feb07aba6624f074b5a5c1bcef70b238e2f7dfded89597c57177dd Jan 12 13:19:11 crc kubenswrapper[4580]: W0112 13:19:11.553473 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb01c7cd_f8d5_414f_a9f1_cf75a7a6ac1b.slice/crio-a0c33da39810d79191c3c0484d02be6463a58b5a7d97ee574b9a89e8084f2248 WatchSource:0}: Error finding container a0c33da39810d79191c3c0484d02be6463a58b5a7d97ee574b9a89e8084f2248: Status 404 returned error can't find the container with id a0c33da39810d79191c3c0484d02be6463a58b5a7d97ee574b9a89e8084f2248 Jan 12 13:19:11 crc kubenswrapper[4580]: I0112 13:19:11.566791 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-8fxxj"] Jan 12 13:19:11 crc 
kubenswrapper[4580]: I0112 13:19:11.662072 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-cb4666565-ckfhs"] Jan 12 13:19:11 crc kubenswrapper[4580]: I0112 13:19:11.679168 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-cf664874d-vznwd"] Jan 12 13:19:11 crc kubenswrapper[4580]: I0112 13:19:11.691040 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6469d85bcb-smn7v"] Jan 12 13:19:11 crc kubenswrapper[4580]: E0112 13:19:11.696684 4580 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:ab629ec4ce57b5cde9cd6d75069e68edca441b97b7b5a3f58804e2e61766b729,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ccnvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-7fc9b76cf6-nbzhm_openstack-operators(7209eb4d-53dc-4c30-9b80-8863acbea5a6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 12 13:19:11 crc kubenswrapper[4580]: I0112 13:19:11.697678 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-74bd5457c5-95bcj"] Jan 12 13:19:11 crc kubenswrapper[4580]: E0112 13:19:11.697765 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-nbzhm" podUID="7209eb4d-53dc-4c30-9b80-8863acbea5a6" Jan 12 13:19:11 crc kubenswrapper[4580]: W0112 13:19:11.707080 4580 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a4af572_980a_4c9b_8d01_df30e894dcda.slice/crio-428900a9d0edad546308684de52e9c84ccf127b3a451c5d4fdb164e6409027f4 WatchSource:0}: Error finding container 428900a9d0edad546308684de52e9c84ccf127b3a451c5d4fdb164e6409027f4: Status 404 returned error can't find the container with id 428900a9d0edad546308684de52e9c84ccf127b3a451c5d4fdb164e6409027f4 Jan 12 13:19:11 crc kubenswrapper[4580]: W0112 13:19:11.707465 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded127163_4a57_4b95_9dd7_4c856bd3d126.slice/crio-5556f4c98bdf6daeb2955aa0150da093918ef6dd79d05720f8b2ec4589ca3a22 WatchSource:0}: Error finding container 5556f4c98bdf6daeb2955aa0150da093918ef6dd79d05720f8b2ec4589ca3a22: Status 404 returned error can't find the container with id 5556f4c98bdf6daeb2955aa0150da093918ef6dd79d05720f8b2ec4589ca3a22 Jan 12 13:19:11 crc kubenswrapper[4580]: E0112 13:19:11.712114 4580 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:46fb7b4a0620de5ce9ebede828e56bae0fcbbcb74a6461be0610b23aed1d67ca,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2w4rc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-6469d85bcb-smn7v_openstack-operators(6a4af572-980a-4c9b-8d01-df30e894dcda): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 12 13:19:11 crc kubenswrapper[4580]: E0112 13:19:11.712123 4580 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:fd7ade59ba5aaf96c8679a70075dba1c9bf8d76b69e29284020f7d0b98191d9f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ll6rb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-74bd5457c5-95bcj_openstack-operators(ed127163-4a57-4b95-9dd7-4c856bd3d126): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 12 13:19:11 crc kubenswrapper[4580]: E0112 13:19:11.713559 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-6469d85bcb-smn7v" podUID="6a4af572-980a-4c9b-8d01-df30e894dcda" Jan 12 13:19:11 crc kubenswrapper[4580]: E0112 13:19:11.713568 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-74bd5457c5-95bcj" podUID="ed127163-4a57-4b95-9dd7-4c856bd3d126" Jan 12 13:19:11 crc kubenswrapper[4580]: I0112 13:19:11.714399 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-nbzhm"] Jan 12 13:19:11 crc kubenswrapper[4580]: E0112 13:19:11.714444 4580 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:0776154fcc999881e27158fea114e094a2ecf632c0beea0c80d1f09aab9fbb53,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-smqsv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-78c6bccb56-mggmh_openstack-operators(f50c1909-7ba3-4d92-9e4e-2cbd2602e340): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 12 13:19:11 crc kubenswrapper[4580]: E0112 13:19:11.715569 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-78c6bccb56-mggmh" podUID="f50c1909-7ba3-4d92-9e4e-2cbd2602e340" Jan 12 13:19:11 crc kubenswrapper[4580]: I0112 13:19:11.718937 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78c6bccb56-mggmh"] Jan 12 13:19:11 crc kubenswrapper[4580]: I0112 13:19:11.818584 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-64cd966744-2z5v7"] Jan 12 13:19:11 crc kubenswrapper[4580]: I0112 13:19:11.823979 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-698b874cb5-4v5jb"] Jan 12 13:19:11 crc kubenswrapper[4580]: W0112 13:19:11.825511 4580 manager.go:1169] 
Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56a7e345_fce1_44a5_aab4_8d82293bd5ee.slice/crio-c63cac1eae9050a4c1ab39ca51f26f43570d91778b3597b75a54000147e542a8 WatchSource:0}: Error finding container c63cac1eae9050a4c1ab39ca51f26f43570d91778b3597b75a54000147e542a8: Status 404 returned error can't find the container with id c63cac1eae9050a4c1ab39ca51f26f43570d91778b3597b75a54000147e542a8 Jan 12 13:19:11 crc kubenswrapper[4580]: W0112 13:19:11.829080 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00ccc719_ee01_4ff4_934b_6e6fbadaa57c.slice/crio-59e33fbc56d5f1b2538b097e8d3d709e5fb812183be9f8ca529f6de26b2f6a82 WatchSource:0}: Error finding container 59e33fbc56d5f1b2538b097e8d3d709e5fb812183be9f8ca529f6de26b2f6a82: Status 404 returned error can't find the container with id 59e33fbc56d5f1b2538b097e8d3d709e5fb812183be9f8ca529f6de26b2f6a82 Jan 12 13:19:11 crc kubenswrapper[4580]: E0112 13:19:11.833445 4580 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:2bd5475df9fd2078c60f254e531d4033db74e3c486c32cf9fdd02713e65f39b2,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m8mcq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-698b874cb5-4v5jb_openstack-operators(00ccc719-ee01-4ff4-934b-6e6fbadaa57c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 12 13:19:11 crc kubenswrapper[4580]: E0112 13:19:11.834643 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-698b874cb5-4v5jb" podUID="00ccc719-ee01-4ff4-934b-6e6fbadaa57c" Jan 12 13:19:11 crc 
kubenswrapper[4580]: I0112 13:19:11.882431 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1135f51b-1f4e-4866-bb7d-728be53f5be7-cert\") pod \"infra-operator-controller-manager-77c48c7859-2sg8z\" (UID: \"1135f51b-1f4e-4866-bb7d-728be53f5be7\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-2sg8z" Jan 12 13:19:11 crc kubenswrapper[4580]: E0112 13:19:11.882641 4580 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 12 13:19:11 crc kubenswrapper[4580]: E0112 13:19:11.882763 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1135f51b-1f4e-4866-bb7d-728be53f5be7-cert podName:1135f51b-1f4e-4866-bb7d-728be53f5be7 nodeName:}" failed. No retries permitted until 2026-01-12 13:19:13.882743629 +0000 UTC m=+752.926962319 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1135f51b-1f4e-4866-bb7d-728be53f5be7-cert") pod "infra-operator-controller-manager-77c48c7859-2sg8z" (UID: "1135f51b-1f4e-4866-bb7d-728be53f5be7") : secret "infra-operator-webhook-server-cert" not found Jan 12 13:19:11 crc kubenswrapper[4580]: I0112 13:19:11.888856 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-p4m8m"] Jan 12 13:19:11 crc kubenswrapper[4580]: W0112 13:19:11.889489 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod520c9385_c952_45a9_b1ce_2ad913758239.slice/crio-ad9e21376ebe1b9da7a014afb8d6ef26ce98f62a437f453711224d5d92261c3b WatchSource:0}: Error finding container ad9e21376ebe1b9da7a014afb8d6ef26ce98f62a437f453711224d5d92261c3b: Status 404 returned error can't find the container with id 
ad9e21376ebe1b9da7a014afb8d6ef26ce98f62a437f453711224d5d92261c3b Jan 12 13:19:11 crc kubenswrapper[4580]: E0112 13:19:11.891793 4580 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-89k4p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-p4m8m_openstack-operators(520c9385-c952-45a9-b1ce-2ad913758239): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 12 13:19:11 crc kubenswrapper[4580]: E0112 13:19:11.893840 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-p4m8m" podUID="520c9385-c952-45a9-b1ce-2ad913758239" Jan 12 13:19:12 crc kubenswrapper[4580]: I0112 13:19:12.086034 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ccb61890-3cf7-45aa-974c-693f0d14c14a-cert\") pod \"openstack-baremetal-operator-controller-manager-654686dcb9z5ths\" (UID: \"ccb61890-3cf7-45aa-974c-693f0d14c14a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-654686dcb9z5ths" Jan 12 13:19:12 crc kubenswrapper[4580]: E0112 13:19:12.086223 4580 secret.go:188] Couldn't get secret 
openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 12 13:19:12 crc kubenswrapper[4580]: E0112 13:19:12.086297 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ccb61890-3cf7-45aa-974c-693f0d14c14a-cert podName:ccb61890-3cf7-45aa-974c-693f0d14c14a nodeName:}" failed. No retries permitted until 2026-01-12 13:19:14.086279089 +0000 UTC m=+753.130497780 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ccb61890-3cf7-45aa-974c-693f0d14c14a-cert") pod "openstack-baremetal-operator-controller-manager-654686dcb9z5ths" (UID: "ccb61890-3cf7-45aa-974c-693f0d14c14a") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 12 13:19:12 crc kubenswrapper[4580]: I0112 13:19:12.311064 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78c6bccb56-mggmh" event={"ID":"f50c1909-7ba3-4d92-9e4e-2cbd2602e340","Type":"ContainerStarted","Data":"be34ea3e9d2e33093099c1435b996e9daa7e6883abfa9f51766a6c61513fb974"} Jan 12 13:19:12 crc kubenswrapper[4580]: I0112 13:19:12.312341 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-6cd7bcb4bf-nvbml" event={"ID":"8cf46bb8-ed1f-491d-90e3-1ef5ebbdfb01","Type":"ContainerStarted","Data":"13424fb9ffe948202e493ef39f365599d3e1fab5742f983b789827a563d72604"} Jan 12 13:19:12 crc kubenswrapper[4580]: E0112 13:19:12.313301 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:0776154fcc999881e27158fea114e094a2ecf632c0beea0c80d1f09aab9fbb53\\\"\"" pod="openstack-operators/placement-operator-controller-manager-78c6bccb56-mggmh" podUID="f50c1909-7ba3-4d92-9e4e-2cbd2602e340" Jan 12 
13:19:12 crc kubenswrapper[4580]: I0112 13:19:12.315568 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-75b858dccc-nr2g4" event={"ID":"b3716289-2aa2-4e39-b8db-7980564c976e","Type":"ContainerStarted","Data":"14dcaacbd2789649575ca5158b4176fcf5e6c74350681a068a72a63275e5841b"} Jan 12 13:19:12 crc kubenswrapper[4580]: I0112 13:19:12.318527 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-2z5v7" event={"ID":"56a7e345-fce1-44a5-aab4-8d82293bd5ee","Type":"ContainerStarted","Data":"c63cac1eae9050a4c1ab39ca51f26f43570d91778b3597b75a54000147e542a8"} Jan 12 13:19:12 crc kubenswrapper[4580]: I0112 13:19:12.319772 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-p4m8m" event={"ID":"520c9385-c952-45a9-b1ce-2ad913758239","Type":"ContainerStarted","Data":"ad9e21376ebe1b9da7a014afb8d6ef26ce98f62a437f453711224d5d92261c3b"} Jan 12 13:19:12 crc kubenswrapper[4580]: E0112 13:19:12.321051 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-p4m8m" podUID="520c9385-c952-45a9-b1ce-2ad913758239" Jan 12 13:19:12 crc kubenswrapper[4580]: I0112 13:19:12.321601 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-cf664874d-vznwd" event={"ID":"742c889f-d87d-4d61-82f8-2fa3ffc3d6b2","Type":"ContainerStarted","Data":"00bafbc38824a53c752525078ae769f31a7f83563c88189e6278be4607112dce"} Jan 12 13:19:12 crc kubenswrapper[4580]: I0112 13:19:12.326260 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/ironic-operator-controller-manager-78757b4889-plnl2" event={"ID":"0286f995-6c82-4417-8a67-91b5e261a211","Type":"ContainerStarted","Data":"127bef9da7c694acc4d3636a7786816114ae4c502dafd69ccd1d947c04b9af98"} Jan 12 13:19:12 crc kubenswrapper[4580]: I0112 13:19:12.330981 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-8fxxj" event={"ID":"87188751-ba97-4f25-ba2c-70514594cb4a","Type":"ContainerStarted","Data":"7df5eb906b3bcab9903cbd044ce01d84b01b7a7c4d52be7cd5322568793d6144"} Jan 12 13:19:12 crc kubenswrapper[4580]: I0112 13:19:12.332298 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-ckfhs" event={"ID":"2d0f98f6-67ec-4253-a344-8aa185679126","Type":"ContainerStarted","Data":"40f3afbc9c72b30fbdc45ca10b775b2f60e87d6a060013c6a0d61dc1892b86b0"} Jan 12 13:19:12 crc kubenswrapper[4580]: I0112 13:19:12.336284 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-74bd5457c5-95bcj" event={"ID":"ed127163-4a57-4b95-9dd7-4c856bd3d126","Type":"ContainerStarted","Data":"5556f4c98bdf6daeb2955aa0150da093918ef6dd79d05720f8b2ec4589ca3a22"} Jan 12 13:19:12 crc kubenswrapper[4580]: E0112 13:19:12.337528 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:fd7ade59ba5aaf96c8679a70075dba1c9bf8d76b69e29284020f7d0b98191d9f\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-74bd5457c5-95bcj" podUID="ed127163-4a57-4b95-9dd7-4c856bd3d126" Jan 12 13:19:12 crc kubenswrapper[4580]: I0112 13:19:12.342283 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-6684f856f9-w2xhg" 
event={"ID":"bf14de2d-3f35-4c32-905c-0a133a4fbafe","Type":"ContainerStarted","Data":"e259561d89feb07aba6624f074b5a5c1bcef70b238e2f7dfded89597c57177dd"} Jan 12 13:19:12 crc kubenswrapper[4580]: I0112 13:19:12.343555 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-698b874cb5-4v5jb" event={"ID":"00ccc719-ee01-4ff4-934b-6e6fbadaa57c","Type":"ContainerStarted","Data":"59e33fbc56d5f1b2538b097e8d3d709e5fb812183be9f8ca529f6de26b2f6a82"} Jan 12 13:19:12 crc kubenswrapper[4580]: E0112 13:19:12.345026 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:2bd5475df9fd2078c60f254e531d4033db74e3c486c32cf9fdd02713e65f39b2\\\"\"" pod="openstack-operators/test-operator-controller-manager-698b874cb5-4v5jb" podUID="00ccc719-ee01-4ff4-934b-6e6fbadaa57c" Jan 12 13:19:12 crc kubenswrapper[4580]: I0112 13:19:12.345133 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6469d85bcb-smn7v" event={"ID":"6a4af572-980a-4c9b-8d01-df30e894dcda","Type":"ContainerStarted","Data":"428900a9d0edad546308684de52e9c84ccf127b3a451c5d4fdb164e6409027f4"} Jan 12 13:19:12 crc kubenswrapper[4580]: I0112 13:19:12.346887 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-75cb9467dc-r22fp" event={"ID":"218c7ab4-85b0-4609-87e6-35d51283e5e0","Type":"ContainerStarted","Data":"360d832e69dbdcf5ad04f368e1b8c20dd9263bdc5e9ed736eed74fec4348a4a5"} Jan 12 13:19:12 crc kubenswrapper[4580]: E0112 13:19:12.347789 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/swift-operator@sha256:46fb7b4a0620de5ce9ebede828e56bae0fcbbcb74a6461be0610b23aed1d67ca\\\"\"" pod="openstack-operators/swift-operator-controller-manager-6469d85bcb-smn7v" podUID="6a4af572-980a-4c9b-8d01-df30e894dcda" Jan 12 13:19:12 crc kubenswrapper[4580]: I0112 13:19:12.349339 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-fckr8" event={"ID":"726a74db-a499-4c38-8258-b711bc0dc30b","Type":"ContainerStarted","Data":"8dcd4f0a8a7d21c7ab62b58276a7c74191b49409d79c65a440871c721d471898"} Jan 12 13:19:12 crc kubenswrapper[4580]: I0112 13:19:12.351884 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5977959f9c-sgg8q" event={"ID":"eb01c7cd-f8d5-414f-a9f1-cf75a7a6ac1b","Type":"ContainerStarted","Data":"a0c33da39810d79191c3c0484d02be6463a58b5a7d97ee574b9a89e8084f2248"} Jan 12 13:19:12 crc kubenswrapper[4580]: I0112 13:19:12.355372 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-nbzhm" event={"ID":"7209eb4d-53dc-4c30-9b80-8863acbea5a6","Type":"ContainerStarted","Data":"e1ca1eb26c14afc8c7780dd660e52d2551799b1dec307a43e4675027eb5e830d"} Jan 12 13:19:12 crc kubenswrapper[4580]: E0112 13:19:12.357054 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:ab629ec4ce57b5cde9cd6d75069e68edca441b97b7b5a3f58804e2e61766b729\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-nbzhm" podUID="7209eb4d-53dc-4c30-9b80-8863acbea5a6" Jan 12 13:19:12 crc kubenswrapper[4580]: I0112 13:19:12.492611 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/87237fc1-15cd-4dd9-bcfe-5a334d366896-metrics-certs\") pod \"openstack-operator-controller-manager-6659c7dc85-4p8jr\" (UID: \"87237fc1-15cd-4dd9-bcfe-5a334d366896\") " pod="openstack-operators/openstack-operator-controller-manager-6659c7dc85-4p8jr" Jan 12 13:19:12 crc kubenswrapper[4580]: I0112 13:19:12.492660 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/87237fc1-15cd-4dd9-bcfe-5a334d366896-webhook-certs\") pod \"openstack-operator-controller-manager-6659c7dc85-4p8jr\" (UID: \"87237fc1-15cd-4dd9-bcfe-5a334d366896\") " pod="openstack-operators/openstack-operator-controller-manager-6659c7dc85-4p8jr" Jan 12 13:19:12 crc kubenswrapper[4580]: E0112 13:19:12.492922 4580 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 12 13:19:12 crc kubenswrapper[4580]: E0112 13:19:12.492975 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87237fc1-15cd-4dd9-bcfe-5a334d366896-webhook-certs podName:87237fc1-15cd-4dd9-bcfe-5a334d366896 nodeName:}" failed. No retries permitted until 2026-01-12 13:19:14.492960869 +0000 UTC m=+753.537179558 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/87237fc1-15cd-4dd9-bcfe-5a334d366896-webhook-certs") pod "openstack-operator-controller-manager-6659c7dc85-4p8jr" (UID: "87237fc1-15cd-4dd9-bcfe-5a334d366896") : secret "webhook-server-cert" not found Jan 12 13:19:12 crc kubenswrapper[4580]: E0112 13:19:12.493391 4580 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 12 13:19:12 crc kubenswrapper[4580]: E0112 13:19:12.493428 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87237fc1-15cd-4dd9-bcfe-5a334d366896-metrics-certs podName:87237fc1-15cd-4dd9-bcfe-5a334d366896 nodeName:}" failed. No retries permitted until 2026-01-12 13:19:14.493418769 +0000 UTC m=+753.537637459 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/87237fc1-15cd-4dd9-bcfe-5a334d366896-metrics-certs") pod "openstack-operator-controller-manager-6659c7dc85-4p8jr" (UID: "87237fc1-15cd-4dd9-bcfe-5a334d366896") : secret "metrics-server-cert" not found Jan 12 13:19:13 crc kubenswrapper[4580]: E0112 13:19:13.363661 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:0776154fcc999881e27158fea114e094a2ecf632c0beea0c80d1f09aab9fbb53\\\"\"" pod="openstack-operators/placement-operator-controller-manager-78c6bccb56-mggmh" podUID="f50c1909-7ba3-4d92-9e4e-2cbd2602e340" Jan 12 13:19:13 crc kubenswrapper[4580]: E0112 13:19:13.364096 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:2bd5475df9fd2078c60f254e531d4033db74e3c486c32cf9fdd02713e65f39b2\\\"\"" 
pod="openstack-operators/test-operator-controller-manager-698b874cb5-4v5jb" podUID="00ccc719-ee01-4ff4-934b-6e6fbadaa57c" Jan 12 13:19:13 crc kubenswrapper[4580]: E0112 13:19:13.364222 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:ab629ec4ce57b5cde9cd6d75069e68edca441b97b7b5a3f58804e2e61766b729\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-nbzhm" podUID="7209eb4d-53dc-4c30-9b80-8863acbea5a6" Jan 12 13:19:13 crc kubenswrapper[4580]: E0112 13:19:13.364388 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:fd7ade59ba5aaf96c8679a70075dba1c9bf8d76b69e29284020f7d0b98191d9f\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-74bd5457c5-95bcj" podUID="ed127163-4a57-4b95-9dd7-4c856bd3d126" Jan 12 13:19:13 crc kubenswrapper[4580]: E0112 13:19:13.364827 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-p4m8m" podUID="520c9385-c952-45a9-b1ce-2ad913758239" Jan 12 13:19:13 crc kubenswrapper[4580]: E0112 13:19:13.364921 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:46fb7b4a0620de5ce9ebede828e56bae0fcbbcb74a6461be0610b23aed1d67ca\\\"\"" pod="openstack-operators/swift-operator-controller-manager-6469d85bcb-smn7v" 
podUID="6a4af572-980a-4c9b-8d01-df30e894dcda" Jan 12 13:19:13 crc kubenswrapper[4580]: I0112 13:19:13.923019 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1135f51b-1f4e-4866-bb7d-728be53f5be7-cert\") pod \"infra-operator-controller-manager-77c48c7859-2sg8z\" (UID: \"1135f51b-1f4e-4866-bb7d-728be53f5be7\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-2sg8z" Jan 12 13:19:13 crc kubenswrapper[4580]: E0112 13:19:13.923305 4580 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 12 13:19:13 crc kubenswrapper[4580]: E0112 13:19:13.923476 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1135f51b-1f4e-4866-bb7d-728be53f5be7-cert podName:1135f51b-1f4e-4866-bb7d-728be53f5be7 nodeName:}" failed. No retries permitted until 2026-01-12 13:19:17.923441128 +0000 UTC m=+756.967659818 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1135f51b-1f4e-4866-bb7d-728be53f5be7-cert") pod "infra-operator-controller-manager-77c48c7859-2sg8z" (UID: "1135f51b-1f4e-4866-bb7d-728be53f5be7") : secret "infra-operator-webhook-server-cert" not found Jan 12 13:19:14 crc kubenswrapper[4580]: I0112 13:19:14.126481 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ccb61890-3cf7-45aa-974c-693f0d14c14a-cert\") pod \"openstack-baremetal-operator-controller-manager-654686dcb9z5ths\" (UID: \"ccb61890-3cf7-45aa-974c-693f0d14c14a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-654686dcb9z5ths" Jan 12 13:19:14 crc kubenswrapper[4580]: E0112 13:19:14.126699 4580 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 12 13:19:14 crc kubenswrapper[4580]: E0112 13:19:14.126774 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ccb61890-3cf7-45aa-974c-693f0d14c14a-cert podName:ccb61890-3cf7-45aa-974c-693f0d14c14a nodeName:}" failed. No retries permitted until 2026-01-12 13:19:18.126757587 +0000 UTC m=+757.170976277 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ccb61890-3cf7-45aa-974c-693f0d14c14a-cert") pod "openstack-baremetal-operator-controller-manager-654686dcb9z5ths" (UID: "ccb61890-3cf7-45aa-974c-693f0d14c14a") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 12 13:19:14 crc kubenswrapper[4580]: I0112 13:19:14.529808 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/87237fc1-15cd-4dd9-bcfe-5a334d366896-webhook-certs\") pod \"openstack-operator-controller-manager-6659c7dc85-4p8jr\" (UID: \"87237fc1-15cd-4dd9-bcfe-5a334d366896\") " pod="openstack-operators/openstack-operator-controller-manager-6659c7dc85-4p8jr"
Jan 12 13:19:14 crc kubenswrapper[4580]: I0112 13:19:14.529847 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/87237fc1-15cd-4dd9-bcfe-5a334d366896-metrics-certs\") pod \"openstack-operator-controller-manager-6659c7dc85-4p8jr\" (UID: \"87237fc1-15cd-4dd9-bcfe-5a334d366896\") " pod="openstack-operators/openstack-operator-controller-manager-6659c7dc85-4p8jr"
Jan 12 13:19:14 crc kubenswrapper[4580]: E0112 13:19:14.529972 4580 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Jan 12 13:19:14 crc kubenswrapper[4580]: E0112 13:19:14.530017 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87237fc1-15cd-4dd9-bcfe-5a334d366896-metrics-certs podName:87237fc1-15cd-4dd9-bcfe-5a334d366896 nodeName:}" failed. No retries permitted until 2026-01-12 13:19:18.530002027 +0000 UTC m=+757.574220717 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/87237fc1-15cd-4dd9-bcfe-5a334d366896-metrics-certs") pod "openstack-operator-controller-manager-6659c7dc85-4p8jr" (UID: "87237fc1-15cd-4dd9-bcfe-5a334d366896") : secret "metrics-server-cert" not found
Jan 12 13:19:14 crc kubenswrapper[4580]: E0112 13:19:14.530247 4580 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Jan 12 13:19:14 crc kubenswrapper[4580]: E0112 13:19:14.530311 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87237fc1-15cd-4dd9-bcfe-5a334d366896-webhook-certs podName:87237fc1-15cd-4dd9-bcfe-5a334d366896 nodeName:}" failed. No retries permitted until 2026-01-12 13:19:18.530300498 +0000 UTC m=+757.574519188 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/87237fc1-15cd-4dd9-bcfe-5a334d366896-webhook-certs") pod "openstack-operator-controller-manager-6659c7dc85-4p8jr" (UID: "87237fc1-15cd-4dd9-bcfe-5a334d366896") : secret "webhook-server-cert" not found
Jan 12 13:19:17 crc kubenswrapper[4580]: I0112 13:19:17.977840 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1135f51b-1f4e-4866-bb7d-728be53f5be7-cert\") pod \"infra-operator-controller-manager-77c48c7859-2sg8z\" (UID: \"1135f51b-1f4e-4866-bb7d-728be53f5be7\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-2sg8z"
Jan 12 13:19:17 crc kubenswrapper[4580]: E0112 13:19:17.978152 4580 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 12 13:19:17 crc kubenswrapper[4580]: E0112 13:19:17.978281 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1135f51b-1f4e-4866-bb7d-728be53f5be7-cert podName:1135f51b-1f4e-4866-bb7d-728be53f5be7 nodeName:}" failed. No retries permitted until 2026-01-12 13:19:25.978252107 +0000 UTC m=+765.022470797 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1135f51b-1f4e-4866-bb7d-728be53f5be7-cert") pod "infra-operator-controller-manager-77c48c7859-2sg8z" (UID: "1135f51b-1f4e-4866-bb7d-728be53f5be7") : secret "infra-operator-webhook-server-cert" not found
Jan 12 13:19:18 crc kubenswrapper[4580]: I0112 13:19:18.180424 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ccb61890-3cf7-45aa-974c-693f0d14c14a-cert\") pod \"openstack-baremetal-operator-controller-manager-654686dcb9z5ths\" (UID: \"ccb61890-3cf7-45aa-974c-693f0d14c14a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-654686dcb9z5ths"
Jan 12 13:19:18 crc kubenswrapper[4580]: E0112 13:19:18.180590 4580 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 12 13:19:18 crc kubenswrapper[4580]: E0112 13:19:18.180658 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ccb61890-3cf7-45aa-974c-693f0d14c14a-cert podName:ccb61890-3cf7-45aa-974c-693f0d14c14a nodeName:}" failed. No retries permitted until 2026-01-12 13:19:26.180640211 +0000 UTC m=+765.224858900 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ccb61890-3cf7-45aa-974c-693f0d14c14a-cert") pod "openstack-baremetal-operator-controller-manager-654686dcb9z5ths" (UID: "ccb61890-3cf7-45aa-974c-693f0d14c14a") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 12 13:19:18 crc kubenswrapper[4580]: I0112 13:19:18.585718 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/87237fc1-15cd-4dd9-bcfe-5a334d366896-webhook-certs\") pod \"openstack-operator-controller-manager-6659c7dc85-4p8jr\" (UID: \"87237fc1-15cd-4dd9-bcfe-5a334d366896\") " pod="openstack-operators/openstack-operator-controller-manager-6659c7dc85-4p8jr"
Jan 12 13:19:18 crc kubenswrapper[4580]: I0112 13:19:18.585770 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/87237fc1-15cd-4dd9-bcfe-5a334d366896-metrics-certs\") pod \"openstack-operator-controller-manager-6659c7dc85-4p8jr\" (UID: \"87237fc1-15cd-4dd9-bcfe-5a334d366896\") " pod="openstack-operators/openstack-operator-controller-manager-6659c7dc85-4p8jr"
Jan 12 13:19:18 crc kubenswrapper[4580]: E0112 13:19:18.585943 4580 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Jan 12 13:19:18 crc kubenswrapper[4580]: E0112 13:19:18.586000 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87237fc1-15cd-4dd9-bcfe-5a334d366896-metrics-certs podName:87237fc1-15cd-4dd9-bcfe-5a334d366896 nodeName:}" failed. No retries permitted until 2026-01-12 13:19:26.585985539 +0000 UTC m=+765.630204229 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/87237fc1-15cd-4dd9-bcfe-5a334d366896-metrics-certs") pod "openstack-operator-controller-manager-6659c7dc85-4p8jr" (UID: "87237fc1-15cd-4dd9-bcfe-5a334d366896") : secret "metrics-server-cert" not found
Jan 12 13:19:18 crc kubenswrapper[4580]: E0112 13:19:18.586240 4580 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Jan 12 13:19:18 crc kubenswrapper[4580]: E0112 13:19:18.586411 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87237fc1-15cd-4dd9-bcfe-5a334d366896-webhook-certs podName:87237fc1-15cd-4dd9-bcfe-5a334d366896 nodeName:}" failed. No retries permitted until 2026-01-12 13:19:26.586361986 +0000 UTC m=+765.630580676 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/87237fc1-15cd-4dd9-bcfe-5a334d366896-webhook-certs") pod "openstack-operator-controller-manager-6659c7dc85-4p8jr" (UID: "87237fc1-15cd-4dd9-bcfe-5a334d366896") : secret "webhook-server-cert" not found
Jan 12 13:19:20 crc kubenswrapper[4580]: I0112 13:19:20.408740 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-4b7c9" event={"ID":"63a3c1f8-84b5-4648-9a74-bc1e980d5a57","Type":"ContainerStarted","Data":"8c7d11f9dfc77aa729bfb0d9ff0a859cc1feebaaf49b3d40d784c6235304f2f7"}
Jan 12 13:19:20 crc kubenswrapper[4580]: I0112 13:19:20.409437 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-4b7c9"
Jan 12 13:19:20 crc kubenswrapper[4580]: I0112 13:19:20.413630 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-6c697f55f8-69mz9"
event={"ID":"7ed21cbb-5825-4538-bfb6-74f895189d83","Type":"ContainerStarted","Data":"5e95888b473f4d2c82be333e1cc332050766b4ccb7616978a954364010caa0a7"}
Jan 12 13:19:20 crc kubenswrapper[4580]: I0112 13:19:20.414590 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-6c697f55f8-69mz9"
Jan 12 13:19:20 crc kubenswrapper[4580]: I0112 13:19:20.417022 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-75cb9467dc-r22fp" event={"ID":"218c7ab4-85b0-4609-87e6-35d51283e5e0","Type":"ContainerStarted","Data":"8b9d833891bea9a3585f9e4b18a94af4cf4b79d43ba8754d487d3daa45d6f8b1"}
Jan 12 13:19:20 crc kubenswrapper[4580]: I0112 13:19:20.418144 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-75cb9467dc-r22fp"
Jan 12 13:19:20 crc kubenswrapper[4580]: I0112 13:19:20.424362 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-4b7c9" podStartSLOduration=1.408911052 podStartE2EDuration="10.424350818s" podCreationTimestamp="2026-01-12 13:19:10 +0000 UTC" firstStartedPulling="2026-01-12 13:19:11.040515989 +0000 UTC m=+750.084734680" lastFinishedPulling="2026-01-12 13:19:20.055955756 +0000 UTC m=+759.100174446" observedRunningTime="2026-01-12 13:19:20.423063159 +0000 UTC m=+759.467281850" watchObservedRunningTime="2026-01-12 13:19:20.424350818 +0000 UTC m=+759.468569508"
Jan 12 13:19:20 crc kubenswrapper[4580]: I0112 13:19:20.428438 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-75b858dccc-nr2g4"
Jan 12 13:19:20 crc kubenswrapper[4580]: I0112 13:19:20.444222 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-6c697f55f8-69mz9" podStartSLOduration=1.7735563490000001 podStartE2EDuration="10.444212496s" podCreationTimestamp="2026-01-12 13:19:10 +0000 UTC" firstStartedPulling="2026-01-12 13:19:10.979878899 +0000 UTC m=+750.024097589" lastFinishedPulling="2026-01-12 13:19:19.650535045 +0000 UTC m=+758.694753736" observedRunningTime="2026-01-12 13:19:20.442194955 +0000 UTC m=+759.486413645" watchObservedRunningTime="2026-01-12 13:19:20.444212496 +0000 UTC m=+759.488431187"
Jan 12 13:19:20 crc kubenswrapper[4580]: I0112 13:19:20.458411 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-75cb9467dc-r22fp" podStartSLOduration=1.6851870839999998 podStartE2EDuration="10.458400825s" podCreationTimestamp="2026-01-12 13:19:10 +0000 UTC" firstStartedPulling="2026-01-12 13:19:11.312152123 +0000 UTC m=+750.356370813" lastFinishedPulling="2026-01-12 13:19:20.085365864 +0000 UTC m=+759.129584554" observedRunningTime="2026-01-12 13:19:20.454471952 +0000 UTC m=+759.498691024" watchObservedRunningTime="2026-01-12 13:19:20.458400825 +0000 UTC m=+759.502619514"
Jan 12 13:19:20 crc kubenswrapper[4580]: I0112 13:19:20.472087 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-75b858dccc-nr2g4" podStartSLOduration=1.681200721 podStartE2EDuration="10.472060398s" podCreationTimestamp="2026-01-12 13:19:10 +0000 UTC" firstStartedPulling="2026-01-12 13:19:11.300460055 +0000 UTC m=+750.344678744" lastFinishedPulling="2026-01-12 13:19:20.091319731 +0000 UTC m=+759.135538421" observedRunningTime="2026-01-12 13:19:20.468271039 +0000 UTC m=+759.512489729" watchObservedRunningTime="2026-01-12 13:19:20.472060398 +0000 UTC m=+759.516279098"
Jan 12 13:19:21 crc kubenswrapper[4580]: I0112 13:19:21.438614 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-fckr8" event={"ID":"726a74db-a499-4c38-8258-b711bc0dc30b","Type":"ContainerStarted","Data":"c4ece1349025700d56a7b57fab211e62a58d3f778a18e5b60b5aaab65c62f291"}
Jan 12 13:19:21 crc kubenswrapper[4580]: I0112 13:19:21.438809 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-fckr8"
Jan 12 13:19:21 crc kubenswrapper[4580]: I0112 13:19:21.440467 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5977959f9c-sgg8q" event={"ID":"eb01c7cd-f8d5-414f-a9f1-cf75a7a6ac1b","Type":"ContainerStarted","Data":"f1c0bc01c893c1d7eab636f006ea7c72a8ce29b101cf6fb86343ed893700e615"}
Jan 12 13:19:21 crc kubenswrapper[4580]: I0112 13:19:21.440604 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-5977959f9c-sgg8q"
Jan 12 13:19:21 crc kubenswrapper[4580]: I0112 13:19:21.442305 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-ckfhs" event={"ID":"2d0f98f6-67ec-4253-a344-8aa185679126","Type":"ContainerStarted","Data":"b2f24eb0baf4b2cb4a2088cc8777374a3f48192282b6f11af3b868c7c9c33b08"}
Jan 12 13:19:21 crc kubenswrapper[4580]: I0112 13:19:21.442391 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-ckfhs"
Jan 12 13:19:21 crc kubenswrapper[4580]: I0112 13:19:21.443785 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-2z5v7" event={"ID":"56a7e345-fce1-44a5-aab4-8d82293bd5ee","Type":"ContainerStarted","Data":"ed88223dcb35578ed9764f51d1463cdfd5788e7f613af93a5a3f55c8765b3b87"}
Jan 12 13:19:21 crc kubenswrapper[4580]: I0112 13:19:21.443891 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-2z5v7"
Jan 12 13:19:21 crc kubenswrapper[4580]: I0112 13:19:21.445401 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-cf664874d-vznwd" event={"ID":"742c889f-d87d-4d61-82f8-2fa3ffc3d6b2","Type":"ContainerStarted","Data":"91e39f714e8efbe17cf8e26f4d0c72acb0e4ce0eb3c8be9577bac40aa064fbfe"}
Jan 12 13:19:21 crc kubenswrapper[4580]: I0112 13:19:21.445522 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-cf664874d-vznwd"
Jan 12 13:19:21 crc kubenswrapper[4580]: I0112 13:19:21.446994 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-9f958b845-plhvp" event={"ID":"cbfff7ce-c184-4dee-94d5-c6ee41fc2b75","Type":"ContainerStarted","Data":"6b146372e4bf4ec537ca7c65e06711930fa533949d7194700964eb08dfcbc40a"}
Jan 12 13:19:21 crc kubenswrapper[4580]: I0112 13:19:21.447149 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-9f958b845-plhvp"
Jan 12 13:19:21 crc kubenswrapper[4580]: I0112 13:19:21.448210 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-75b858dccc-nr2g4" event={"ID":"b3716289-2aa2-4e39-b8db-7980564c976e","Type":"ContainerStarted","Data":"7bf3c4cba996ed910765f0bac1e6d7ddeda1e41841ef62913e650368722b687a"}
Jan 12 13:19:21 crc kubenswrapper[4580]: I0112 13:19:21.449268 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-8fxxj" event={"ID":"87188751-ba97-4f25-ba2c-70514594cb4a","Type":"ContainerStarted","Data":"1c1b12c9e3f019c474c38d1334c2b18b5152c8d9ef7b5d7db5a75f1be8d0c0a9"}
Jan 12 13:19:21 crc kubenswrapper[4580]: I0112 13:19:21.449394 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-8fxxj"
Jan 12 13:19:21 crc kubenswrapper[4580]: I0112 13:19:21.450320 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-6cd7bcb4bf-nvbml" event={"ID":"8cf46bb8-ed1f-491d-90e3-1ef5ebbdfb01","Type":"ContainerStarted","Data":"83b0cbe30a38e93853b0161875d35b8e8ca06d7664359d59b48610bc0fb6ba9a"}
Jan 12 13:19:21 crc kubenswrapper[4580]: I0112 13:19:21.450388 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-6cd7bcb4bf-nvbml"
Jan 12 13:19:21 crc kubenswrapper[4580]: I0112 13:19:21.451355 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-6684f856f9-w2xhg" event={"ID":"bf14de2d-3f35-4c32-905c-0a133a4fbafe","Type":"ContainerStarted","Data":"f6c0243f4f047e58789f729faa8217af2646717cd50a25bf897fe35f26c3bb40"}
Jan 12 13:19:21 crc kubenswrapper[4580]: I0112 13:19:21.451394 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-6684f856f9-w2xhg"
Jan 12 13:19:21 crc kubenswrapper[4580]: I0112 13:19:21.452552 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-plnl2" event={"ID":"0286f995-6c82-4417-8a67-91b5e261a211","Type":"ContainerStarted","Data":"f750b7140bd6f60d6256085f6b66958efdf33c9cd64d7e7996734156225772e2"}
Jan 12 13:19:21 crc kubenswrapper[4580]: I0112 13:19:21.461472 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-fckr8" podStartSLOduration=2.767607038 podStartE2EDuration="11.461460861s" podCreationTimestamp="2026-01-12 13:19:10 +0000 UTC" firstStartedPulling="2026-01-12 13:19:11.443321455 +0000 UTC m=+750.487540145" lastFinishedPulling="2026-01-12 13:19:20.137175279 +0000 UTC m=+759.181393968" observedRunningTime="2026-01-12 13:19:21.459927609 +0000 UTC m=+760.504146299" watchObservedRunningTime="2026-01-12 13:19:21.461460861 +0000 UTC m=+760.505679550"
Jan 12 13:19:21 crc kubenswrapper[4580]: I0112 13:19:21.477646 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-6684f856f9-w2xhg" podStartSLOduration=2.9488772130000003 podStartE2EDuration="11.477636201s" podCreationTimestamp="2026-01-12 13:19:10 +0000 UTC" firstStartedPulling="2026-01-12 13:19:11.556596366 +0000 UTC m=+750.600815056" lastFinishedPulling="2026-01-12 13:19:20.085355354 +0000 UTC m=+759.129574044" observedRunningTime="2026-01-12 13:19:21.475963078 +0000 UTC m=+760.520181769" watchObservedRunningTime="2026-01-12 13:19:21.477636201 +0000 UTC m=+760.521854891"
Jan 12 13:19:21 crc kubenswrapper[4580]: I0112 13:19:21.496983 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-5977959f9c-sgg8q" podStartSLOduration=2.925201118 podStartE2EDuration="11.496969466s" podCreationTimestamp="2026-01-12 13:19:10 +0000 UTC" firstStartedPulling="2026-01-12 13:19:11.556599963 +0000 UTC m=+750.600818653" lastFinishedPulling="2026-01-12 13:19:20.12836831 +0000 UTC m=+759.172587001" observedRunningTime="2026-01-12 13:19:21.49590697 +0000 UTC m=+760.540125660" watchObservedRunningTime="2026-01-12 13:19:21.496969466 +0000 UTC m=+760.541188157"
Jan 12 13:19:21 crc kubenswrapper[4580]: I0112 13:19:21.519022 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-2z5v7" podStartSLOduration=3.258277653 podStartE2EDuration="11.519014006s" podCreationTimestamp="2026-01-12 13:19:10 +0000 UTC" firstStartedPulling="2026-01-12 13:19:11.82914342 +0000 UTC m=+750.873362111" lastFinishedPulling="2026-01-12 13:19:20.089879775 +0000 UTC m=+759.134098464" observedRunningTime="2026-01-12 13:19:21.50671659 +0000 UTC m=+760.550935280" watchObservedRunningTime="2026-01-12 13:19:21.519014006 +0000 UTC m=+760.563232686"
Jan 12 13:19:21 crc kubenswrapper[4580]: I0112 13:19:21.532508 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-8fxxj" podStartSLOduration=3.026615776 podStartE2EDuration="11.532481981s" podCreationTimestamp="2026-01-12 13:19:10 +0000 UTC" firstStartedPulling="2026-01-12 13:19:11.575342117 +0000 UTC m=+750.619560807" lastFinishedPulling="2026-01-12 13:19:20.081208322 +0000 UTC m=+759.125427012" observedRunningTime="2026-01-12 13:19:21.530616064 +0000 UTC m=+760.574834755" watchObservedRunningTime="2026-01-12 13:19:21.532481981 +0000 UTC m=+760.576700670"
Jan 12 13:19:21 crc kubenswrapper[4580]: I0112 13:19:21.536020 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-ckfhs" podStartSLOduration=3.141228258 podStartE2EDuration="11.53600527s" podCreationTimestamp="2026-01-12 13:19:10 +0000 UTC" firstStartedPulling="2026-01-12 13:19:11.682682488 +0000 UTC m=+750.726901177" lastFinishedPulling="2026-01-12 13:19:20.077459498 +0000 UTC m=+759.121678189" observedRunningTime="2026-01-12 13:19:21.517012725 +0000 UTC m=+760.561231416" watchObservedRunningTime="2026-01-12 13:19:21.53600527 +0000 UTC m=+760.580223950"
Jan 12 13:19:21 crc kubenswrapper[4580]: I0112 13:19:21.547207 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-6cd7bcb4bf-nvbml" podStartSLOduration=2.913106124 podStartE2EDuration="11.54718169s" podCreationTimestamp="2026-01-12 13:19:10 +0000 UTC" firstStartedPulling="2026-01-12 13:19:11.444275989 +0000 UTC m=+750.488494679" lastFinishedPulling="2026-01-12 13:19:20.078351556 +0000 UTC m=+759.122570245" observedRunningTime="2026-01-12 13:19:21.54487179 +0000 UTC m=+760.589090480" watchObservedRunningTime="2026-01-12 13:19:21.54718169 +0000 UTC m=+760.591400370"
Jan 12 13:19:21 crc kubenswrapper[4580]: I0112 13:19:21.561503 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-9f958b845-plhvp" podStartSLOduration=2.58379277 podStartE2EDuration="11.561472841s" podCreationTimestamp="2026-01-12 13:19:10 +0000 UTC" firstStartedPulling="2026-01-12 13:19:11.107773519 +0000 UTC m=+750.151992209" lastFinishedPulling="2026-01-12 13:19:20.085453589 +0000 UTC m=+759.129672280" observedRunningTime="2026-01-12 13:19:21.558034401 +0000 UTC m=+760.602253091" watchObservedRunningTime="2026-01-12 13:19:21.561472841 +0000 UTC m=+760.605691531"
Jan 12 13:19:21 crc kubenswrapper[4580]: I0112 13:19:21.590269 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-plnl2" podStartSLOduration=2.938547285 podStartE2EDuration="11.59023919s" podCreationTimestamp="2026-01-12 13:19:10 +0000 UTC" firstStartedPulling="2026-01-12 13:19:11.425780628 +0000 UTC m=+750.469999318" lastFinishedPulling="2026-01-12 13:19:20.077472533 +0000 UTC m=+759.121691223" observedRunningTime="2026-01-12 13:19:21.570953815 +0000 UTC m=+760.615172505" watchObservedRunningTime="2026-01-12 13:19:21.59023919 +0000 UTC m=+760.634457869"
Jan 12 13:19:21 crc kubenswrapper[4580]: I0112 13:19:21.615573 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-cf664874d-vznwd" podStartSLOduration=3.210762847 podStartE2EDuration="11.615553142s" podCreationTimestamp="2026-01-12 13:19:10 +0000 UTC" firstStartedPulling="2026-01-12 13:19:11.689096028 +0000 UTC m=+750.733314718" lastFinishedPulling="2026-01-12 13:19:20.093886322 +0000 UTC m=+759.138105013" observedRunningTime="2026-01-12 13:19:21.606414511 +0000 UTC m=+760.650633201" watchObservedRunningTime="2026-01-12 13:19:21.615553142 +0000 UTC m=+760.659771832"
Jan 12 13:19:22 crc kubenswrapper[4580]: I0112 13:19:22.461618 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-plnl2"
Jan 12 13:19:26 crc kubenswrapper[4580]: I0112 13:19:26.008284 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1135f51b-1f4e-4866-bb7d-728be53f5be7-cert\") pod \"infra-operator-controller-manager-77c48c7859-2sg8z\" (UID: \"1135f51b-1f4e-4866-bb7d-728be53f5be7\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-2sg8z"
Jan 12 13:19:26 crc kubenswrapper[4580]: E0112 13:19:26.008462 4580 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 12 13:19:26 crc kubenswrapper[4580]: E0112 13:19:26.008846 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1135f51b-1f4e-4866-bb7d-728be53f5be7-cert podName:1135f51b-1f4e-4866-bb7d-728be53f5be7 nodeName:}" failed. No retries permitted until 2026-01-12 13:19:42.008830139 +0000 UTC m=+781.053048830 (durationBeforeRetry 16s).
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1135f51b-1f4e-4866-bb7d-728be53f5be7-cert") pod "infra-operator-controller-manager-77c48c7859-2sg8z" (UID: "1135f51b-1f4e-4866-bb7d-728be53f5be7") : secret "infra-operator-webhook-server-cert" not found
Jan 12 13:19:26 crc kubenswrapper[4580]: I0112 13:19:26.211388 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ccb61890-3cf7-45aa-974c-693f0d14c14a-cert\") pod \"openstack-baremetal-operator-controller-manager-654686dcb9z5ths\" (UID: \"ccb61890-3cf7-45aa-974c-693f0d14c14a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-654686dcb9z5ths"
Jan 12 13:19:26 crc kubenswrapper[4580]: E0112 13:19:26.211556 4580 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 12 13:19:26 crc kubenswrapper[4580]: E0112 13:19:26.211622 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ccb61890-3cf7-45aa-974c-693f0d14c14a-cert podName:ccb61890-3cf7-45aa-974c-693f0d14c14a nodeName:}" failed. No retries permitted until 2026-01-12 13:19:42.211607996 +0000 UTC m=+781.255826686 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ccb61890-3cf7-45aa-974c-693f0d14c14a-cert") pod "openstack-baremetal-operator-controller-manager-654686dcb9z5ths" (UID: "ccb61890-3cf7-45aa-974c-693f0d14c14a") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 12 13:19:26 crc kubenswrapper[4580]: I0112 13:19:26.618232 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/87237fc1-15cd-4dd9-bcfe-5a334d366896-metrics-certs\") pod \"openstack-operator-controller-manager-6659c7dc85-4p8jr\" (UID: \"87237fc1-15cd-4dd9-bcfe-5a334d366896\") " pod="openstack-operators/openstack-operator-controller-manager-6659c7dc85-4p8jr"
Jan 12 13:19:26 crc kubenswrapper[4580]: I0112 13:19:26.618267 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/87237fc1-15cd-4dd9-bcfe-5a334d366896-webhook-certs\") pod \"openstack-operator-controller-manager-6659c7dc85-4p8jr\" (UID: \"87237fc1-15cd-4dd9-bcfe-5a334d366896\") " pod="openstack-operators/openstack-operator-controller-manager-6659c7dc85-4p8jr"
Jan 12 13:19:26 crc kubenswrapper[4580]: E0112 13:19:26.618803 4580 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Jan 12 13:19:26 crc kubenswrapper[4580]: E0112 13:19:26.618978 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87237fc1-15cd-4dd9-bcfe-5a334d366896-metrics-certs podName:87237fc1-15cd-4dd9-bcfe-5a334d366896 nodeName:}" failed. No retries permitted until 2026-01-12 13:19:42.618962218 +0000 UTC m=+781.663180907 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/87237fc1-15cd-4dd9-bcfe-5a334d366896-metrics-certs") pod "openstack-operator-controller-manager-6659c7dc85-4p8jr" (UID: "87237fc1-15cd-4dd9-bcfe-5a334d366896") : secret "metrics-server-cert" not found
Jan 12 13:19:26 crc kubenswrapper[4580]: I0112 13:19:26.623617 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/87237fc1-15cd-4dd9-bcfe-5a334d366896-webhook-certs\") pod \"openstack-operator-controller-manager-6659c7dc85-4p8jr\" (UID: \"87237fc1-15cd-4dd9-bcfe-5a334d366896\") " pod="openstack-operators/openstack-operator-controller-manager-6659c7dc85-4p8jr"
Jan 12 13:19:29 crc kubenswrapper[4580]: I0112 13:19:29.517869 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-p4m8m" event={"ID":"520c9385-c952-45a9-b1ce-2ad913758239","Type":"ContainerStarted","Data":"e24558db71abea552e7af3fad871377fbf0688400de8ce0346ee5a6267083658"}
Jan 12 13:19:29 crc kubenswrapper[4580]: I0112 13:19:29.522526 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-698b874cb5-4v5jb" event={"ID":"00ccc719-ee01-4ff4-934b-6e6fbadaa57c","Type":"ContainerStarted","Data":"a94b847f5a8487b76edb7744141055366bafd5bd8e0c0a6006095491221a287a"}
Jan 12 13:19:29 crc kubenswrapper[4580]: I0112 13:19:29.522864 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-698b874cb5-4v5jb"
Jan 12 13:19:29 crc kubenswrapper[4580]: I0112 13:19:29.525650 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78c6bccb56-mggmh" event={"ID":"f50c1909-7ba3-4d92-9e4e-2cbd2602e340","Type":"ContainerStarted","Data":"d8584f14616c67195140dc65f603e4e76e517d342d681bc5c9f47c36740d39cc"}
Jan 12 13:19:29 crc kubenswrapper[4580]: I0112 13:19:29.526870 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-78c6bccb56-mggmh"
Jan 12 13:19:29 crc kubenswrapper[4580]: I0112 13:19:29.529121 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-74bd5457c5-95bcj" event={"ID":"ed127163-4a57-4b95-9dd7-4c856bd3d126","Type":"ContainerStarted","Data":"b87cdf29bf96191dbe660c054a9edc25a74ed5c24b89f54f88edef32fa014239"}
Jan 12 13:19:29 crc kubenswrapper[4580]: I0112 13:19:29.529723 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-74bd5457c5-95bcj"
Jan 12 13:19:29 crc kubenswrapper[4580]: I0112 13:19:29.531333 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-nbzhm" event={"ID":"7209eb4d-53dc-4c30-9b80-8863acbea5a6","Type":"ContainerStarted","Data":"58f291c7e3fb61eb72de8be9f61571e17168bdf458929f5bbbd08f633a7188e5"}
Jan 12 13:19:29 crc kubenswrapper[4580]: I0112 13:19:29.531694 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-nbzhm"
Jan 12 13:19:29 crc kubenswrapper[4580]: I0112 13:19:29.548610 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-p4m8m" podStartSLOduration=2.244608162 podStartE2EDuration="19.548580618s" podCreationTimestamp="2026-01-12 13:19:10 +0000 UTC" firstStartedPulling="2026-01-12 13:19:11.891630987 +0000 UTC m=+750.935849678" lastFinishedPulling="2026-01-12 13:19:29.195603444 +0000 UTC m=+768.239822134" observedRunningTime="2026-01-12 13:19:29.542484734 +0000 UTC m=+768.586703445" watchObservedRunningTime="2026-01-12 13:19:29.548580618 +0000 UTC m=+768.592799308"
Jan 12 13:19:29 crc kubenswrapper[4580]: I0112 13:19:29.567151 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-698b874cb5-4v5jb" podStartSLOduration=2.229754293 podStartE2EDuration="19.567133256s" podCreationTimestamp="2026-01-12 13:19:10 +0000 UTC" firstStartedPulling="2026-01-12 13:19:11.833251991 +0000 UTC m=+750.877470680" lastFinishedPulling="2026-01-12 13:19:29.170630954 +0000 UTC m=+768.214849643" observedRunningTime="2026-01-12 13:19:29.566762669 +0000 UTC m=+768.610981360" watchObservedRunningTime="2026-01-12 13:19:29.567133256 +0000 UTC m=+768.611351945"
Jan 12 13:19:29 crc kubenswrapper[4580]: I0112 13:19:29.584337 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-nbzhm" podStartSLOduration=2.069254303 podStartE2EDuration="19.584311984s" podCreationTimestamp="2026-01-12 13:19:10 +0000 UTC" firstStartedPulling="2026-01-12 13:19:11.696543881 +0000 UTC m=+750.740762571" lastFinishedPulling="2026-01-12 13:19:29.211601562 +0000 UTC m=+768.255820252" observedRunningTime="2026-01-12 13:19:29.582574368 +0000 UTC m=+768.626793058" watchObservedRunningTime="2026-01-12 13:19:29.584311984 +0000 UTC m=+768.628530674"
Jan 12 13:19:29 crc kubenswrapper[4580]: I0112 13:19:29.597175 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-74bd5457c5-95bcj" podStartSLOduration=2.138427061 podStartE2EDuration="19.597160013s" podCreationTimestamp="2026-01-12 13:19:10 +0000 UTC" firstStartedPulling="2026-01-12 13:19:11.711956781 +0000 UTC m=+750.756175471" lastFinishedPulling="2026-01-12 13:19:29.170689733 +0000 UTC m=+768.214908423" observedRunningTime="2026-01-12 13:19:29.596090193 +0000 UTC m=+768.640308883" watchObservedRunningTime="2026-01-12 13:19:29.597160013 +0000 UTC m=+768.641378703"
Jan 12 13:19:29 crc kubenswrapper[4580]: I0112 13:19:29.610493 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-78c6bccb56-mggmh" podStartSLOduration=2.11305558 podStartE2EDuration="19.610479158s" podCreationTimestamp="2026-01-12 13:19:10 +0000 UTC" firstStartedPulling="2026-01-12 13:19:11.714297539 +0000 UTC m=+750.758516229" lastFinishedPulling="2026-01-12 13:19:29.211721118 +0000 UTC m=+768.255939807" observedRunningTime="2026-01-12 13:19:29.606529938 +0000 UTC m=+768.650748628" watchObservedRunningTime="2026-01-12 13:19:29.610479158 +0000 UTC m=+768.654697849"
Jan 12 13:19:30 crc kubenswrapper[4580]: I0112 13:19:30.370755 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-4b7c9"
Jan 12 13:19:30 crc kubenswrapper[4580]: I0112 13:19:30.400656 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-6c697f55f8-69mz9"
Jan 12 13:19:30 crc kubenswrapper[4580]: I0112 13:19:30.440646 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-9f958b845-plhvp"
Jan 12 13:19:30 crc kubenswrapper[4580]: I0112 13:19:30.449033 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-75b858dccc-nr2g4"
Jan 12 13:19:30 crc kubenswrapper[4580]: I0112 13:19:30.506580 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-6cd7bcb4bf-nvbml"
Jan 12 13:19:30 crc kubenswrapper[4580]: I0112 13:19:30.530950 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-75cb9467dc-r22fp"
Jan 12 13:19:30 crc kubenswrapper[4580]: I0112 13:19:30.552709 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-plnl2"
Jan 12 13:19:30 crc kubenswrapper[4580]: I0112 13:19:30.568941 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-fckr8"
Jan 12 13:19:30 crc kubenswrapper[4580]: I0112 13:19:30.649845 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-6684f856f9-w2xhg"
Jan 12 13:19:30 crc kubenswrapper[4580]: I0112 13:19:30.657552 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-8fxxj"
Jan 12 13:19:30 crc kubenswrapper[4580]: I0112 13:19:30.657631 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-ckfhs"
Jan 12 13:19:30 crc kubenswrapper[4580]: I0112 13:19:30.679819 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-5977959f9c-sgg8q"
Jan 12 13:19:30 crc kubenswrapper[4580]: I0112 13:19:30.736158 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-cf664874d-vznwd"
Jan 12 13:19:31 crc kubenswrapper[4580]: I0112 13:19:31.058932 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-2z5v7"
Jan 12 13:19:31 crc kubenswrapper[4580]: I0112 13:19:31.556361 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6469d85bcb-smn7v" event={"ID":"6a4af572-980a-4c9b-8d01-df30e894dcda","Type":"ContainerStarted","Data":"4182ef15aa6f47da73d7792335b628433691138712db5e84b2c7a5664421b5c5"} Jan
12 13:19:31 crc kubenswrapper[4580]: I0112 13:19:31.556567 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-6469d85bcb-smn7v" Jan 12 13:19:31 crc kubenswrapper[4580]: I0112 13:19:31.580886 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-6469d85bcb-smn7v" podStartSLOduration=2.471354773 podStartE2EDuration="21.58086556s" podCreationTimestamp="2026-01-12 13:19:10 +0000 UTC" firstStartedPulling="2026-01-12 13:19:11.712010552 +0000 UTC m=+750.756229241" lastFinishedPulling="2026-01-12 13:19:30.821521338 +0000 UTC m=+769.865740028" observedRunningTime="2026-01-12 13:19:31.574347133 +0000 UTC m=+770.618565823" watchObservedRunningTime="2026-01-12 13:19:31.58086556 +0000 UTC m=+770.625084250" Jan 12 13:19:40 crc kubenswrapper[4580]: I0112 13:19:40.721139 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-nbzhm" Jan 12 13:19:40 crc kubenswrapper[4580]: I0112 13:19:40.756532 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-78c6bccb56-mggmh" Jan 12 13:19:40 crc kubenswrapper[4580]: I0112 13:19:40.765134 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-6469d85bcb-smn7v" Jan 12 13:19:40 crc kubenswrapper[4580]: I0112 13:19:40.815598 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-74bd5457c5-95bcj" Jan 12 13:19:40 crc kubenswrapper[4580]: I0112 13:19:40.893008 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-698b874cb5-4v5jb" Jan 12 13:19:42 crc kubenswrapper[4580]: I0112 13:19:42.104163 
4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1135f51b-1f4e-4866-bb7d-728be53f5be7-cert\") pod \"infra-operator-controller-manager-77c48c7859-2sg8z\" (UID: \"1135f51b-1f4e-4866-bb7d-728be53f5be7\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-2sg8z" Jan 12 13:19:42 crc kubenswrapper[4580]: I0112 13:19:42.109252 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1135f51b-1f4e-4866-bb7d-728be53f5be7-cert\") pod \"infra-operator-controller-manager-77c48c7859-2sg8z\" (UID: \"1135f51b-1f4e-4866-bb7d-728be53f5be7\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-2sg8z" Jan 12 13:19:42 crc kubenswrapper[4580]: I0112 13:19:42.305572 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ccb61890-3cf7-45aa-974c-693f0d14c14a-cert\") pod \"openstack-baremetal-operator-controller-manager-654686dcb9z5ths\" (UID: \"ccb61890-3cf7-45aa-974c-693f0d14c14a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-654686dcb9z5ths" Jan 12 13:19:42 crc kubenswrapper[4580]: I0112 13:19:42.308973 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ccb61890-3cf7-45aa-974c-693f0d14c14a-cert\") pod \"openstack-baremetal-operator-controller-manager-654686dcb9z5ths\" (UID: \"ccb61890-3cf7-45aa-974c-693f0d14c14a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-654686dcb9z5ths" Jan 12 13:19:42 crc kubenswrapper[4580]: I0112 13:19:42.312178 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-kcmjq" Jan 12 13:19:42 crc kubenswrapper[4580]: I0112 13:19:42.320623 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-2sg8z" Jan 12 13:19:42 crc kubenswrapper[4580]: I0112 13:19:42.538987 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-d8c85" Jan 12 13:19:42 crc kubenswrapper[4580]: I0112 13:19:42.548233 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-654686dcb9z5ths" Jan 12 13:19:42 crc kubenswrapper[4580]: I0112 13:19:42.685857 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-77c48c7859-2sg8z"] Jan 12 13:19:42 crc kubenswrapper[4580]: W0112 13:19:42.689763 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1135f51b_1f4e_4866_bb7d_728be53f5be7.slice/crio-8812f9471c48a680d3a7f9406a1cd2811fb9995cfb5a3c7ff479ccaae4e11e52 WatchSource:0}: Error finding container 8812f9471c48a680d3a7f9406a1cd2811fb9995cfb5a3c7ff479ccaae4e11e52: Status 404 returned error can't find the container with id 8812f9471c48a680d3a7f9406a1cd2811fb9995cfb5a3c7ff479ccaae4e11e52 Jan 12 13:19:42 crc kubenswrapper[4580]: I0112 13:19:42.712892 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/87237fc1-15cd-4dd9-bcfe-5a334d366896-metrics-certs\") pod \"openstack-operator-controller-manager-6659c7dc85-4p8jr\" (UID: \"87237fc1-15cd-4dd9-bcfe-5a334d366896\") " pod="openstack-operators/openstack-operator-controller-manager-6659c7dc85-4p8jr" Jan 12 13:19:42 crc kubenswrapper[4580]: I0112 13:19:42.716965 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/87237fc1-15cd-4dd9-bcfe-5a334d366896-metrics-certs\") pod 
\"openstack-operator-controller-manager-6659c7dc85-4p8jr\" (UID: \"87237fc1-15cd-4dd9-bcfe-5a334d366896\") " pod="openstack-operators/openstack-operator-controller-manager-6659c7dc85-4p8jr" Jan 12 13:19:42 crc kubenswrapper[4580]: I0112 13:19:42.719919 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-cgjlj" Jan 12 13:19:42 crc kubenswrapper[4580]: I0112 13:19:42.729165 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6659c7dc85-4p8jr" Jan 12 13:19:42 crc kubenswrapper[4580]: I0112 13:19:42.895773 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-654686dcb9z5ths"] Jan 12 13:19:42 crc kubenswrapper[4580]: W0112 13:19:42.898313 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podccb61890_3cf7_45aa_974c_693f0d14c14a.slice/crio-9944b259403180a574c04b0848481f6406308016dbd55e46f730e9ab4e2ec33a WatchSource:0}: Error finding container 9944b259403180a574c04b0848481f6406308016dbd55e46f730e9ab4e2ec33a: Status 404 returned error can't find the container with id 9944b259403180a574c04b0848481f6406308016dbd55e46f730e9ab4e2ec33a Jan 12 13:19:43 crc kubenswrapper[4580]: I0112 13:19:43.151880 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6659c7dc85-4p8jr"] Jan 12 13:19:43 crc kubenswrapper[4580]: W0112 13:19:43.160253 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87237fc1_15cd_4dd9_bcfe_5a334d366896.slice/crio-19ffa3149aa353e8e0bdb05e1fbeca201b0248659da4e6293e7166dd19e51bf9 WatchSource:0}: Error finding container 19ffa3149aa353e8e0bdb05e1fbeca201b0248659da4e6293e7166dd19e51bf9: Status 404 
returned error can't find the container with id 19ffa3149aa353e8e0bdb05e1fbeca201b0248659da4e6293e7166dd19e51bf9 Jan 12 13:19:43 crc kubenswrapper[4580]: I0112 13:19:43.622649 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6659c7dc85-4p8jr" event={"ID":"87237fc1-15cd-4dd9-bcfe-5a334d366896","Type":"ContainerStarted","Data":"82694148a96997bef3b81191c0764f496a8f1c5bc901c267989d5db684cf8557"} Jan 12 13:19:43 crc kubenswrapper[4580]: I0112 13:19:43.623004 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6659c7dc85-4p8jr" event={"ID":"87237fc1-15cd-4dd9-bcfe-5a334d366896","Type":"ContainerStarted","Data":"19ffa3149aa353e8e0bdb05e1fbeca201b0248659da4e6293e7166dd19e51bf9"} Jan 12 13:19:43 crc kubenswrapper[4580]: I0112 13:19:43.623743 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-6659c7dc85-4p8jr" Jan 12 13:19:43 crc kubenswrapper[4580]: I0112 13:19:43.628314 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-654686dcb9z5ths" event={"ID":"ccb61890-3cf7-45aa-974c-693f0d14c14a","Type":"ContainerStarted","Data":"9944b259403180a574c04b0848481f6406308016dbd55e46f730e9ab4e2ec33a"} Jan 12 13:19:43 crc kubenswrapper[4580]: I0112 13:19:43.633211 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-2sg8z" event={"ID":"1135f51b-1f4e-4866-bb7d-728be53f5be7","Type":"ContainerStarted","Data":"8812f9471c48a680d3a7f9406a1cd2811fb9995cfb5a3c7ff479ccaae4e11e52"} Jan 12 13:19:43 crc kubenswrapper[4580]: I0112 13:19:43.668852 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-6659c7dc85-4p8jr" podStartSLOduration=33.668834217 
podStartE2EDuration="33.668834217s" podCreationTimestamp="2026-01-12 13:19:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:19:43.657478832 +0000 UTC m=+782.701697522" watchObservedRunningTime="2026-01-12 13:19:43.668834217 +0000 UTC m=+782.713052908" Jan 12 13:19:45 crc kubenswrapper[4580]: I0112 13:19:45.656370 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-654686dcb9z5ths" event={"ID":"ccb61890-3cf7-45aa-974c-693f0d14c14a","Type":"ContainerStarted","Data":"37b0bf348dd52e0e7d49d7cd0e4164380be4a4da1d1438015f063c4326c60d14"} Jan 12 13:19:45 crc kubenswrapper[4580]: I0112 13:19:45.656903 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-654686dcb9z5ths" Jan 12 13:19:45 crc kubenswrapper[4580]: I0112 13:19:45.658305 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-2sg8z" event={"ID":"1135f51b-1f4e-4866-bb7d-728be53f5be7","Type":"ContainerStarted","Data":"2ccec2b54bda7d43578e345a97e10c3bb7cf133f9fdfeeebd556481eaeff5ee2"} Jan 12 13:19:45 crc kubenswrapper[4580]: I0112 13:19:45.658569 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-2sg8z" Jan 12 13:19:45 crc kubenswrapper[4580]: I0112 13:19:45.681708 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-654686dcb9z5ths" podStartSLOduration=33.532187694 podStartE2EDuration="35.681689814s" podCreationTimestamp="2026-01-12 13:19:10 +0000 UTC" firstStartedPulling="2026-01-12 13:19:42.899974478 +0000 UTC m=+781.944193168" lastFinishedPulling="2026-01-12 13:19:45.049476598 +0000 UTC m=+784.093695288" 
observedRunningTime="2026-01-12 13:19:45.680373672 +0000 UTC m=+784.724592353" watchObservedRunningTime="2026-01-12 13:19:45.681689814 +0000 UTC m=+784.725908505" Jan 12 13:19:45 crc kubenswrapper[4580]: I0112 13:19:45.698244 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-2sg8z" podStartSLOduration=33.84338056 podStartE2EDuration="35.698224071s" podCreationTimestamp="2026-01-12 13:19:10 +0000 UTC" firstStartedPulling="2026-01-12 13:19:42.691828495 +0000 UTC m=+781.736047185" lastFinishedPulling="2026-01-12 13:19:44.546672006 +0000 UTC m=+783.590890696" observedRunningTime="2026-01-12 13:19:45.69515754 +0000 UTC m=+784.739376230" watchObservedRunningTime="2026-01-12 13:19:45.698224071 +0000 UTC m=+784.742442761" Jan 12 13:19:46 crc kubenswrapper[4580]: I0112 13:19:46.949001 4580 patch_prober.go:28] interesting pod/machine-config-daemon-hdz6l container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 12 13:19:46 crc kubenswrapper[4580]: I0112 13:19:46.949114 4580 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 12 13:19:52 crc kubenswrapper[4580]: I0112 13:19:52.325604 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-2sg8z" Jan 12 13:19:52 crc kubenswrapper[4580]: I0112 13:19:52.562819 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-654686dcb9z5ths" Jan 12 13:19:52 crc kubenswrapper[4580]: I0112 13:19:52.734893 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-6659c7dc85-4p8jr" Jan 12 13:20:06 crc kubenswrapper[4580]: I0112 13:20:06.852546 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-djs49"] Jan 12 13:20:06 crc kubenswrapper[4580]: I0112 13:20:06.853803 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-djs49" Jan 12 13:20:06 crc kubenswrapper[4580]: I0112 13:20:06.858217 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 12 13:20:06 crc kubenswrapper[4580]: I0112 13:20:06.858262 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 12 13:20:06 crc kubenswrapper[4580]: I0112 13:20:06.858448 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 12 13:20:06 crc kubenswrapper[4580]: I0112 13:20:06.859679 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-pwhn2" Jan 12 13:20:06 crc kubenswrapper[4580]: I0112 13:20:06.862527 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-djs49"] Jan 12 13:20:06 crc kubenswrapper[4580]: I0112 13:20:06.915645 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-t7628"] Jan 12 13:20:06 crc kubenswrapper[4580]: I0112 13:20:06.917464 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-t7628" Jan 12 13:20:06 crc kubenswrapper[4580]: I0112 13:20:06.919723 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 12 13:20:06 crc kubenswrapper[4580]: I0112 13:20:06.925665 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-t7628"] Jan 12 13:20:07 crc kubenswrapper[4580]: I0112 13:20:07.011141 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210f1093-7755-4bc6-ab92-d7f769b8b5d1-config\") pod \"dnsmasq-dns-84bb9d8bd9-djs49\" (UID: \"210f1093-7755-4bc6-ab92-d7f769b8b5d1\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-djs49" Jan 12 13:20:07 crc kubenswrapper[4580]: I0112 13:20:07.011405 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sr6b\" (UniqueName: \"kubernetes.io/projected/210f1093-7755-4bc6-ab92-d7f769b8b5d1-kube-api-access-4sr6b\") pod \"dnsmasq-dns-84bb9d8bd9-djs49\" (UID: \"210f1093-7755-4bc6-ab92-d7f769b8b5d1\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-djs49" Jan 12 13:20:07 crc kubenswrapper[4580]: I0112 13:20:07.112902 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43e36388-f2bb-4ecb-802a-24f2b97d34b0-config\") pod \"dnsmasq-dns-5f854695bc-t7628\" (UID: \"43e36388-f2bb-4ecb-802a-24f2b97d34b0\") " pod="openstack/dnsmasq-dns-5f854695bc-t7628" Jan 12 13:20:07 crc kubenswrapper[4580]: I0112 13:20:07.112940 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/43e36388-f2bb-4ecb-802a-24f2b97d34b0-dns-svc\") pod \"dnsmasq-dns-5f854695bc-t7628\" (UID: \"43e36388-f2bb-4ecb-802a-24f2b97d34b0\") " pod="openstack/dnsmasq-dns-5f854695bc-t7628" Jan 12 
13:20:07 crc kubenswrapper[4580]: I0112 13:20:07.112965 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sr6b\" (UniqueName: \"kubernetes.io/projected/210f1093-7755-4bc6-ab92-d7f769b8b5d1-kube-api-access-4sr6b\") pod \"dnsmasq-dns-84bb9d8bd9-djs49\" (UID: \"210f1093-7755-4bc6-ab92-d7f769b8b5d1\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-djs49" Jan 12 13:20:07 crc kubenswrapper[4580]: I0112 13:20:07.113011 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tf8g9\" (UniqueName: \"kubernetes.io/projected/43e36388-f2bb-4ecb-802a-24f2b97d34b0-kube-api-access-tf8g9\") pod \"dnsmasq-dns-5f854695bc-t7628\" (UID: \"43e36388-f2bb-4ecb-802a-24f2b97d34b0\") " pod="openstack/dnsmasq-dns-5f854695bc-t7628" Jan 12 13:20:07 crc kubenswrapper[4580]: I0112 13:20:07.113033 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210f1093-7755-4bc6-ab92-d7f769b8b5d1-config\") pod \"dnsmasq-dns-84bb9d8bd9-djs49\" (UID: \"210f1093-7755-4bc6-ab92-d7f769b8b5d1\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-djs49" Jan 12 13:20:07 crc kubenswrapper[4580]: I0112 13:20:07.113892 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210f1093-7755-4bc6-ab92-d7f769b8b5d1-config\") pod \"dnsmasq-dns-84bb9d8bd9-djs49\" (UID: \"210f1093-7755-4bc6-ab92-d7f769b8b5d1\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-djs49" Jan 12 13:20:07 crc kubenswrapper[4580]: I0112 13:20:07.128713 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sr6b\" (UniqueName: \"kubernetes.io/projected/210f1093-7755-4bc6-ab92-d7f769b8b5d1-kube-api-access-4sr6b\") pod \"dnsmasq-dns-84bb9d8bd9-djs49\" (UID: \"210f1093-7755-4bc6-ab92-d7f769b8b5d1\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-djs49" Jan 12 13:20:07 crc kubenswrapper[4580]: I0112 
13:20:07.167501 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-djs49" Jan 12 13:20:07 crc kubenswrapper[4580]: I0112 13:20:07.213965 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43e36388-f2bb-4ecb-802a-24f2b97d34b0-config\") pod \"dnsmasq-dns-5f854695bc-t7628\" (UID: \"43e36388-f2bb-4ecb-802a-24f2b97d34b0\") " pod="openstack/dnsmasq-dns-5f854695bc-t7628" Jan 12 13:20:07 crc kubenswrapper[4580]: I0112 13:20:07.214016 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/43e36388-f2bb-4ecb-802a-24f2b97d34b0-dns-svc\") pod \"dnsmasq-dns-5f854695bc-t7628\" (UID: \"43e36388-f2bb-4ecb-802a-24f2b97d34b0\") " pod="openstack/dnsmasq-dns-5f854695bc-t7628" Jan 12 13:20:07 crc kubenswrapper[4580]: I0112 13:20:07.214072 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tf8g9\" (UniqueName: \"kubernetes.io/projected/43e36388-f2bb-4ecb-802a-24f2b97d34b0-kube-api-access-tf8g9\") pod \"dnsmasq-dns-5f854695bc-t7628\" (UID: \"43e36388-f2bb-4ecb-802a-24f2b97d34b0\") " pod="openstack/dnsmasq-dns-5f854695bc-t7628" Jan 12 13:20:07 crc kubenswrapper[4580]: I0112 13:20:07.215056 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43e36388-f2bb-4ecb-802a-24f2b97d34b0-config\") pod \"dnsmasq-dns-5f854695bc-t7628\" (UID: \"43e36388-f2bb-4ecb-802a-24f2b97d34b0\") " pod="openstack/dnsmasq-dns-5f854695bc-t7628" Jan 12 13:20:07 crc kubenswrapper[4580]: I0112 13:20:07.215157 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/43e36388-f2bb-4ecb-802a-24f2b97d34b0-dns-svc\") pod \"dnsmasq-dns-5f854695bc-t7628\" (UID: \"43e36388-f2bb-4ecb-802a-24f2b97d34b0\") " 
pod="openstack/dnsmasq-dns-5f854695bc-t7628" Jan 12 13:20:07 crc kubenswrapper[4580]: I0112 13:20:07.227885 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tf8g9\" (UniqueName: \"kubernetes.io/projected/43e36388-f2bb-4ecb-802a-24f2b97d34b0-kube-api-access-tf8g9\") pod \"dnsmasq-dns-5f854695bc-t7628\" (UID: \"43e36388-f2bb-4ecb-802a-24f2b97d34b0\") " pod="openstack/dnsmasq-dns-5f854695bc-t7628" Jan 12 13:20:07 crc kubenswrapper[4580]: I0112 13:20:07.232276 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-t7628" Jan 12 13:20:07 crc kubenswrapper[4580]: I0112 13:20:07.519724 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-djs49"] Jan 12 13:20:07 crc kubenswrapper[4580]: W0112 13:20:07.522678 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod210f1093_7755_4bc6_ab92_d7f769b8b5d1.slice/crio-e240f2a095bcc8c9a7ec7a3e39ae893bf85203612b2351bfeab7ceeb265398c5 WatchSource:0}: Error finding container e240f2a095bcc8c9a7ec7a3e39ae893bf85203612b2351bfeab7ceeb265398c5: Status 404 returned error can't find the container with id e240f2a095bcc8c9a7ec7a3e39ae893bf85203612b2351bfeab7ceeb265398c5 Jan 12 13:20:07 crc kubenswrapper[4580]: W0112 13:20:07.583687 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43e36388_f2bb_4ecb_802a_24f2b97d34b0.slice/crio-69c620af22a81beb84d78977480abb7b5cb6ff55a0db14976e60a3472a80dee4 WatchSource:0}: Error finding container 69c620af22a81beb84d78977480abb7b5cb6ff55a0db14976e60a3472a80dee4: Status 404 returned error can't find the container with id 69c620af22a81beb84d78977480abb7b5cb6ff55a0db14976e60a3472a80dee4 Jan 12 13:20:07 crc kubenswrapper[4580]: I0112 13:20:07.584318 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/dnsmasq-dns-5f854695bc-t7628"] Jan 12 13:20:07 crc kubenswrapper[4580]: I0112 13:20:07.775262 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f854695bc-t7628" event={"ID":"43e36388-f2bb-4ecb-802a-24f2b97d34b0","Type":"ContainerStarted","Data":"69c620af22a81beb84d78977480abb7b5cb6ff55a0db14976e60a3472a80dee4"} Jan 12 13:20:07 crc kubenswrapper[4580]: I0112 13:20:07.776211 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bb9d8bd9-djs49" event={"ID":"210f1093-7755-4bc6-ab92-d7f769b8b5d1","Type":"ContainerStarted","Data":"e240f2a095bcc8c9a7ec7a3e39ae893bf85203612b2351bfeab7ceeb265398c5"} Jan 12 13:20:09 crc kubenswrapper[4580]: I0112 13:20:09.856262 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-t7628"] Jan 12 13:20:09 crc kubenswrapper[4580]: I0112 13:20:09.872394 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-wqfbd"] Jan 12 13:20:09 crc kubenswrapper[4580]: I0112 13:20:09.873345 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-wqfbd" Jan 12 13:20:09 crc kubenswrapper[4580]: I0112 13:20:09.884383 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-wqfbd"] Jan 12 13:20:10 crc kubenswrapper[4580]: I0112 13:20:10.059828 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37118f18-81ad-44c8-bc5e-8e8f3333193f-config\") pod \"dnsmasq-dns-744ffd65bc-wqfbd\" (UID: \"37118f18-81ad-44c8-bc5e-8e8f3333193f\") " pod="openstack/dnsmasq-dns-744ffd65bc-wqfbd" Jan 12 13:20:10 crc kubenswrapper[4580]: I0112 13:20:10.059886 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdw5s\" (UniqueName: \"kubernetes.io/projected/37118f18-81ad-44c8-bc5e-8e8f3333193f-kube-api-access-wdw5s\") pod \"dnsmasq-dns-744ffd65bc-wqfbd\" (UID: \"37118f18-81ad-44c8-bc5e-8e8f3333193f\") " pod="openstack/dnsmasq-dns-744ffd65bc-wqfbd" Jan 12 13:20:10 crc kubenswrapper[4580]: I0112 13:20:10.059905 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37118f18-81ad-44c8-bc5e-8e8f3333193f-dns-svc\") pod \"dnsmasq-dns-744ffd65bc-wqfbd\" (UID: \"37118f18-81ad-44c8-bc5e-8e8f3333193f\") " pod="openstack/dnsmasq-dns-744ffd65bc-wqfbd" Jan 12 13:20:10 crc kubenswrapper[4580]: I0112 13:20:10.104273 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-djs49"] Jan 12 13:20:10 crc kubenswrapper[4580]: I0112 13:20:10.121204 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-cpjhh"] Jan 12 13:20:10 crc kubenswrapper[4580]: I0112 13:20:10.122130 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-cpjhh" Jan 12 13:20:10 crc kubenswrapper[4580]: I0112 13:20:10.133664 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-cpjhh"] Jan 12 13:20:10 crc kubenswrapper[4580]: I0112 13:20:10.160593 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37118f18-81ad-44c8-bc5e-8e8f3333193f-config\") pod \"dnsmasq-dns-744ffd65bc-wqfbd\" (UID: \"37118f18-81ad-44c8-bc5e-8e8f3333193f\") " pod="openstack/dnsmasq-dns-744ffd65bc-wqfbd" Jan 12 13:20:10 crc kubenswrapper[4580]: I0112 13:20:10.160711 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdw5s\" (UniqueName: \"kubernetes.io/projected/37118f18-81ad-44c8-bc5e-8e8f3333193f-kube-api-access-wdw5s\") pod \"dnsmasq-dns-744ffd65bc-wqfbd\" (UID: \"37118f18-81ad-44c8-bc5e-8e8f3333193f\") " pod="openstack/dnsmasq-dns-744ffd65bc-wqfbd" Jan 12 13:20:10 crc kubenswrapper[4580]: I0112 13:20:10.160732 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37118f18-81ad-44c8-bc5e-8e8f3333193f-dns-svc\") pod \"dnsmasq-dns-744ffd65bc-wqfbd\" (UID: \"37118f18-81ad-44c8-bc5e-8e8f3333193f\") " pod="openstack/dnsmasq-dns-744ffd65bc-wqfbd" Jan 12 13:20:10 crc kubenswrapper[4580]: I0112 13:20:10.161596 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37118f18-81ad-44c8-bc5e-8e8f3333193f-dns-svc\") pod \"dnsmasq-dns-744ffd65bc-wqfbd\" (UID: \"37118f18-81ad-44c8-bc5e-8e8f3333193f\") " pod="openstack/dnsmasq-dns-744ffd65bc-wqfbd" Jan 12 13:20:10 crc kubenswrapper[4580]: I0112 13:20:10.161733 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37118f18-81ad-44c8-bc5e-8e8f3333193f-config\") pod 
\"dnsmasq-dns-744ffd65bc-wqfbd\" (UID: \"37118f18-81ad-44c8-bc5e-8e8f3333193f\") " pod="openstack/dnsmasq-dns-744ffd65bc-wqfbd" Jan 12 13:20:10 crc kubenswrapper[4580]: I0112 13:20:10.178365 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdw5s\" (UniqueName: \"kubernetes.io/projected/37118f18-81ad-44c8-bc5e-8e8f3333193f-kube-api-access-wdw5s\") pod \"dnsmasq-dns-744ffd65bc-wqfbd\" (UID: \"37118f18-81ad-44c8-bc5e-8e8f3333193f\") " pod="openstack/dnsmasq-dns-744ffd65bc-wqfbd" Jan 12 13:20:10 crc kubenswrapper[4580]: I0112 13:20:10.197074 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-wqfbd" Jan 12 13:20:10 crc kubenswrapper[4580]: I0112 13:20:10.261901 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b55248fd-efe4-466d-b3a3-fa9177008120-config\") pod \"dnsmasq-dns-95f5f6995-cpjhh\" (UID: \"b55248fd-efe4-466d-b3a3-fa9177008120\") " pod="openstack/dnsmasq-dns-95f5f6995-cpjhh" Jan 12 13:20:10 crc kubenswrapper[4580]: I0112 13:20:10.262009 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b55248fd-efe4-466d-b3a3-fa9177008120-dns-svc\") pod \"dnsmasq-dns-95f5f6995-cpjhh\" (UID: \"b55248fd-efe4-466d-b3a3-fa9177008120\") " pod="openstack/dnsmasq-dns-95f5f6995-cpjhh" Jan 12 13:20:10 crc kubenswrapper[4580]: I0112 13:20:10.262112 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzwmh\" (UniqueName: \"kubernetes.io/projected/b55248fd-efe4-466d-b3a3-fa9177008120-kube-api-access-tzwmh\") pod \"dnsmasq-dns-95f5f6995-cpjhh\" (UID: \"b55248fd-efe4-466d-b3a3-fa9177008120\") " pod="openstack/dnsmasq-dns-95f5f6995-cpjhh" Jan 12 13:20:10 crc kubenswrapper[4580]: I0112 13:20:10.363728 4580 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzwmh\" (UniqueName: \"kubernetes.io/projected/b55248fd-efe4-466d-b3a3-fa9177008120-kube-api-access-tzwmh\") pod \"dnsmasq-dns-95f5f6995-cpjhh\" (UID: \"b55248fd-efe4-466d-b3a3-fa9177008120\") " pod="openstack/dnsmasq-dns-95f5f6995-cpjhh" Jan 12 13:20:10 crc kubenswrapper[4580]: I0112 13:20:10.363839 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b55248fd-efe4-466d-b3a3-fa9177008120-config\") pod \"dnsmasq-dns-95f5f6995-cpjhh\" (UID: \"b55248fd-efe4-466d-b3a3-fa9177008120\") " pod="openstack/dnsmasq-dns-95f5f6995-cpjhh" Jan 12 13:20:10 crc kubenswrapper[4580]: I0112 13:20:10.363864 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b55248fd-efe4-466d-b3a3-fa9177008120-dns-svc\") pod \"dnsmasq-dns-95f5f6995-cpjhh\" (UID: \"b55248fd-efe4-466d-b3a3-fa9177008120\") " pod="openstack/dnsmasq-dns-95f5f6995-cpjhh" Jan 12 13:20:10 crc kubenswrapper[4580]: I0112 13:20:10.364741 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b55248fd-efe4-466d-b3a3-fa9177008120-dns-svc\") pod \"dnsmasq-dns-95f5f6995-cpjhh\" (UID: \"b55248fd-efe4-466d-b3a3-fa9177008120\") " pod="openstack/dnsmasq-dns-95f5f6995-cpjhh" Jan 12 13:20:10 crc kubenswrapper[4580]: I0112 13:20:10.365187 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b55248fd-efe4-466d-b3a3-fa9177008120-config\") pod \"dnsmasq-dns-95f5f6995-cpjhh\" (UID: \"b55248fd-efe4-466d-b3a3-fa9177008120\") " pod="openstack/dnsmasq-dns-95f5f6995-cpjhh" Jan 12 13:20:10 crc kubenswrapper[4580]: I0112 13:20:10.382658 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzwmh\" (UniqueName: 
\"kubernetes.io/projected/b55248fd-efe4-466d-b3a3-fa9177008120-kube-api-access-tzwmh\") pod \"dnsmasq-dns-95f5f6995-cpjhh\" (UID: \"b55248fd-efe4-466d-b3a3-fa9177008120\") " pod="openstack/dnsmasq-dns-95f5f6995-cpjhh" Jan 12 13:20:10 crc kubenswrapper[4580]: I0112 13:20:10.441534 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-cpjhh" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.032638 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.033711 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.037961 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.040267 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.040298 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.040326 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.040401 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-lwp97" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.040660 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.040719 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.040729 4580 reflector.go:368] Caches populated for 
*v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.189650 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/20148d96-39b6-4278-9d29-91874ad352a0-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"20148d96-39b6-4278-9d29-91874ad352a0\") " pod="openstack/rabbitmq-server-0" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.189696 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/20148d96-39b6-4278-9d29-91874ad352a0-server-conf\") pod \"rabbitmq-server-0\" (UID: \"20148d96-39b6-4278-9d29-91874ad352a0\") " pod="openstack/rabbitmq-server-0" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.189715 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/20148d96-39b6-4278-9d29-91874ad352a0-config-data\") pod \"rabbitmq-server-0\" (UID: \"20148d96-39b6-4278-9d29-91874ad352a0\") " pod="openstack/rabbitmq-server-0" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.189741 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/20148d96-39b6-4278-9d29-91874ad352a0-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"20148d96-39b6-4278-9d29-91874ad352a0\") " pod="openstack/rabbitmq-server-0" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.189764 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/20148d96-39b6-4278-9d29-91874ad352a0-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"20148d96-39b6-4278-9d29-91874ad352a0\") " pod="openstack/rabbitmq-server-0" Jan 12 13:20:11 crc 
kubenswrapper[4580]: I0112 13:20:11.189900 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/20148d96-39b6-4278-9d29-91874ad352a0-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"20148d96-39b6-4278-9d29-91874ad352a0\") " pod="openstack/rabbitmq-server-0" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.189936 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/20148d96-39b6-4278-9d29-91874ad352a0-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"20148d96-39b6-4278-9d29-91874ad352a0\") " pod="openstack/rabbitmq-server-0" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.189961 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmjh7\" (UniqueName: \"kubernetes.io/projected/20148d96-39b6-4278-9d29-91874ad352a0-kube-api-access-wmjh7\") pod \"rabbitmq-server-0\" (UID: \"20148d96-39b6-4278-9d29-91874ad352a0\") " pod="openstack/rabbitmq-server-0" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.190070 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/20148d96-39b6-4278-9d29-91874ad352a0-pod-info\") pod \"rabbitmq-server-0\" (UID: \"20148d96-39b6-4278-9d29-91874ad352a0\") " pod="openstack/rabbitmq-server-0" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.190206 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"20148d96-39b6-4278-9d29-91874ad352a0\") " pod="openstack/rabbitmq-server-0" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.190249 4580 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/20148d96-39b6-4278-9d29-91874ad352a0-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"20148d96-39b6-4278-9d29-91874ad352a0\") " pod="openstack/rabbitmq-server-0" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.242746 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.243699 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.248229 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.249634 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-5mn6v" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.249958 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.250148 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.250330 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.250343 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.250432 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.254086 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/rabbitmq-cell1-server-0"] Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.291863 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"20148d96-39b6-4278-9d29-91874ad352a0\") " pod="openstack/rabbitmq-server-0" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.291898 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/20148d96-39b6-4278-9d29-91874ad352a0-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"20148d96-39b6-4278-9d29-91874ad352a0\") " pod="openstack/rabbitmq-server-0" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.291923 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/20148d96-39b6-4278-9d29-91874ad352a0-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"20148d96-39b6-4278-9d29-91874ad352a0\") " pod="openstack/rabbitmq-server-0" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.291946 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/20148d96-39b6-4278-9d29-91874ad352a0-server-conf\") pod \"rabbitmq-server-0\" (UID: \"20148d96-39b6-4278-9d29-91874ad352a0\") " pod="openstack/rabbitmq-server-0" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.291963 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/20148d96-39b6-4278-9d29-91874ad352a0-config-data\") pod \"rabbitmq-server-0\" (UID: \"20148d96-39b6-4278-9d29-91874ad352a0\") " pod="openstack/rabbitmq-server-0" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.291985 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" 
(UniqueName: \"kubernetes.io/projected/20148d96-39b6-4278-9d29-91874ad352a0-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"20148d96-39b6-4278-9d29-91874ad352a0\") " pod="openstack/rabbitmq-server-0" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.292003 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/20148d96-39b6-4278-9d29-91874ad352a0-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"20148d96-39b6-4278-9d29-91874ad352a0\") " pod="openstack/rabbitmq-server-0" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.292033 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/20148d96-39b6-4278-9d29-91874ad352a0-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"20148d96-39b6-4278-9d29-91874ad352a0\") " pod="openstack/rabbitmq-server-0" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.292050 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/20148d96-39b6-4278-9d29-91874ad352a0-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"20148d96-39b6-4278-9d29-91874ad352a0\") " pod="openstack/rabbitmq-server-0" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.292070 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmjh7\" (UniqueName: \"kubernetes.io/projected/20148d96-39b6-4278-9d29-91874ad352a0-kube-api-access-wmjh7\") pod \"rabbitmq-server-0\" (UID: \"20148d96-39b6-4278-9d29-91874ad352a0\") " pod="openstack/rabbitmq-server-0" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.292093 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/20148d96-39b6-4278-9d29-91874ad352a0-pod-info\") pod \"rabbitmq-server-0\" (UID: 
\"20148d96-39b6-4278-9d29-91874ad352a0\") " pod="openstack/rabbitmq-server-0" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.293052 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/20148d96-39b6-4278-9d29-91874ad352a0-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"20148d96-39b6-4278-9d29-91874ad352a0\") " pod="openstack/rabbitmq-server-0" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.293481 4580 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"20148d96-39b6-4278-9d29-91874ad352a0\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.293520 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/20148d96-39b6-4278-9d29-91874ad352a0-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"20148d96-39b6-4278-9d29-91874ad352a0\") " pod="openstack/rabbitmq-server-0" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.294207 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/20148d96-39b6-4278-9d29-91874ad352a0-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"20148d96-39b6-4278-9d29-91874ad352a0\") " pod="openstack/rabbitmq-server-0" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.294786 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/20148d96-39b6-4278-9d29-91874ad352a0-config-data\") pod \"rabbitmq-server-0\" (UID: \"20148d96-39b6-4278-9d29-91874ad352a0\") " pod="openstack/rabbitmq-server-0" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.299612 4580 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/20148d96-39b6-4278-9d29-91874ad352a0-pod-info\") pod \"rabbitmq-server-0\" (UID: \"20148d96-39b6-4278-9d29-91874ad352a0\") " pod="openstack/rabbitmq-server-0" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.299892 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/20148d96-39b6-4278-9d29-91874ad352a0-server-conf\") pod \"rabbitmq-server-0\" (UID: \"20148d96-39b6-4278-9d29-91874ad352a0\") " pod="openstack/rabbitmq-server-0" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.303351 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/20148d96-39b6-4278-9d29-91874ad352a0-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"20148d96-39b6-4278-9d29-91874ad352a0\") " pod="openstack/rabbitmq-server-0" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.304217 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/20148d96-39b6-4278-9d29-91874ad352a0-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"20148d96-39b6-4278-9d29-91874ad352a0\") " pod="openstack/rabbitmq-server-0" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.306762 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/20148d96-39b6-4278-9d29-91874ad352a0-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"20148d96-39b6-4278-9d29-91874ad352a0\") " pod="openstack/rabbitmq-server-0" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.314762 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmjh7\" (UniqueName: \"kubernetes.io/projected/20148d96-39b6-4278-9d29-91874ad352a0-kube-api-access-wmjh7\") pod 
\"rabbitmq-server-0\" (UID: \"20148d96-39b6-4278-9d29-91874ad352a0\") " pod="openstack/rabbitmq-server-0" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.323974 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"20148d96-39b6-4278-9d29-91874ad352a0\") " pod="openstack/rabbitmq-server-0" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.354992 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.393436 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3ee1d970-f295-46eb-91eb-70a45cb019c1-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"3ee1d970-f295-46eb-91eb-70a45cb019c1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.393482 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbf9n\" (UniqueName: \"kubernetes.io/projected/3ee1d970-f295-46eb-91eb-70a45cb019c1-kube-api-access-kbf9n\") pod \"rabbitmq-cell1-server-0\" (UID: \"3ee1d970-f295-46eb-91eb-70a45cb019c1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.393503 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3ee1d970-f295-46eb-91eb-70a45cb019c1-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"3ee1d970-f295-46eb-91eb-70a45cb019c1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.393519 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" 
(UniqueName: \"kubernetes.io/projected/3ee1d970-f295-46eb-91eb-70a45cb019c1-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"3ee1d970-f295-46eb-91eb-70a45cb019c1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.393762 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3ee1d970-f295-46eb-91eb-70a45cb019c1-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"3ee1d970-f295-46eb-91eb-70a45cb019c1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.393788 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3ee1d970-f295-46eb-91eb-70a45cb019c1-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3ee1d970-f295-46eb-91eb-70a45cb019c1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.393993 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3ee1d970-f295-46eb-91eb-70a45cb019c1-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"3ee1d970-f295-46eb-91eb-70a45cb019c1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.394039 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3ee1d970-f295-46eb-91eb-70a45cb019c1-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"3ee1d970-f295-46eb-91eb-70a45cb019c1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.394134 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"3ee1d970-f295-46eb-91eb-70a45cb019c1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.394181 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3ee1d970-f295-46eb-91eb-70a45cb019c1-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3ee1d970-f295-46eb-91eb-70a45cb019c1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.394197 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3ee1d970-f295-46eb-91eb-70a45cb019c1-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"3ee1d970-f295-46eb-91eb-70a45cb019c1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.495771 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3ee1d970-f295-46eb-91eb-70a45cb019c1-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"3ee1d970-f295-46eb-91eb-70a45cb019c1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.495828 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"3ee1d970-f295-46eb-91eb-70a45cb019c1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.495853 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3ee1d970-f295-46eb-91eb-70a45cb019c1-plugins-conf\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"3ee1d970-f295-46eb-91eb-70a45cb019c1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.495867 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3ee1d970-f295-46eb-91eb-70a45cb019c1-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"3ee1d970-f295-46eb-91eb-70a45cb019c1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.495888 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3ee1d970-f295-46eb-91eb-70a45cb019c1-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"3ee1d970-f295-46eb-91eb-70a45cb019c1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.495910 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbf9n\" (UniqueName: \"kubernetes.io/projected/3ee1d970-f295-46eb-91eb-70a45cb019c1-kube-api-access-kbf9n\") pod \"rabbitmq-cell1-server-0\" (UID: \"3ee1d970-f295-46eb-91eb-70a45cb019c1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.495927 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3ee1d970-f295-46eb-91eb-70a45cb019c1-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"3ee1d970-f295-46eb-91eb-70a45cb019c1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.495941 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3ee1d970-f295-46eb-91eb-70a45cb019c1-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"3ee1d970-f295-46eb-91eb-70a45cb019c1\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.495960 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3ee1d970-f295-46eb-91eb-70a45cb019c1-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"3ee1d970-f295-46eb-91eb-70a45cb019c1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.495975 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3ee1d970-f295-46eb-91eb-70a45cb019c1-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3ee1d970-f295-46eb-91eb-70a45cb019c1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.496027 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3ee1d970-f295-46eb-91eb-70a45cb019c1-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"3ee1d970-f295-46eb-91eb-70a45cb019c1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.496455 4580 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"3ee1d970-f295-46eb-91eb-70a45cb019c1\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.496782 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3ee1d970-f295-46eb-91eb-70a45cb019c1-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"3ee1d970-f295-46eb-91eb-70a45cb019c1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:20:11 crc 
kubenswrapper[4580]: I0112 13:20:11.498919 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3ee1d970-f295-46eb-91eb-70a45cb019c1-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3ee1d970-f295-46eb-91eb-70a45cb019c1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.499039 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3ee1d970-f295-46eb-91eb-70a45cb019c1-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"3ee1d970-f295-46eb-91eb-70a45cb019c1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.499443 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3ee1d970-f295-46eb-91eb-70a45cb019c1-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3ee1d970-f295-46eb-91eb-70a45cb019c1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.499487 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3ee1d970-f295-46eb-91eb-70a45cb019c1-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"3ee1d970-f295-46eb-91eb-70a45cb019c1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.499575 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3ee1d970-f295-46eb-91eb-70a45cb019c1-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"3ee1d970-f295-46eb-91eb-70a45cb019c1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.499872 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" 
(UniqueName: \"kubernetes.io/secret/3ee1d970-f295-46eb-91eb-70a45cb019c1-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"3ee1d970-f295-46eb-91eb-70a45cb019c1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.502310 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3ee1d970-f295-46eb-91eb-70a45cb019c1-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"3ee1d970-f295-46eb-91eb-70a45cb019c1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.502871 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3ee1d970-f295-46eb-91eb-70a45cb019c1-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"3ee1d970-f295-46eb-91eb-70a45cb019c1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.509943 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbf9n\" (UniqueName: \"kubernetes.io/projected/3ee1d970-f295-46eb-91eb-70a45cb019c1-kube-api-access-kbf9n\") pod \"rabbitmq-cell1-server-0\" (UID: \"3ee1d970-f295-46eb-91eb-70a45cb019c1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.515158 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"3ee1d970-f295-46eb-91eb-70a45cb019c1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:20:11 crc kubenswrapper[4580]: I0112 13:20:11.560740 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:20:12 crc kubenswrapper[4580]: I0112 13:20:12.314300 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 12 13:20:12 crc kubenswrapper[4580]: I0112 13:20:12.315293 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 12 13:20:12 crc kubenswrapper[4580]: I0112 13:20:12.317391 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 12 13:20:12 crc kubenswrapper[4580]: I0112 13:20:12.317433 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 12 13:20:12 crc kubenswrapper[4580]: I0112 13:20:12.317757 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 12 13:20:12 crc kubenswrapper[4580]: I0112 13:20:12.318696 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-8xfh6" Jan 12 13:20:12 crc kubenswrapper[4580]: I0112 13:20:12.322662 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 12 13:20:12 crc kubenswrapper[4580]: I0112 13:20:12.324410 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 12 13:20:12 crc kubenswrapper[4580]: I0112 13:20:12.407115 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"2ceae97e-0cf6-4019-90ba-931df3f6dbed\") " pod="openstack/openstack-galera-0" Jan 12 13:20:12 crc kubenswrapper[4580]: I0112 13:20:12.407156 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/2ceae97e-0cf6-4019-90ba-931df3f6dbed-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"2ceae97e-0cf6-4019-90ba-931df3f6dbed\") " pod="openstack/openstack-galera-0" Jan 12 13:20:12 crc kubenswrapper[4580]: I0112 13:20:12.407185 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2ceae97e-0cf6-4019-90ba-931df3f6dbed-config-data-default\") pod \"openstack-galera-0\" (UID: \"2ceae97e-0cf6-4019-90ba-931df3f6dbed\") " pod="openstack/openstack-galera-0" Jan 12 13:20:12 crc kubenswrapper[4580]: I0112 13:20:12.407211 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ceae97e-0cf6-4019-90ba-931df3f6dbed-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"2ceae97e-0cf6-4019-90ba-931df3f6dbed\") " pod="openstack/openstack-galera-0" Jan 12 13:20:12 crc kubenswrapper[4580]: I0112 13:20:12.407389 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftqbq\" (UniqueName: \"kubernetes.io/projected/2ceae97e-0cf6-4019-90ba-931df3f6dbed-kube-api-access-ftqbq\") pod \"openstack-galera-0\" (UID: \"2ceae97e-0cf6-4019-90ba-931df3f6dbed\") " pod="openstack/openstack-galera-0" Jan 12 13:20:12 crc kubenswrapper[4580]: I0112 13:20:12.407426 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2ceae97e-0cf6-4019-90ba-931df3f6dbed-config-data-generated\") pod \"openstack-galera-0\" (UID: \"2ceae97e-0cf6-4019-90ba-931df3f6dbed\") " pod="openstack/openstack-galera-0" Jan 12 13:20:12 crc kubenswrapper[4580]: I0112 13:20:12.407451 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/2ceae97e-0cf6-4019-90ba-931df3f6dbed-operator-scripts\") pod \"openstack-galera-0\" (UID: \"2ceae97e-0cf6-4019-90ba-931df3f6dbed\") " pod="openstack/openstack-galera-0" Jan 12 13:20:12 crc kubenswrapper[4580]: I0112 13:20:12.407639 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2ceae97e-0cf6-4019-90ba-931df3f6dbed-kolla-config\") pod \"openstack-galera-0\" (UID: \"2ceae97e-0cf6-4019-90ba-931df3f6dbed\") " pod="openstack/openstack-galera-0" Jan 12 13:20:12 crc kubenswrapper[4580]: I0112 13:20:12.508968 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"2ceae97e-0cf6-4019-90ba-931df3f6dbed\") " pod="openstack/openstack-galera-0" Jan 12 13:20:12 crc kubenswrapper[4580]: I0112 13:20:12.509009 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ceae97e-0cf6-4019-90ba-931df3f6dbed-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"2ceae97e-0cf6-4019-90ba-931df3f6dbed\") " pod="openstack/openstack-galera-0" Jan 12 13:20:12 crc kubenswrapper[4580]: I0112 13:20:12.509037 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2ceae97e-0cf6-4019-90ba-931df3f6dbed-config-data-default\") pod \"openstack-galera-0\" (UID: \"2ceae97e-0cf6-4019-90ba-931df3f6dbed\") " pod="openstack/openstack-galera-0" Jan 12 13:20:12 crc kubenswrapper[4580]: I0112 13:20:12.509054 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ceae97e-0cf6-4019-90ba-931df3f6dbed-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: 
\"2ceae97e-0cf6-4019-90ba-931df3f6dbed\") " pod="openstack/openstack-galera-0" Jan 12 13:20:12 crc kubenswrapper[4580]: I0112 13:20:12.509085 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftqbq\" (UniqueName: \"kubernetes.io/projected/2ceae97e-0cf6-4019-90ba-931df3f6dbed-kube-api-access-ftqbq\") pod \"openstack-galera-0\" (UID: \"2ceae97e-0cf6-4019-90ba-931df3f6dbed\") " pod="openstack/openstack-galera-0" Jan 12 13:20:12 crc kubenswrapper[4580]: I0112 13:20:12.509126 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2ceae97e-0cf6-4019-90ba-931df3f6dbed-config-data-generated\") pod \"openstack-galera-0\" (UID: \"2ceae97e-0cf6-4019-90ba-931df3f6dbed\") " pod="openstack/openstack-galera-0" Jan 12 13:20:12 crc kubenswrapper[4580]: I0112 13:20:12.509145 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ceae97e-0cf6-4019-90ba-931df3f6dbed-operator-scripts\") pod \"openstack-galera-0\" (UID: \"2ceae97e-0cf6-4019-90ba-931df3f6dbed\") " pod="openstack/openstack-galera-0" Jan 12 13:20:12 crc kubenswrapper[4580]: I0112 13:20:12.509171 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2ceae97e-0cf6-4019-90ba-931df3f6dbed-kolla-config\") pod \"openstack-galera-0\" (UID: \"2ceae97e-0cf6-4019-90ba-931df3f6dbed\") " pod="openstack/openstack-galera-0" Jan 12 13:20:12 crc kubenswrapper[4580]: I0112 13:20:12.509224 4580 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"2ceae97e-0cf6-4019-90ba-931df3f6dbed\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/openstack-galera-0" Jan 12 13:20:12 crc 
kubenswrapper[4580]: I0112 13:20:12.510020 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2ceae97e-0cf6-4019-90ba-931df3f6dbed-config-data-default\") pod \"openstack-galera-0\" (UID: \"2ceae97e-0cf6-4019-90ba-931df3f6dbed\") " pod="openstack/openstack-galera-0" Jan 12 13:20:12 crc kubenswrapper[4580]: I0112 13:20:12.510246 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2ceae97e-0cf6-4019-90ba-931df3f6dbed-kolla-config\") pod \"openstack-galera-0\" (UID: \"2ceae97e-0cf6-4019-90ba-931df3f6dbed\") " pod="openstack/openstack-galera-0" Jan 12 13:20:12 crc kubenswrapper[4580]: I0112 13:20:12.510345 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2ceae97e-0cf6-4019-90ba-931df3f6dbed-config-data-generated\") pod \"openstack-galera-0\" (UID: \"2ceae97e-0cf6-4019-90ba-931df3f6dbed\") " pod="openstack/openstack-galera-0" Jan 12 13:20:12 crc kubenswrapper[4580]: I0112 13:20:12.512261 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ceae97e-0cf6-4019-90ba-931df3f6dbed-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"2ceae97e-0cf6-4019-90ba-931df3f6dbed\") " pod="openstack/openstack-galera-0" Jan 12 13:20:12 crc kubenswrapper[4580]: I0112 13:20:12.512496 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ceae97e-0cf6-4019-90ba-931df3f6dbed-operator-scripts\") pod \"openstack-galera-0\" (UID: \"2ceae97e-0cf6-4019-90ba-931df3f6dbed\") " pod="openstack/openstack-galera-0" Jan 12 13:20:12 crc kubenswrapper[4580]: I0112 13:20:12.512557 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2ceae97e-0cf6-4019-90ba-931df3f6dbed-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"2ceae97e-0cf6-4019-90ba-931df3f6dbed\") " pod="openstack/openstack-galera-0" Jan 12 13:20:12 crc kubenswrapper[4580]: I0112 13:20:12.525360 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftqbq\" (UniqueName: \"kubernetes.io/projected/2ceae97e-0cf6-4019-90ba-931df3f6dbed-kube-api-access-ftqbq\") pod \"openstack-galera-0\" (UID: \"2ceae97e-0cf6-4019-90ba-931df3f6dbed\") " pod="openstack/openstack-galera-0" Jan 12 13:20:12 crc kubenswrapper[4580]: I0112 13:20:12.543046 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"2ceae97e-0cf6-4019-90ba-931df3f6dbed\") " pod="openstack/openstack-galera-0" Jan 12 13:20:12 crc kubenswrapper[4580]: I0112 13:20:12.637808 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 12 13:20:13 crc kubenswrapper[4580]: I0112 13:20:13.658474 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 12 13:20:13 crc kubenswrapper[4580]: I0112 13:20:13.660279 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 12 13:20:13 crc kubenswrapper[4580]: I0112 13:20:13.661860 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-xrbxp" Jan 12 13:20:13 crc kubenswrapper[4580]: I0112 13:20:13.663277 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 12 13:20:13 crc kubenswrapper[4580]: I0112 13:20:13.663456 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 12 13:20:13 crc kubenswrapper[4580]: I0112 13:20:13.663618 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 12 13:20:13 crc kubenswrapper[4580]: I0112 13:20:13.664503 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 12 13:20:13 crc kubenswrapper[4580]: I0112 13:20:13.823648 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/29452d40-93df-4c9f-9d79-70fbf3907de1-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"29452d40-93df-4c9f-9d79-70fbf3907de1\") " pod="openstack/openstack-cell1-galera-0" Jan 12 13:20:13 crc kubenswrapper[4580]: I0112 13:20:13.823711 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29452d40-93df-4c9f-9d79-70fbf3907de1-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"29452d40-93df-4c9f-9d79-70fbf3907de1\") " pod="openstack/openstack-cell1-galera-0" Jan 12 13:20:13 crc kubenswrapper[4580]: I0112 13:20:13.823732 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/29452d40-93df-4c9f-9d79-70fbf3907de1-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"29452d40-93df-4c9f-9d79-70fbf3907de1\") " pod="openstack/openstack-cell1-galera-0" Jan 12 13:20:13 crc kubenswrapper[4580]: I0112 13:20:13.823750 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwlwg\" (UniqueName: \"kubernetes.io/projected/29452d40-93df-4c9f-9d79-70fbf3907de1-kube-api-access-nwlwg\") pod \"openstack-cell1-galera-0\" (UID: \"29452d40-93df-4c9f-9d79-70fbf3907de1\") " pod="openstack/openstack-cell1-galera-0" Jan 12 13:20:13 crc kubenswrapper[4580]: I0112 13:20:13.823773 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/29452d40-93df-4c9f-9d79-70fbf3907de1-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"29452d40-93df-4c9f-9d79-70fbf3907de1\") " pod="openstack/openstack-cell1-galera-0" Jan 12 13:20:13 crc kubenswrapper[4580]: I0112 13:20:13.823797 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/29452d40-93df-4c9f-9d79-70fbf3907de1-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"29452d40-93df-4c9f-9d79-70fbf3907de1\") " pod="openstack/openstack-cell1-galera-0" Jan 12 13:20:13 crc kubenswrapper[4580]: I0112 13:20:13.823810 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29452d40-93df-4c9f-9d79-70fbf3907de1-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"29452d40-93df-4c9f-9d79-70fbf3907de1\") " pod="openstack/openstack-cell1-galera-0" Jan 12 13:20:13 crc kubenswrapper[4580]: I0112 13:20:13.823850 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"29452d40-93df-4c9f-9d79-70fbf3907de1\") " pod="openstack/openstack-cell1-galera-0" Jan 12 13:20:13 crc kubenswrapper[4580]: I0112 13:20:13.925168 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"29452d40-93df-4c9f-9d79-70fbf3907de1\") " pod="openstack/openstack-cell1-galera-0" Jan 12 13:20:13 crc kubenswrapper[4580]: I0112 13:20:13.925225 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/29452d40-93df-4c9f-9d79-70fbf3907de1-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"29452d40-93df-4c9f-9d79-70fbf3907de1\") " pod="openstack/openstack-cell1-galera-0" Jan 12 13:20:13 crc kubenswrapper[4580]: I0112 13:20:13.925293 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29452d40-93df-4c9f-9d79-70fbf3907de1-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"29452d40-93df-4c9f-9d79-70fbf3907de1\") " pod="openstack/openstack-cell1-galera-0" Jan 12 13:20:13 crc kubenswrapper[4580]: I0112 13:20:13.925309 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/29452d40-93df-4c9f-9d79-70fbf3907de1-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"29452d40-93df-4c9f-9d79-70fbf3907de1\") " pod="openstack/openstack-cell1-galera-0" Jan 12 13:20:13 crc kubenswrapper[4580]: I0112 13:20:13.925329 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwlwg\" (UniqueName: 
\"kubernetes.io/projected/29452d40-93df-4c9f-9d79-70fbf3907de1-kube-api-access-nwlwg\") pod \"openstack-cell1-galera-0\" (UID: \"29452d40-93df-4c9f-9d79-70fbf3907de1\") " pod="openstack/openstack-cell1-galera-0" Jan 12 13:20:13 crc kubenswrapper[4580]: I0112 13:20:13.925360 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/29452d40-93df-4c9f-9d79-70fbf3907de1-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"29452d40-93df-4c9f-9d79-70fbf3907de1\") " pod="openstack/openstack-cell1-galera-0" Jan 12 13:20:13 crc kubenswrapper[4580]: I0112 13:20:13.925392 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/29452d40-93df-4c9f-9d79-70fbf3907de1-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"29452d40-93df-4c9f-9d79-70fbf3907de1\") " pod="openstack/openstack-cell1-galera-0" Jan 12 13:20:13 crc kubenswrapper[4580]: I0112 13:20:13.925409 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29452d40-93df-4c9f-9d79-70fbf3907de1-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"29452d40-93df-4c9f-9d79-70fbf3907de1\") " pod="openstack/openstack-cell1-galera-0" Jan 12 13:20:13 crc kubenswrapper[4580]: I0112 13:20:13.925464 4580 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"29452d40-93df-4c9f-9d79-70fbf3907de1\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/openstack-cell1-galera-0" Jan 12 13:20:13 crc kubenswrapper[4580]: I0112 13:20:13.926729 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/29452d40-93df-4c9f-9d79-70fbf3907de1-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"29452d40-93df-4c9f-9d79-70fbf3907de1\") " pod="openstack/openstack-cell1-galera-0" Jan 12 13:20:13 crc kubenswrapper[4580]: I0112 13:20:13.926941 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29452d40-93df-4c9f-9d79-70fbf3907de1-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"29452d40-93df-4c9f-9d79-70fbf3907de1\") " pod="openstack/openstack-cell1-galera-0" Jan 12 13:20:13 crc kubenswrapper[4580]: I0112 13:20:13.926945 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/29452d40-93df-4c9f-9d79-70fbf3907de1-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"29452d40-93df-4c9f-9d79-70fbf3907de1\") " pod="openstack/openstack-cell1-galera-0" Jan 12 13:20:13 crc kubenswrapper[4580]: I0112 13:20:13.927411 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/29452d40-93df-4c9f-9d79-70fbf3907de1-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"29452d40-93df-4c9f-9d79-70fbf3907de1\") " pod="openstack/openstack-cell1-galera-0" Jan 12 13:20:13 crc kubenswrapper[4580]: I0112 13:20:13.931340 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/29452d40-93df-4c9f-9d79-70fbf3907de1-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"29452d40-93df-4c9f-9d79-70fbf3907de1\") " pod="openstack/openstack-cell1-galera-0" Jan 12 13:20:13 crc kubenswrapper[4580]: I0112 13:20:13.931422 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29452d40-93df-4c9f-9d79-70fbf3907de1-combined-ca-bundle\") pod 
\"openstack-cell1-galera-0\" (UID: \"29452d40-93df-4c9f-9d79-70fbf3907de1\") " pod="openstack/openstack-cell1-galera-0" Jan 12 13:20:13 crc kubenswrapper[4580]: I0112 13:20:13.942807 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwlwg\" (UniqueName: \"kubernetes.io/projected/29452d40-93df-4c9f-9d79-70fbf3907de1-kube-api-access-nwlwg\") pod \"openstack-cell1-galera-0\" (UID: \"29452d40-93df-4c9f-9d79-70fbf3907de1\") " pod="openstack/openstack-cell1-galera-0" Jan 12 13:20:13 crc kubenswrapper[4580]: I0112 13:20:13.944890 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"29452d40-93df-4c9f-9d79-70fbf3907de1\") " pod="openstack/openstack-cell1-galera-0" Jan 12 13:20:13 crc kubenswrapper[4580]: I0112 13:20:13.972713 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 12 13:20:13 crc kubenswrapper[4580]: I0112 13:20:13.978879 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 12 13:20:13 crc kubenswrapper[4580]: I0112 13:20:13.979658 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 12 13:20:13 crc kubenswrapper[4580]: I0112 13:20:13.981653 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-nr58f" Jan 12 13:20:13 crc kubenswrapper[4580]: I0112 13:20:13.981802 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 12 13:20:13 crc kubenswrapper[4580]: I0112 13:20:13.981924 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 12 13:20:13 crc kubenswrapper[4580]: I0112 13:20:13.993280 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 12 13:20:14 crc kubenswrapper[4580]: I0112 13:20:14.127265 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjw2z\" (UniqueName: \"kubernetes.io/projected/0c2b68c0-cf75-4b38-b7f5-c58b9f52e818-kube-api-access-pjw2z\") pod \"memcached-0\" (UID: \"0c2b68c0-cf75-4b38-b7f5-c58b9f52e818\") " pod="openstack/memcached-0" Jan 12 13:20:14 crc kubenswrapper[4580]: I0112 13:20:14.127305 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c2b68c0-cf75-4b38-b7f5-c58b9f52e818-combined-ca-bundle\") pod \"memcached-0\" (UID: \"0c2b68c0-cf75-4b38-b7f5-c58b9f52e818\") " pod="openstack/memcached-0" Jan 12 13:20:14 crc kubenswrapper[4580]: I0112 13:20:14.127341 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c2b68c0-cf75-4b38-b7f5-c58b9f52e818-memcached-tls-certs\") pod \"memcached-0\" (UID: \"0c2b68c0-cf75-4b38-b7f5-c58b9f52e818\") " pod="openstack/memcached-0" Jan 12 13:20:14 crc kubenswrapper[4580]: I0112 13:20:14.127365 4580 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0c2b68c0-cf75-4b38-b7f5-c58b9f52e818-config-data\") pod \"memcached-0\" (UID: \"0c2b68c0-cf75-4b38-b7f5-c58b9f52e818\") " pod="openstack/memcached-0" Jan 12 13:20:14 crc kubenswrapper[4580]: I0112 13:20:14.127581 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0c2b68c0-cf75-4b38-b7f5-c58b9f52e818-kolla-config\") pod \"memcached-0\" (UID: \"0c2b68c0-cf75-4b38-b7f5-c58b9f52e818\") " pod="openstack/memcached-0" Jan 12 13:20:14 crc kubenswrapper[4580]: I0112 13:20:14.228659 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0c2b68c0-cf75-4b38-b7f5-c58b9f52e818-kolla-config\") pod \"memcached-0\" (UID: \"0c2b68c0-cf75-4b38-b7f5-c58b9f52e818\") " pod="openstack/memcached-0" Jan 12 13:20:14 crc kubenswrapper[4580]: I0112 13:20:14.228722 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjw2z\" (UniqueName: \"kubernetes.io/projected/0c2b68c0-cf75-4b38-b7f5-c58b9f52e818-kube-api-access-pjw2z\") pod \"memcached-0\" (UID: \"0c2b68c0-cf75-4b38-b7f5-c58b9f52e818\") " pod="openstack/memcached-0" Jan 12 13:20:14 crc kubenswrapper[4580]: I0112 13:20:14.228764 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c2b68c0-cf75-4b38-b7f5-c58b9f52e818-combined-ca-bundle\") pod \"memcached-0\" (UID: \"0c2b68c0-cf75-4b38-b7f5-c58b9f52e818\") " pod="openstack/memcached-0" Jan 12 13:20:14 crc kubenswrapper[4580]: I0112 13:20:14.228800 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c2b68c0-cf75-4b38-b7f5-c58b9f52e818-memcached-tls-certs\") pod 
\"memcached-0\" (UID: \"0c2b68c0-cf75-4b38-b7f5-c58b9f52e818\") " pod="openstack/memcached-0" Jan 12 13:20:14 crc kubenswrapper[4580]: I0112 13:20:14.228823 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0c2b68c0-cf75-4b38-b7f5-c58b9f52e818-config-data\") pod \"memcached-0\" (UID: \"0c2b68c0-cf75-4b38-b7f5-c58b9f52e818\") " pod="openstack/memcached-0" Jan 12 13:20:14 crc kubenswrapper[4580]: I0112 13:20:14.229349 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0c2b68c0-cf75-4b38-b7f5-c58b9f52e818-kolla-config\") pod \"memcached-0\" (UID: \"0c2b68c0-cf75-4b38-b7f5-c58b9f52e818\") " pod="openstack/memcached-0" Jan 12 13:20:14 crc kubenswrapper[4580]: I0112 13:20:14.232355 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c2b68c0-cf75-4b38-b7f5-c58b9f52e818-memcached-tls-certs\") pod \"memcached-0\" (UID: \"0c2b68c0-cf75-4b38-b7f5-c58b9f52e818\") " pod="openstack/memcached-0" Jan 12 13:20:14 crc kubenswrapper[4580]: I0112 13:20:14.232784 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c2b68c0-cf75-4b38-b7f5-c58b9f52e818-combined-ca-bundle\") pod \"memcached-0\" (UID: \"0c2b68c0-cf75-4b38-b7f5-c58b9f52e818\") " pod="openstack/memcached-0" Jan 12 13:20:14 crc kubenswrapper[4580]: I0112 13:20:14.252125 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjw2z\" (UniqueName: \"kubernetes.io/projected/0c2b68c0-cf75-4b38-b7f5-c58b9f52e818-kube-api-access-pjw2z\") pod \"memcached-0\" (UID: \"0c2b68c0-cf75-4b38-b7f5-c58b9f52e818\") " pod="openstack/memcached-0" Jan 12 13:20:14 crc kubenswrapper[4580]: I0112 13:20:14.252557 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/0c2b68c0-cf75-4b38-b7f5-c58b9f52e818-config-data\") pod \"memcached-0\" (UID: \"0c2b68c0-cf75-4b38-b7f5-c58b9f52e818\") " pod="openstack/memcached-0" Jan 12 13:20:14 crc kubenswrapper[4580]: I0112 13:20:14.291698 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 12 13:20:15 crc kubenswrapper[4580]: I0112 13:20:15.668390 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 12 13:20:15 crc kubenswrapper[4580]: I0112 13:20:15.669423 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 12 13:20:15 crc kubenswrapper[4580]: I0112 13:20:15.671591 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-sbcg7" Jan 12 13:20:15 crc kubenswrapper[4580]: I0112 13:20:15.681207 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 12 13:20:15 crc kubenswrapper[4580]: I0112 13:20:15.749186 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6rmg\" (UniqueName: \"kubernetes.io/projected/2add073b-c55e-4910-a310-4ad61f763ed9-kube-api-access-x6rmg\") pod \"kube-state-metrics-0\" (UID: \"2add073b-c55e-4910-a310-4ad61f763ed9\") " pod="openstack/kube-state-metrics-0" Jan 12 13:20:15 crc kubenswrapper[4580]: I0112 13:20:15.850681 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6rmg\" (UniqueName: \"kubernetes.io/projected/2add073b-c55e-4910-a310-4ad61f763ed9-kube-api-access-x6rmg\") pod \"kube-state-metrics-0\" (UID: \"2add073b-c55e-4910-a310-4ad61f763ed9\") " pod="openstack/kube-state-metrics-0" Jan 12 13:20:15 crc kubenswrapper[4580]: I0112 13:20:15.864438 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6rmg\" 
(UniqueName: \"kubernetes.io/projected/2add073b-c55e-4910-a310-4ad61f763ed9-kube-api-access-x6rmg\") pod \"kube-state-metrics-0\" (UID: \"2add073b-c55e-4910-a310-4ad61f763ed9\") " pod="openstack/kube-state-metrics-0" Jan 12 13:20:15 crc kubenswrapper[4580]: I0112 13:20:15.988560 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 12 13:20:16 crc kubenswrapper[4580]: I0112 13:20:16.949693 4580 patch_prober.go:28] interesting pod/machine-config-daemon-hdz6l container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 12 13:20:16 crc kubenswrapper[4580]: I0112 13:20:16.949907 4580 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 12 13:20:18 crc kubenswrapper[4580]: I0112 13:20:18.651641 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 12 13:20:18 crc kubenswrapper[4580]: I0112 13:20:18.948875 4580 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 12 13:20:19 crc kubenswrapper[4580]: I0112 13:20:19.411288 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 12 13:20:19 crc kubenswrapper[4580]: I0112 13:20:19.415140 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-cpjhh"] Jan 12 13:20:19 crc kubenswrapper[4580]: W0112 13:20:19.415572 4580 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20148d96_39b6_4278_9d29_91874ad352a0.slice/crio-50f5b488f68dbc1636d0b5fb334646b3801bd70073fe6cafe1a627b2deb23c59 WatchSource:0}: Error finding container 50f5b488f68dbc1636d0b5fb334646b3801bd70073fe6cafe1a627b2deb23c59: Status 404 returned error can't find the container with id 50f5b488f68dbc1636d0b5fb334646b3801bd70073fe6cafe1a627b2deb23c59 Jan 12 13:20:19 crc kubenswrapper[4580]: W0112 13:20:19.417034 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb55248fd_efe4_466d_b3a3_fa9177008120.slice/crio-bf286ce80311e4269a21fd4a99481b618091a044f937c6471703f8600b5616af WatchSource:0}: Error finding container bf286ce80311e4269a21fd4a99481b618091a044f937c6471703f8600b5616af: Status 404 returned error can't find the container with id bf286ce80311e4269a21fd4a99481b618091a044f937c6471703f8600b5616af Jan 12 13:20:19 crc kubenswrapper[4580]: I0112 13:20:19.479082 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 12 13:20:19 crc kubenswrapper[4580]: I0112 13:20:19.483802 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 12 13:20:19 crc kubenswrapper[4580]: I0112 13:20:19.487839 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 12 13:20:19 crc kubenswrapper[4580]: I0112 13:20:19.491526 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-wqfbd"] Jan 12 13:20:19 crc kubenswrapper[4580]: W0112 13:20:19.496166 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2add073b_c55e_4910_a310_4ad61f763ed9.slice/crio-a1fe86513259d17268ad2a209b35ef779d3bc0432776ca9d0e79249685d24ec0 WatchSource:0}: Error finding container a1fe86513259d17268ad2a209b35ef779d3bc0432776ca9d0e79249685d24ec0: Status 
404 returned error can't find the container with id a1fe86513259d17268ad2a209b35ef779d3bc0432776ca9d0e79249685d24ec0 Jan 12 13:20:19 crc kubenswrapper[4580]: I0112 13:20:19.600754 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 12 13:20:19 crc kubenswrapper[4580]: W0112 13:20:19.662199 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29452d40_93df_4c9f_9d79_70fbf3907de1.slice/crio-098648d81484e6cfdfc949d5fc255a824e9db1ae46b51dde6c44acb1726233f3 WatchSource:0}: Error finding container 098648d81484e6cfdfc949d5fc255a824e9db1ae46b51dde6c44acb1726233f3: Status 404 returned error can't find the container with id 098648d81484e6cfdfc949d5fc255a824e9db1ae46b51dde6c44acb1726233f3 Jan 12 13:20:19 crc kubenswrapper[4580]: I0112 13:20:19.858129 4580 generic.go:334] "Generic (PLEG): container finished" podID="43e36388-f2bb-4ecb-802a-24f2b97d34b0" containerID="e5d71fa0e48db94a5701cebed02a2fbd81844c5486f14bc4c5db6188bf656506" exitCode=0 Jan 12 13:20:19 crc kubenswrapper[4580]: I0112 13:20:19.858188 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f854695bc-t7628" event={"ID":"43e36388-f2bb-4ecb-802a-24f2b97d34b0","Type":"ContainerDied","Data":"e5d71fa0e48db94a5701cebed02a2fbd81844c5486f14bc4c5db6188bf656506"} Jan 12 13:20:19 crc kubenswrapper[4580]: I0112 13:20:19.859571 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"20148d96-39b6-4278-9d29-91874ad352a0","Type":"ContainerStarted","Data":"50f5b488f68dbc1636d0b5fb334646b3801bd70073fe6cafe1a627b2deb23c59"} Jan 12 13:20:19 crc kubenswrapper[4580]: I0112 13:20:19.861417 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"0c2b68c0-cf75-4b38-b7f5-c58b9f52e818","Type":"ContainerStarted","Data":"d4b185cf6553340b9e059d75b006a6816dfe5f97f16e5fead11df56aa4c10b4e"} Jan 12 
13:20:19 crc kubenswrapper[4580]: I0112 13:20:19.870389 4580 generic.go:334] "Generic (PLEG): container finished" podID="b55248fd-efe4-466d-b3a3-fa9177008120" containerID="6ff3fb20561a87f09c95a3357bdb78f1fc47a01674c1e0f26c67a4bfab678dc6" exitCode=0 Jan 12 13:20:19 crc kubenswrapper[4580]: I0112 13:20:19.870441 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-cpjhh" event={"ID":"b55248fd-efe4-466d-b3a3-fa9177008120","Type":"ContainerDied","Data":"6ff3fb20561a87f09c95a3357bdb78f1fc47a01674c1e0f26c67a4bfab678dc6"} Jan 12 13:20:19 crc kubenswrapper[4580]: I0112 13:20:19.870457 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-cpjhh" event={"ID":"b55248fd-efe4-466d-b3a3-fa9177008120","Type":"ContainerStarted","Data":"bf286ce80311e4269a21fd4a99481b618091a044f937c6471703f8600b5616af"} Jan 12 13:20:19 crc kubenswrapper[4580]: I0112 13:20:19.883666 4580 generic.go:334] "Generic (PLEG): container finished" podID="37118f18-81ad-44c8-bc5e-8e8f3333193f" containerID="71d2ef749fab7f73bf46d7aa5eadcb048d07259fab3745842cf6522914ec67f7" exitCode=0 Jan 12 13:20:19 crc kubenswrapper[4580]: I0112 13:20:19.883787 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-wqfbd" event={"ID":"37118f18-81ad-44c8-bc5e-8e8f3333193f","Type":"ContainerDied","Data":"71d2ef749fab7f73bf46d7aa5eadcb048d07259fab3745842cf6522914ec67f7"} Jan 12 13:20:19 crc kubenswrapper[4580]: I0112 13:20:19.883818 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-wqfbd" event={"ID":"37118f18-81ad-44c8-bc5e-8e8f3333193f","Type":"ContainerStarted","Data":"eeb7c891d471eb7c2b104593b119d19857ea38d97ca5a291e1fdaedcacb8ae15"} Jan 12 13:20:19 crc kubenswrapper[4580]: I0112 13:20:19.902891 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"2ceae97e-0cf6-4019-90ba-931df3f6dbed","Type":"ContainerStarted","Data":"b86f5996640d6a6343067cdde40510665c159b8a3472a411ad93ecc86b347c71"} Jan 12 13:20:19 crc kubenswrapper[4580]: I0112 13:20:19.943723 4580 generic.go:334] "Generic (PLEG): container finished" podID="210f1093-7755-4bc6-ab92-d7f769b8b5d1" containerID="d3388fa910ec097020a6c109f118ab55f6c32d1d314d9cf9942d93cc39e43a53" exitCode=0 Jan 12 13:20:19 crc kubenswrapper[4580]: I0112 13:20:19.943797 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bb9d8bd9-djs49" event={"ID":"210f1093-7755-4bc6-ab92-d7f769b8b5d1","Type":"ContainerDied","Data":"d3388fa910ec097020a6c109f118ab55f6c32d1d314d9cf9942d93cc39e43a53"} Jan 12 13:20:19 crc kubenswrapper[4580]: I0112 13:20:19.950464 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2add073b-c55e-4910-a310-4ad61f763ed9","Type":"ContainerStarted","Data":"a1fe86513259d17268ad2a209b35ef779d3bc0432776ca9d0e79249685d24ec0"} Jan 12 13:20:19 crc kubenswrapper[4580]: I0112 13:20:19.951395 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"29452d40-93df-4c9f-9d79-70fbf3907de1","Type":"ContainerStarted","Data":"098648d81484e6cfdfc949d5fc255a824e9db1ae46b51dde6c44acb1726233f3"} Jan 12 13:20:19 crc kubenswrapper[4580]: I0112 13:20:19.952131 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3ee1d970-f295-46eb-91eb-70a45cb019c1","Type":"ContainerStarted","Data":"3ee7ccc08d6d3f74a64e0ea4b5c6c8b94eea1d3df6aed27a38ad313194c05745"} Jan 12 13:20:20 crc kubenswrapper[4580]: I0112 13:20:20.125908 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-tbpzb"] Jan 12 13:20:20 crc kubenswrapper[4580]: I0112 13:20:20.126846 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-tbpzb" Jan 12 13:20:20 crc kubenswrapper[4580]: I0112 13:20:20.128592 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-z6bxz" Jan 12 13:20:20 crc kubenswrapper[4580]: I0112 13:20:20.130452 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 12 13:20:20 crc kubenswrapper[4580]: I0112 13:20:20.130688 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 12 13:20:20 crc kubenswrapper[4580]: I0112 13:20:20.132382 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-tbpzb"] Jan 12 13:20:20 crc kubenswrapper[4580]: I0112 13:20:20.179061 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-66wld"] Jan 12 13:20:20 crc kubenswrapper[4580]: I0112 13:20:20.180725 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-66wld" Jan 12 13:20:20 crc kubenswrapper[4580]: I0112 13:20:20.189139 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-66wld"] Jan 12 13:20:20 crc kubenswrapper[4580]: I0112 13:20:20.211798 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/29dabf99-ffd5-4d31-b9e5-b10e192f239d-scripts\") pod \"ovn-controller-tbpzb\" (UID: \"29dabf99-ffd5-4d31-b9e5-b10e192f239d\") " pod="openstack/ovn-controller-tbpzb" Jan 12 13:20:20 crc kubenswrapper[4580]: I0112 13:20:20.211841 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/29dabf99-ffd5-4d31-b9e5-b10e192f239d-var-run-ovn\") pod \"ovn-controller-tbpzb\" (UID: \"29dabf99-ffd5-4d31-b9e5-b10e192f239d\") " pod="openstack/ovn-controller-tbpzb" Jan 12 13:20:20 crc kubenswrapper[4580]: I0112 13:20:20.211861 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29dabf99-ffd5-4d31-b9e5-b10e192f239d-combined-ca-bundle\") pod \"ovn-controller-tbpzb\" (UID: \"29dabf99-ffd5-4d31-b9e5-b10e192f239d\") " pod="openstack/ovn-controller-tbpzb" Jan 12 13:20:20 crc kubenswrapper[4580]: I0112 13:20:20.211911 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/29dabf99-ffd5-4d31-b9e5-b10e192f239d-var-run\") pod \"ovn-controller-tbpzb\" (UID: \"29dabf99-ffd5-4d31-b9e5-b10e192f239d\") " pod="openstack/ovn-controller-tbpzb" Jan 12 13:20:20 crc kubenswrapper[4580]: I0112 13:20:20.211962 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/29dabf99-ffd5-4d31-b9e5-b10e192f239d-ovn-controller-tls-certs\") pod \"ovn-controller-tbpzb\" (UID: \"29dabf99-ffd5-4d31-b9e5-b10e192f239d\") " pod="openstack/ovn-controller-tbpzb" Jan 12 13:20:20 crc kubenswrapper[4580]: I0112 13:20:20.211990 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hk2d9\" (UniqueName: \"kubernetes.io/projected/29dabf99-ffd5-4d31-b9e5-b10e192f239d-kube-api-access-hk2d9\") pod \"ovn-controller-tbpzb\" (UID: \"29dabf99-ffd5-4d31-b9e5-b10e192f239d\") " pod="openstack/ovn-controller-tbpzb" Jan 12 13:20:20 crc kubenswrapper[4580]: I0112 13:20:20.212025 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/29dabf99-ffd5-4d31-b9e5-b10e192f239d-var-log-ovn\") pod \"ovn-controller-tbpzb\" (UID: \"29dabf99-ffd5-4d31-b9e5-b10e192f239d\") " pod="openstack/ovn-controller-tbpzb" Jan 12 13:20:20 crc kubenswrapper[4580]: I0112 13:20:20.277434 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-t7628" Jan 12 13:20:20 crc kubenswrapper[4580]: I0112 13:20:20.313146 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b20197ec-909c-4343-a0ed-e99b88ea6f83-scripts\") pod \"ovn-controller-ovs-66wld\" (UID: \"b20197ec-909c-4343-a0ed-e99b88ea6f83\") " pod="openstack/ovn-controller-ovs-66wld" Jan 12 13:20:20 crc kubenswrapper[4580]: I0112 13:20:20.313184 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/b20197ec-909c-4343-a0ed-e99b88ea6f83-etc-ovs\") pod \"ovn-controller-ovs-66wld\" (UID: \"b20197ec-909c-4343-a0ed-e99b88ea6f83\") " pod="openstack/ovn-controller-ovs-66wld" Jan 12 13:20:20 crc kubenswrapper[4580]: I0112 13:20:20.313199 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b20197ec-909c-4343-a0ed-e99b88ea6f83-var-log\") pod \"ovn-controller-ovs-66wld\" (UID: \"b20197ec-909c-4343-a0ed-e99b88ea6f83\") " pod="openstack/ovn-controller-ovs-66wld" Jan 12 13:20:20 crc kubenswrapper[4580]: I0112 13:20:20.313219 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/29dabf99-ffd5-4d31-b9e5-b10e192f239d-scripts\") pod \"ovn-controller-tbpzb\" (UID: \"29dabf99-ffd5-4d31-b9e5-b10e192f239d\") " pod="openstack/ovn-controller-tbpzb" Jan 12 13:20:20 crc kubenswrapper[4580]: I0112 13:20:20.313240 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/29dabf99-ffd5-4d31-b9e5-b10e192f239d-var-run-ovn\") pod \"ovn-controller-tbpzb\" (UID: \"29dabf99-ffd5-4d31-b9e5-b10e192f239d\") " pod="openstack/ovn-controller-tbpzb" Jan 12 13:20:20 crc kubenswrapper[4580]: 
I0112 13:20:20.313256 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29dabf99-ffd5-4d31-b9e5-b10e192f239d-combined-ca-bundle\") pod \"ovn-controller-tbpzb\" (UID: \"29dabf99-ffd5-4d31-b9e5-b10e192f239d\") " pod="openstack/ovn-controller-tbpzb" Jan 12 13:20:20 crc kubenswrapper[4580]: I0112 13:20:20.313278 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/29dabf99-ffd5-4d31-b9e5-b10e192f239d-var-run\") pod \"ovn-controller-tbpzb\" (UID: \"29dabf99-ffd5-4d31-b9e5-b10e192f239d\") " pod="openstack/ovn-controller-tbpzb" Jan 12 13:20:20 crc kubenswrapper[4580]: I0112 13:20:20.313298 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/b20197ec-909c-4343-a0ed-e99b88ea6f83-var-lib\") pod \"ovn-controller-ovs-66wld\" (UID: \"b20197ec-909c-4343-a0ed-e99b88ea6f83\") " pod="openstack/ovn-controller-ovs-66wld" Jan 12 13:20:20 crc kubenswrapper[4580]: I0112 13:20:20.313334 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b20197ec-909c-4343-a0ed-e99b88ea6f83-var-run\") pod \"ovn-controller-ovs-66wld\" (UID: \"b20197ec-909c-4343-a0ed-e99b88ea6f83\") " pod="openstack/ovn-controller-ovs-66wld" Jan 12 13:20:20 crc kubenswrapper[4580]: I0112 13:20:20.313348 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/29dabf99-ffd5-4d31-b9e5-b10e192f239d-ovn-controller-tls-certs\") pod \"ovn-controller-tbpzb\" (UID: \"29dabf99-ffd5-4d31-b9e5-b10e192f239d\") " pod="openstack/ovn-controller-tbpzb" Jan 12 13:20:20 crc kubenswrapper[4580]: I0112 13:20:20.313369 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-hk2d9\" (UniqueName: \"kubernetes.io/projected/29dabf99-ffd5-4d31-b9e5-b10e192f239d-kube-api-access-hk2d9\") pod \"ovn-controller-tbpzb\" (UID: \"29dabf99-ffd5-4d31-b9e5-b10e192f239d\") " pod="openstack/ovn-controller-tbpzb" Jan 12 13:20:20 crc kubenswrapper[4580]: I0112 13:20:20.313384 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/29dabf99-ffd5-4d31-b9e5-b10e192f239d-var-log-ovn\") pod \"ovn-controller-tbpzb\" (UID: \"29dabf99-ffd5-4d31-b9e5-b10e192f239d\") " pod="openstack/ovn-controller-tbpzb" Jan 12 13:20:20 crc kubenswrapper[4580]: I0112 13:20:20.313407 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5895t\" (UniqueName: \"kubernetes.io/projected/b20197ec-909c-4343-a0ed-e99b88ea6f83-kube-api-access-5895t\") pod \"ovn-controller-ovs-66wld\" (UID: \"b20197ec-909c-4343-a0ed-e99b88ea6f83\") " pod="openstack/ovn-controller-ovs-66wld" Jan 12 13:20:20 crc kubenswrapper[4580]: I0112 13:20:20.317263 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/29dabf99-ffd5-4d31-b9e5-b10e192f239d-scripts\") pod \"ovn-controller-tbpzb\" (UID: \"29dabf99-ffd5-4d31-b9e5-b10e192f239d\") " pod="openstack/ovn-controller-tbpzb" Jan 12 13:20:20 crc kubenswrapper[4580]: I0112 13:20:20.317566 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/29dabf99-ffd5-4d31-b9e5-b10e192f239d-var-run-ovn\") pod \"ovn-controller-tbpzb\" (UID: \"29dabf99-ffd5-4d31-b9e5-b10e192f239d\") " pod="openstack/ovn-controller-tbpzb" Jan 12 13:20:20 crc kubenswrapper[4580]: I0112 13:20:20.318029 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/29dabf99-ffd5-4d31-b9e5-b10e192f239d-var-run\") pod 
\"ovn-controller-tbpzb\" (UID: \"29dabf99-ffd5-4d31-b9e5-b10e192f239d\") " pod="openstack/ovn-controller-tbpzb" Jan 12 13:20:20 crc kubenswrapper[4580]: I0112 13:20:20.318170 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/29dabf99-ffd5-4d31-b9e5-b10e192f239d-var-log-ovn\") pod \"ovn-controller-tbpzb\" (UID: \"29dabf99-ffd5-4d31-b9e5-b10e192f239d\") " pod="openstack/ovn-controller-tbpzb" Jan 12 13:20:20 crc kubenswrapper[4580]: I0112 13:20:20.330927 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/29dabf99-ffd5-4d31-b9e5-b10e192f239d-ovn-controller-tls-certs\") pod \"ovn-controller-tbpzb\" (UID: \"29dabf99-ffd5-4d31-b9e5-b10e192f239d\") " pod="openstack/ovn-controller-tbpzb" Jan 12 13:20:20 crc kubenswrapper[4580]: I0112 13:20:20.330954 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29dabf99-ffd5-4d31-b9e5-b10e192f239d-combined-ca-bundle\") pod \"ovn-controller-tbpzb\" (UID: \"29dabf99-ffd5-4d31-b9e5-b10e192f239d\") " pod="openstack/ovn-controller-tbpzb" Jan 12 13:20:20 crc kubenswrapper[4580]: I0112 13:20:20.334226 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hk2d9\" (UniqueName: \"kubernetes.io/projected/29dabf99-ffd5-4d31-b9e5-b10e192f239d-kube-api-access-hk2d9\") pod \"ovn-controller-tbpzb\" (UID: \"29dabf99-ffd5-4d31-b9e5-b10e192f239d\") " pod="openstack/ovn-controller-tbpzb" Jan 12 13:20:20 crc kubenswrapper[4580]: I0112 13:20:20.414809 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/43e36388-f2bb-4ecb-802a-24f2b97d34b0-dns-svc\") pod \"43e36388-f2bb-4ecb-802a-24f2b97d34b0\" (UID: \"43e36388-f2bb-4ecb-802a-24f2b97d34b0\") " Jan 12 13:20:20 crc kubenswrapper[4580]: I0112 
13:20:20.415149 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tf8g9\" (UniqueName: \"kubernetes.io/projected/43e36388-f2bb-4ecb-802a-24f2b97d34b0-kube-api-access-tf8g9\") pod \"43e36388-f2bb-4ecb-802a-24f2b97d34b0\" (UID: \"43e36388-f2bb-4ecb-802a-24f2b97d34b0\") " Jan 12 13:20:20 crc kubenswrapper[4580]: I0112 13:20:20.415179 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43e36388-f2bb-4ecb-802a-24f2b97d34b0-config\") pod \"43e36388-f2bb-4ecb-802a-24f2b97d34b0\" (UID: \"43e36388-f2bb-4ecb-802a-24f2b97d34b0\") " Jan 12 13:20:20 crc kubenswrapper[4580]: I0112 13:20:20.415404 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b20197ec-909c-4343-a0ed-e99b88ea6f83-var-run\") pod \"ovn-controller-ovs-66wld\" (UID: \"b20197ec-909c-4343-a0ed-e99b88ea6f83\") " pod="openstack/ovn-controller-ovs-66wld" Jan 12 13:20:20 crc kubenswrapper[4580]: I0112 13:20:20.415465 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5895t\" (UniqueName: \"kubernetes.io/projected/b20197ec-909c-4343-a0ed-e99b88ea6f83-kube-api-access-5895t\") pod \"ovn-controller-ovs-66wld\" (UID: \"b20197ec-909c-4343-a0ed-e99b88ea6f83\") " pod="openstack/ovn-controller-ovs-66wld" Jan 12 13:20:20 crc kubenswrapper[4580]: I0112 13:20:20.415505 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b20197ec-909c-4343-a0ed-e99b88ea6f83-scripts\") pod \"ovn-controller-ovs-66wld\" (UID: \"b20197ec-909c-4343-a0ed-e99b88ea6f83\") " pod="openstack/ovn-controller-ovs-66wld" Jan 12 13:20:20 crc kubenswrapper[4580]: I0112 13:20:20.415536 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: 
\"kubernetes.io/host-path/b20197ec-909c-4343-a0ed-e99b88ea6f83-etc-ovs\") pod \"ovn-controller-ovs-66wld\" (UID: \"b20197ec-909c-4343-a0ed-e99b88ea6f83\") " pod="openstack/ovn-controller-ovs-66wld" Jan 12 13:20:20 crc kubenswrapper[4580]: I0112 13:20:20.415554 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b20197ec-909c-4343-a0ed-e99b88ea6f83-var-log\") pod \"ovn-controller-ovs-66wld\" (UID: \"b20197ec-909c-4343-a0ed-e99b88ea6f83\") " pod="openstack/ovn-controller-ovs-66wld" Jan 12 13:20:20 crc kubenswrapper[4580]: I0112 13:20:20.415607 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/b20197ec-909c-4343-a0ed-e99b88ea6f83-var-lib\") pod \"ovn-controller-ovs-66wld\" (UID: \"b20197ec-909c-4343-a0ed-e99b88ea6f83\") " pod="openstack/ovn-controller-ovs-66wld" Jan 12 13:20:20 crc kubenswrapper[4580]: I0112 13:20:20.415690 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b20197ec-909c-4343-a0ed-e99b88ea6f83-var-run\") pod \"ovn-controller-ovs-66wld\" (UID: \"b20197ec-909c-4343-a0ed-e99b88ea6f83\") " pod="openstack/ovn-controller-ovs-66wld" Jan 12 13:20:20 crc kubenswrapper[4580]: I0112 13:20:20.415797 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b20197ec-909c-4343-a0ed-e99b88ea6f83-var-log\") pod \"ovn-controller-ovs-66wld\" (UID: \"b20197ec-909c-4343-a0ed-e99b88ea6f83\") " pod="openstack/ovn-controller-ovs-66wld" Jan 12 13:20:20 crc kubenswrapper[4580]: I0112 13:20:20.415852 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/b20197ec-909c-4343-a0ed-e99b88ea6f83-var-lib\") pod \"ovn-controller-ovs-66wld\" (UID: \"b20197ec-909c-4343-a0ed-e99b88ea6f83\") " 
pod="openstack/ovn-controller-ovs-66wld" Jan 12 13:20:20 crc kubenswrapper[4580]: I0112 13:20:20.416166 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/b20197ec-909c-4343-a0ed-e99b88ea6f83-etc-ovs\") pod \"ovn-controller-ovs-66wld\" (UID: \"b20197ec-909c-4343-a0ed-e99b88ea6f83\") " pod="openstack/ovn-controller-ovs-66wld" Jan 12 13:20:20 crc kubenswrapper[4580]: I0112 13:20:20.425293 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b20197ec-909c-4343-a0ed-e99b88ea6f83-scripts\") pod \"ovn-controller-ovs-66wld\" (UID: \"b20197ec-909c-4343-a0ed-e99b88ea6f83\") " pod="openstack/ovn-controller-ovs-66wld" Jan 12 13:20:20 crc kubenswrapper[4580]: I0112 13:20:20.427362 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43e36388-f2bb-4ecb-802a-24f2b97d34b0-kube-api-access-tf8g9" (OuterVolumeSpecName: "kube-api-access-tf8g9") pod "43e36388-f2bb-4ecb-802a-24f2b97d34b0" (UID: "43e36388-f2bb-4ecb-802a-24f2b97d34b0"). InnerVolumeSpecName "kube-api-access-tf8g9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:20:20 crc kubenswrapper[4580]: I0112 13:20:20.427903 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43e36388-f2bb-4ecb-802a-24f2b97d34b0-config" (OuterVolumeSpecName: "config") pod "43e36388-f2bb-4ecb-802a-24f2b97d34b0" (UID: "43e36388-f2bb-4ecb-802a-24f2b97d34b0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:20:20 crc kubenswrapper[4580]: I0112 13:20:20.428910 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5895t\" (UniqueName: \"kubernetes.io/projected/b20197ec-909c-4343-a0ed-e99b88ea6f83-kube-api-access-5895t\") pod \"ovn-controller-ovs-66wld\" (UID: \"b20197ec-909c-4343-a0ed-e99b88ea6f83\") " pod="openstack/ovn-controller-ovs-66wld" Jan 12 13:20:20 crc kubenswrapper[4580]: I0112 13:20:20.442452 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43e36388-f2bb-4ecb-802a-24f2b97d34b0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "43e36388-f2bb-4ecb-802a-24f2b97d34b0" (UID: "43e36388-f2bb-4ecb-802a-24f2b97d34b0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:20:20 crc kubenswrapper[4580]: I0112 13:20:20.481351 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-tbpzb" Jan 12 13:20:20 crc kubenswrapper[4580]: I0112 13:20:20.504931 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-66wld" Jan 12 13:20:20 crc kubenswrapper[4580]: I0112 13:20:20.517299 4580 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/43e36388-f2bb-4ecb-802a-24f2b97d34b0-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 12 13:20:20 crc kubenswrapper[4580]: I0112 13:20:20.517329 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tf8g9\" (UniqueName: \"kubernetes.io/projected/43e36388-f2bb-4ecb-802a-24f2b97d34b0-kube-api-access-tf8g9\") on node \"crc\" DevicePath \"\"" Jan 12 13:20:20 crc kubenswrapper[4580]: I0112 13:20:20.517345 4580 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43e36388-f2bb-4ecb-802a-24f2b97d34b0-config\") on node \"crc\" DevicePath \"\"" Jan 12 13:20:20 crc kubenswrapper[4580]: I0112 13:20:20.747690 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-djs49" Jan 12 13:20:20 crc kubenswrapper[4580]: I0112 13:20:20.940053 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210f1093-7755-4bc6-ab92-d7f769b8b5d1-config\") pod \"210f1093-7755-4bc6-ab92-d7f769b8b5d1\" (UID: \"210f1093-7755-4bc6-ab92-d7f769b8b5d1\") " Jan 12 13:20:20 crc kubenswrapper[4580]: I0112 13:20:20.940460 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4sr6b\" (UniqueName: \"kubernetes.io/projected/210f1093-7755-4bc6-ab92-d7f769b8b5d1-kube-api-access-4sr6b\") pod \"210f1093-7755-4bc6-ab92-d7f769b8b5d1\" (UID: \"210f1093-7755-4bc6-ab92-d7f769b8b5d1\") " Jan 12 13:20:20 crc kubenswrapper[4580]: I0112 13:20:20.944737 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210f1093-7755-4bc6-ab92-d7f769b8b5d1-kube-api-access-4sr6b" (OuterVolumeSpecName: 
"kube-api-access-4sr6b") pod "210f1093-7755-4bc6-ab92-d7f769b8b5d1" (UID: "210f1093-7755-4bc6-ab92-d7f769b8b5d1"). InnerVolumeSpecName "kube-api-access-4sr6b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:20:20 crc kubenswrapper[4580]: I0112 13:20:20.957534 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210f1093-7755-4bc6-ab92-d7f769b8b5d1-config" (OuterVolumeSpecName: "config") pod "210f1093-7755-4bc6-ab92-d7f769b8b5d1" (UID: "210f1093-7755-4bc6-ab92-d7f769b8b5d1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:20:20 crc kubenswrapper[4580]: I0112 13:20:20.968648 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-t7628" Jan 12 13:20:20 crc kubenswrapper[4580]: I0112 13:20:20.969762 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f854695bc-t7628" event={"ID":"43e36388-f2bb-4ecb-802a-24f2b97d34b0","Type":"ContainerDied","Data":"69c620af22a81beb84d78977480abb7b5cb6ff55a0db14976e60a3472a80dee4"} Jan 12 13:20:20 crc kubenswrapper[4580]: I0112 13:20:20.969803 4580 scope.go:117] "RemoveContainer" containerID="e5d71fa0e48db94a5701cebed02a2fbd81844c5486f14bc4c5db6188bf656506" Jan 12 13:20:20 crc kubenswrapper[4580]: I0112 13:20:20.977454 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bb9d8bd9-djs49" event={"ID":"210f1093-7755-4bc6-ab92-d7f769b8b5d1","Type":"ContainerDied","Data":"e240f2a095bcc8c9a7ec7a3e39ae893bf85203612b2351bfeab7ceeb265398c5"} Jan 12 13:20:20 crc kubenswrapper[4580]: I0112 13:20:20.977542 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-djs49"
Jan 12 13:20:21 crc kubenswrapper[4580]: I0112 13:20:21.007561 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-wqfbd" event={"ID":"37118f18-81ad-44c8-bc5e-8e8f3333193f","Type":"ContainerStarted","Data":"9d50de8f3aa1eeef62958447a67922d7ce365c9a23a98e70cc219d5c6d08cea6"}
Jan 12 13:20:21 crc kubenswrapper[4580]: I0112 13:20:21.007695 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-744ffd65bc-wqfbd"
Jan 12 13:20:21 crc kubenswrapper[4580]: I0112 13:20:21.013568 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-cpjhh" event={"ID":"b55248fd-efe4-466d-b3a3-fa9177008120","Type":"ContainerStarted","Data":"e694b563bc6bb154c5c7be3ef26bbe1834dc6a8a91a06d3d28a366d088abb061"}
Jan 12 13:20:21 crc kubenswrapper[4580]: I0112 13:20:21.013948 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-95f5f6995-cpjhh"
Jan 12 13:20:21 crc kubenswrapper[4580]: I0112 13:20:21.025881 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-t7628"]
Jan 12 13:20:21 crc kubenswrapper[4580]: I0112 13:20:21.027960 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-t7628"]
Jan 12 13:20:21 crc kubenswrapper[4580]: I0112 13:20:21.036038 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-95f5f6995-cpjhh" podStartSLOduration=11.036025972 podStartE2EDuration="11.036025972s" podCreationTimestamp="2026-01-12 13:20:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:20:21.032542267 +0000 UTC m=+820.076760956" watchObservedRunningTime="2026-01-12 13:20:21.036025972 +0000 UTC m=+820.080244661"
Jan 12 13:20:21 crc kubenswrapper[4580]: I0112 13:20:21.041550 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4sr6b\" (UniqueName: \"kubernetes.io/projected/210f1093-7755-4bc6-ab92-d7f769b8b5d1-kube-api-access-4sr6b\") on node \"crc\" DevicePath \"\""
Jan 12 13:20:21 crc kubenswrapper[4580]: I0112 13:20:21.041574 4580 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210f1093-7755-4bc6-ab92-d7f769b8b5d1-config\") on node \"crc\" DevicePath \"\""
Jan 12 13:20:21 crc kubenswrapper[4580]: I0112 13:20:21.065053 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-djs49"]
Jan 12 13:20:21 crc kubenswrapper[4580]: I0112 13:20:21.074386 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-djs49"]
Jan 12 13:20:21 crc kubenswrapper[4580]: I0112 13:20:21.076970 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-744ffd65bc-wqfbd" podStartSLOduration=12.076952518 podStartE2EDuration="12.076952518s" podCreationTimestamp="2026-01-12 13:20:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:20:21.063573209 +0000 UTC m=+820.107791899" watchObservedRunningTime="2026-01-12 13:20:21.076952518 +0000 UTC m=+820.121171208"
Jan 12 13:20:21 crc kubenswrapper[4580]: I0112 13:20:21.232677 4580 scope.go:117] "RemoveContainer" containerID="d3388fa910ec097020a6c109f118ab55f6c32d1d314d9cf9942d93cc39e43a53"
Jan 12 13:20:21 crc kubenswrapper[4580]: I0112 13:20:21.296863 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210f1093-7755-4bc6-ab92-d7f769b8b5d1" path="/var/lib/kubelet/pods/210f1093-7755-4bc6-ab92-d7f769b8b5d1/volumes"
Jan 12 13:20:21 crc kubenswrapper[4580]: I0112 13:20:21.299401 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43e36388-f2bb-4ecb-802a-24f2b97d34b0" path="/var/lib/kubelet/pods/43e36388-f2bb-4ecb-802a-24f2b97d34b0/volumes"
Jan 12 13:20:21 crc kubenswrapper[4580]: I0112 13:20:21.593809 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-tbpzb"]
Jan 12 13:20:21 crc kubenswrapper[4580]: I0112 13:20:21.611752 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-66wld"]
Jan 12 13:20:21 crc kubenswrapper[4580]: I0112 13:20:21.990744 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"]
Jan 12 13:20:21 crc kubenswrapper[4580]: E0112 13:20:21.993052 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="210f1093-7755-4bc6-ab92-d7f769b8b5d1" containerName="init"
Jan 12 13:20:21 crc kubenswrapper[4580]: I0112 13:20:21.993239 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="210f1093-7755-4bc6-ab92-d7f769b8b5d1" containerName="init"
Jan 12 13:20:21 crc kubenswrapper[4580]: E0112 13:20:21.993431 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43e36388-f2bb-4ecb-802a-24f2b97d34b0" containerName="init"
Jan 12 13:20:21 crc kubenswrapper[4580]: I0112 13:20:21.993495 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="43e36388-f2bb-4ecb-802a-24f2b97d34b0" containerName="init"
Jan 12 13:20:21 crc kubenswrapper[4580]: I0112 13:20:21.993848 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="43e36388-f2bb-4ecb-802a-24f2b97d34b0" containerName="init"
Jan 12 13:20:21 crc kubenswrapper[4580]: I0112 13:20:21.993915 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="210f1093-7755-4bc6-ab92-d7f769b8b5d1" containerName="init"
Jan 12 13:20:21 crc kubenswrapper[4580]: I0112 13:20:21.997349 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.000150 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.000274 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.000419 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.001386 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.001386 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-gpk2k"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.001573 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts"
Jan 12 13:20:22 crc kubenswrapper[4580]: W0112 13:20:22.097891 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29dabf99_ffd5_4d31_b9e5_b10e192f239d.slice/crio-20b4c05a87f05e6c9205aafbed444b1c56ff1da95cf6611590246f30dc3004d4 WatchSource:0}: Error finding container 20b4c05a87f05e6c9205aafbed444b1c56ff1da95cf6611590246f30dc3004d4: Status 404 returned error can't find the container with id 20b4c05a87f05e6c9205aafbed444b1c56ff1da95cf6611590246f30dc3004d4
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.161663 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/396e4fc0-cb2e-4543-b1ae-d61eec6a365a-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"396e4fc0-cb2e-4543-b1ae-d61eec6a365a\") " pod="openstack/ovsdbserver-nb-0"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.161721 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/396e4fc0-cb2e-4543-b1ae-d61eec6a365a-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"396e4fc0-cb2e-4543-b1ae-d61eec6a365a\") " pod="openstack/ovsdbserver-nb-0"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.161768 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/396e4fc0-cb2e-4543-b1ae-d61eec6a365a-config\") pod \"ovsdbserver-nb-0\" (UID: \"396e4fc0-cb2e-4543-b1ae-d61eec6a365a\") " pod="openstack/ovsdbserver-nb-0"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.161976 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/396e4fc0-cb2e-4543-b1ae-d61eec6a365a-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"396e4fc0-cb2e-4543-b1ae-d61eec6a365a\") " pod="openstack/ovsdbserver-nb-0"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.162026 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"396e4fc0-cb2e-4543-b1ae-d61eec6a365a\") " pod="openstack/ovsdbserver-nb-0"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.162049 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/396e4fc0-cb2e-4543-b1ae-d61eec6a365a-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"396e4fc0-cb2e-4543-b1ae-d61eec6a365a\") " pod="openstack/ovsdbserver-nb-0"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.162581 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/396e4fc0-cb2e-4543-b1ae-d61eec6a365a-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"396e4fc0-cb2e-4543-b1ae-d61eec6a365a\") " pod="openstack/ovsdbserver-nb-0"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.162612 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7rv6\" (UniqueName: \"kubernetes.io/projected/396e4fc0-cb2e-4543-b1ae-d61eec6a365a-kube-api-access-k7rv6\") pod \"ovsdbserver-nb-0\" (UID: \"396e4fc0-cb2e-4543-b1ae-d61eec6a365a\") " pod="openstack/ovsdbserver-nb-0"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.217812 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"]
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.218878 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.221512 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-zhmzp"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.221730 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.221887 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.222182 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.266155 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/396e4fc0-cb2e-4543-b1ae-d61eec6a365a-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"396e4fc0-cb2e-4543-b1ae-d61eec6a365a\") " pod="openstack/ovsdbserver-nb-0"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.266222 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/396e4fc0-cb2e-4543-b1ae-d61eec6a365a-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"396e4fc0-cb2e-4543-b1ae-d61eec6a365a\") " pod="openstack/ovsdbserver-nb-0"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.266278 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/396e4fc0-cb2e-4543-b1ae-d61eec6a365a-config\") pod \"ovsdbserver-nb-0\" (UID: \"396e4fc0-cb2e-4543-b1ae-d61eec6a365a\") " pod="openstack/ovsdbserver-nb-0"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.266369 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/396e4fc0-cb2e-4543-b1ae-d61eec6a365a-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"396e4fc0-cb2e-4543-b1ae-d61eec6a365a\") " pod="openstack/ovsdbserver-nb-0"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.266403 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"396e4fc0-cb2e-4543-b1ae-d61eec6a365a\") " pod="openstack/ovsdbserver-nb-0"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.266421 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/396e4fc0-cb2e-4543-b1ae-d61eec6a365a-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"396e4fc0-cb2e-4543-b1ae-d61eec6a365a\") " pod="openstack/ovsdbserver-nb-0"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.266466 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/396e4fc0-cb2e-4543-b1ae-d61eec6a365a-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"396e4fc0-cb2e-4543-b1ae-d61eec6a365a\") " pod="openstack/ovsdbserver-nb-0"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.266492 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7rv6\" (UniqueName: \"kubernetes.io/projected/396e4fc0-cb2e-4543-b1ae-d61eec6a365a-kube-api-access-k7rv6\") pod \"ovsdbserver-nb-0\" (UID: \"396e4fc0-cb2e-4543-b1ae-d61eec6a365a\") " pod="openstack/ovsdbserver-nb-0"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.266715 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/396e4fc0-cb2e-4543-b1ae-d61eec6a365a-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"396e4fc0-cb2e-4543-b1ae-d61eec6a365a\") " pod="openstack/ovsdbserver-nb-0"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.267132 4580 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"396e4fc0-cb2e-4543-b1ae-d61eec6a365a\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/ovsdbserver-nb-0"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.272466 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/396e4fc0-cb2e-4543-b1ae-d61eec6a365a-config\") pod \"ovsdbserver-nb-0\" (UID: \"396e4fc0-cb2e-4543-b1ae-d61eec6a365a\") " pod="openstack/ovsdbserver-nb-0"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.276000 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.277524 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/396e4fc0-cb2e-4543-b1ae-d61eec6a365a-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"396e4fc0-cb2e-4543-b1ae-d61eec6a365a\") " pod="openstack/ovsdbserver-nb-0"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.278154 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/396e4fc0-cb2e-4543-b1ae-d61eec6a365a-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"396e4fc0-cb2e-4543-b1ae-d61eec6a365a\") " pod="openstack/ovsdbserver-nb-0"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.279836 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/396e4fc0-cb2e-4543-b1ae-d61eec6a365a-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"396e4fc0-cb2e-4543-b1ae-d61eec6a365a\") " pod="openstack/ovsdbserver-nb-0"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.280371 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/396e4fc0-cb2e-4543-b1ae-d61eec6a365a-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"396e4fc0-cb2e-4543-b1ae-d61eec6a365a\") " pod="openstack/ovsdbserver-nb-0"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.288664 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7rv6\" (UniqueName: \"kubernetes.io/projected/396e4fc0-cb2e-4543-b1ae-d61eec6a365a-kube-api-access-k7rv6\") pod \"ovsdbserver-nb-0\" (UID: \"396e4fc0-cb2e-4543-b1ae-d61eec6a365a\") " pod="openstack/ovsdbserver-nb-0"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.306905 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-24mw4"]
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.308043 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-24mw4"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.308672 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"396e4fc0-cb2e-4543-b1ae-d61eec6a365a\") " pod="openstack/ovsdbserver-nb-0"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.311061 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.314318 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.318409 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-24mw4"]
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.367886 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6c58bf4-8891-45e6-9be6-a3176eefbc14-config\") pod \"ovsdbserver-sb-0\" (UID: \"a6c58bf4-8891-45e6-9be6-a3176eefbc14\") " pod="openstack/ovsdbserver-sb-0"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.367941 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a6c58bf4-8891-45e6-9be6-a3176eefbc14-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"a6c58bf4-8891-45e6-9be6-a3176eefbc14\") " pod="openstack/ovsdbserver-sb-0"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.368007 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6c58bf4-8891-45e6-9be6-a3176eefbc14-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"a6c58bf4-8891-45e6-9be6-a3176eefbc14\") " pod="openstack/ovsdbserver-sb-0"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.368034 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"a6c58bf4-8891-45e6-9be6-a3176eefbc14\") " pod="openstack/ovsdbserver-sb-0"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.368060 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2pp9\" (UniqueName: \"kubernetes.io/projected/a6c58bf4-8891-45e6-9be6-a3176eefbc14-kube-api-access-x2pp9\") pod \"ovsdbserver-sb-0\" (UID: \"a6c58bf4-8891-45e6-9be6-a3176eefbc14\") " pod="openstack/ovsdbserver-sb-0"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.368088 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6c58bf4-8891-45e6-9be6-a3176eefbc14-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a6c58bf4-8891-45e6-9be6-a3176eefbc14\") " pod="openstack/ovsdbserver-sb-0"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.368149 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a6c58bf4-8891-45e6-9be6-a3176eefbc14-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"a6c58bf4-8891-45e6-9be6-a3176eefbc14\") " pod="openstack/ovsdbserver-sb-0"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.368171 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6c58bf4-8891-45e6-9be6-a3176eefbc14-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a6c58bf4-8891-45e6-9be6-a3176eefbc14\") " pod="openstack/ovsdbserver-sb-0"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.469864 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2pp9\" (UniqueName: \"kubernetes.io/projected/a6c58bf4-8891-45e6-9be6-a3176eefbc14-kube-api-access-x2pp9\") pod \"ovsdbserver-sb-0\" (UID: \"a6c58bf4-8891-45e6-9be6-a3176eefbc14\") " pod="openstack/ovsdbserver-sb-0"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.469910 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6c58bf4-8891-45e6-9be6-a3176eefbc14-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a6c58bf4-8891-45e6-9be6-a3176eefbc14\") " pod="openstack/ovsdbserver-sb-0"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.469933 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6c58bf4-8891-45e6-9be6-a3176eefbc14-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a6c58bf4-8891-45e6-9be6-a3176eefbc14\") " pod="openstack/ovsdbserver-sb-0"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.469956 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvrh7\" (UniqueName: \"kubernetes.io/projected/8791a7b2-1c8a-4551-94d2-379d8a7aa153-kube-api-access-vvrh7\") pod \"ovn-controller-metrics-24mw4\" (UID: \"8791a7b2-1c8a-4551-94d2-379d8a7aa153\") " pod="openstack/ovn-controller-metrics-24mw4"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.469984 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/8791a7b2-1c8a-4551-94d2-379d8a7aa153-ovn-rundir\") pod \"ovn-controller-metrics-24mw4\" (UID: \"8791a7b2-1c8a-4551-94d2-379d8a7aa153\") " pod="openstack/ovn-controller-metrics-24mw4"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.470000 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8791a7b2-1c8a-4551-94d2-379d8a7aa153-combined-ca-bundle\") pod \"ovn-controller-metrics-24mw4\" (UID: \"8791a7b2-1c8a-4551-94d2-379d8a7aa153\") " pod="openstack/ovn-controller-metrics-24mw4"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.470027 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a6c58bf4-8891-45e6-9be6-a3176eefbc14-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"a6c58bf4-8891-45e6-9be6-a3176eefbc14\") " pod="openstack/ovsdbserver-sb-0"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.470045 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6c58bf4-8891-45e6-9be6-a3176eefbc14-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"a6c58bf4-8891-45e6-9be6-a3176eefbc14\") " pod="openstack/ovsdbserver-sb-0"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.470063 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"a6c58bf4-8891-45e6-9be6-a3176eefbc14\") " pod="openstack/ovsdbserver-sb-0"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.470087 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a6c58bf4-8891-45e6-9be6-a3176eefbc14-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"a6c58bf4-8891-45e6-9be6-a3176eefbc14\") " pod="openstack/ovsdbserver-sb-0"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.470135 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8791a7b2-1c8a-4551-94d2-379d8a7aa153-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-24mw4\" (UID: \"8791a7b2-1c8a-4551-94d2-379d8a7aa153\") " pod="openstack/ovn-controller-metrics-24mw4"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.470153 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8791a7b2-1c8a-4551-94d2-379d8a7aa153-config\") pod \"ovn-controller-metrics-24mw4\" (UID: \"8791a7b2-1c8a-4551-94d2-379d8a7aa153\") " pod="openstack/ovn-controller-metrics-24mw4"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.470169 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/8791a7b2-1c8a-4551-94d2-379d8a7aa153-ovs-rundir\") pod \"ovn-controller-metrics-24mw4\" (UID: \"8791a7b2-1c8a-4551-94d2-379d8a7aa153\") " pod="openstack/ovn-controller-metrics-24mw4"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.470189 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6c58bf4-8891-45e6-9be6-a3176eefbc14-config\") pod \"ovsdbserver-sb-0\" (UID: \"a6c58bf4-8891-45e6-9be6-a3176eefbc14\") " pod="openstack/ovsdbserver-sb-0"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.470875 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6c58bf4-8891-45e6-9be6-a3176eefbc14-config\") pod \"ovsdbserver-sb-0\" (UID: \"a6c58bf4-8891-45e6-9be6-a3176eefbc14\") " pod="openstack/ovsdbserver-sb-0"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.471711 4580 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"a6c58bf4-8891-45e6-9be6-a3176eefbc14\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/ovsdbserver-sb-0"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.471839 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a6c58bf4-8891-45e6-9be6-a3176eefbc14-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"a6c58bf4-8891-45e6-9be6-a3176eefbc14\") " pod="openstack/ovsdbserver-sb-0"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.472932 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a6c58bf4-8891-45e6-9be6-a3176eefbc14-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"a6c58bf4-8891-45e6-9be6-a3176eefbc14\") " pod="openstack/ovsdbserver-sb-0"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.476617 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6c58bf4-8891-45e6-9be6-a3176eefbc14-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"a6c58bf4-8891-45e6-9be6-a3176eefbc14\") " pod="openstack/ovsdbserver-sb-0"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.477546 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6c58bf4-8891-45e6-9be6-a3176eefbc14-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a6c58bf4-8891-45e6-9be6-a3176eefbc14\") " pod="openstack/ovsdbserver-sb-0"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.485700 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6c58bf4-8891-45e6-9be6-a3176eefbc14-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a6c58bf4-8891-45e6-9be6-a3176eefbc14\") " pod="openstack/ovsdbserver-sb-0"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.495401 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2pp9\" (UniqueName: \"kubernetes.io/projected/a6c58bf4-8891-45e6-9be6-a3176eefbc14-kube-api-access-x2pp9\") pod \"ovsdbserver-sb-0\" (UID: \"a6c58bf4-8891-45e6-9be6-a3176eefbc14\") " pod="openstack/ovsdbserver-sb-0"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.500457 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"a6c58bf4-8891-45e6-9be6-a3176eefbc14\") " pod="openstack/ovsdbserver-sb-0"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.561821 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.571455 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvrh7\" (UniqueName: \"kubernetes.io/projected/8791a7b2-1c8a-4551-94d2-379d8a7aa153-kube-api-access-vvrh7\") pod \"ovn-controller-metrics-24mw4\" (UID: \"8791a7b2-1c8a-4551-94d2-379d8a7aa153\") " pod="openstack/ovn-controller-metrics-24mw4"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.571505 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/8791a7b2-1c8a-4551-94d2-379d8a7aa153-ovn-rundir\") pod \"ovn-controller-metrics-24mw4\" (UID: \"8791a7b2-1c8a-4551-94d2-379d8a7aa153\") " pod="openstack/ovn-controller-metrics-24mw4"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.571522 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8791a7b2-1c8a-4551-94d2-379d8a7aa153-combined-ca-bundle\") pod \"ovn-controller-metrics-24mw4\" (UID: \"8791a7b2-1c8a-4551-94d2-379d8a7aa153\") " pod="openstack/ovn-controller-metrics-24mw4"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.571585 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8791a7b2-1c8a-4551-94d2-379d8a7aa153-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-24mw4\" (UID: \"8791a7b2-1c8a-4551-94d2-379d8a7aa153\") " pod="openstack/ovn-controller-metrics-24mw4"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.571602 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8791a7b2-1c8a-4551-94d2-379d8a7aa153-config\") pod \"ovn-controller-metrics-24mw4\" (UID: \"8791a7b2-1c8a-4551-94d2-379d8a7aa153\") " pod="openstack/ovn-controller-metrics-24mw4"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.571620 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/8791a7b2-1c8a-4551-94d2-379d8a7aa153-ovs-rundir\") pod \"ovn-controller-metrics-24mw4\" (UID: \"8791a7b2-1c8a-4551-94d2-379d8a7aa153\") " pod="openstack/ovn-controller-metrics-24mw4"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.571826 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/8791a7b2-1c8a-4551-94d2-379d8a7aa153-ovs-rundir\") pod \"ovn-controller-metrics-24mw4\" (UID: \"8791a7b2-1c8a-4551-94d2-379d8a7aa153\") " pod="openstack/ovn-controller-metrics-24mw4"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.572435 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/8791a7b2-1c8a-4551-94d2-379d8a7aa153-ovn-rundir\") pod \"ovn-controller-metrics-24mw4\" (UID: \"8791a7b2-1c8a-4551-94d2-379d8a7aa153\") " pod="openstack/ovn-controller-metrics-24mw4"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.573207 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8791a7b2-1c8a-4551-94d2-379d8a7aa153-config\") pod \"ovn-controller-metrics-24mw4\" (UID: \"8791a7b2-1c8a-4551-94d2-379d8a7aa153\") " pod="openstack/ovn-controller-metrics-24mw4"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.575311 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8791a7b2-1c8a-4551-94d2-379d8a7aa153-combined-ca-bundle\") pod \"ovn-controller-metrics-24mw4\" (UID: \"8791a7b2-1c8a-4551-94d2-379d8a7aa153\") " pod="openstack/ovn-controller-metrics-24mw4"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.588699 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvrh7\" (UniqueName: \"kubernetes.io/projected/8791a7b2-1c8a-4551-94d2-379d8a7aa153-kube-api-access-vvrh7\") pod \"ovn-controller-metrics-24mw4\" (UID: \"8791a7b2-1c8a-4551-94d2-379d8a7aa153\") " pod="openstack/ovn-controller-metrics-24mw4"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.595280 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8791a7b2-1c8a-4551-94d2-379d8a7aa153-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-24mw4\" (UID: \"8791a7b2-1c8a-4551-94d2-379d8a7aa153\") " pod="openstack/ovn-controller-metrics-24mw4"
Jan 12 13:20:22 crc kubenswrapper[4580]: I0112 13:20:22.645064 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-24mw4"
Jan 12 13:20:23 crc kubenswrapper[4580]: I0112 13:20:23.029173 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-66wld" event={"ID":"b20197ec-909c-4343-a0ed-e99b88ea6f83","Type":"ContainerStarted","Data":"87c57593de94a1cf971932722f58e5c43806125b6fcf77fcc0897c00b8d70165"}
Jan 12 13:20:23 crc kubenswrapper[4580]: I0112 13:20:23.030164 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-tbpzb" event={"ID":"29dabf99-ffd5-4d31-b9e5-b10e192f239d","Type":"ContainerStarted","Data":"20b4c05a87f05e6c9205aafbed444b1c56ff1da95cf6611590246f30dc3004d4"}
Jan 12 13:20:25 crc kubenswrapper[4580]: I0112 13:20:25.201243 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-744ffd65bc-wqfbd"
Jan 12 13:20:25 crc kubenswrapper[4580]: I0112 13:20:25.444188 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-95f5f6995-cpjhh"
Jan 12 13:20:25 crc kubenswrapper[4580]: I0112 13:20:25.499512 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-wqfbd"]
Jan 12 13:20:26 crc kubenswrapper[4580]: I0112 13:20:26.053787 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-744ffd65bc-wqfbd" podUID="37118f18-81ad-44c8-bc5e-8e8f3333193f" containerName="dnsmasq-dns" containerID="cri-o://9d50de8f3aa1eeef62958447a67922d7ce365c9a23a98e70cc219d5c6d08cea6" gracePeriod=10
Jan 12 13:20:26 crc kubenswrapper[4580]: I0112 13:20:26.355993 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-wqfbd"
Jan 12 13:20:26 crc kubenswrapper[4580]: I0112 13:20:26.364746 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Jan 12 13:20:26 crc kubenswrapper[4580]: I0112 13:20:26.395149 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-24mw4"]
Jan 12 13:20:26 crc kubenswrapper[4580]: I0112 13:20:26.426589 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37118f18-81ad-44c8-bc5e-8e8f3333193f-dns-svc\") pod \"37118f18-81ad-44c8-bc5e-8e8f3333193f\" (UID: \"37118f18-81ad-44c8-bc5e-8e8f3333193f\") "
Jan 12 13:20:26 crc kubenswrapper[4580]: I0112 13:20:26.426701 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdw5s\" (UniqueName: \"kubernetes.io/projected/37118f18-81ad-44c8-bc5e-8e8f3333193f-kube-api-access-wdw5s\") pod \"37118f18-81ad-44c8-bc5e-8e8f3333193f\" (UID: \"37118f18-81ad-44c8-bc5e-8e8f3333193f\") "
Jan 12 13:20:26 crc kubenswrapper[4580]: I0112 13:20:26.426769 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37118f18-81ad-44c8-bc5e-8e8f3333193f-config\") pod \"37118f18-81ad-44c8-bc5e-8e8f3333193f\" (UID: \"37118f18-81ad-44c8-bc5e-8e8f3333193f\") "
Jan 12 13:20:26 crc kubenswrapper[4580]: I0112 13:20:26.533870 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37118f18-81ad-44c8-bc5e-8e8f3333193f-kube-api-access-wdw5s" (OuterVolumeSpecName: "kube-api-access-wdw5s") pod "37118f18-81ad-44c8-bc5e-8e8f3333193f" (UID: "37118f18-81ad-44c8-bc5e-8e8f3333193f"). InnerVolumeSpecName "kube-api-access-wdw5s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 13:20:26 crc kubenswrapper[4580]: I0112 13:20:26.540778 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Jan 12 13:20:26 crc kubenswrapper[4580]: W0112 13:20:26.628845 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6c58bf4_8891_45e6_9be6_a3176eefbc14.slice/crio-a7ef68f63a818c526f0db656af52523f080da2ebd0e35d2de0d304c00247f0e0 WatchSource:0}: Error finding container a7ef68f63a818c526f0db656af52523f080da2ebd0e35d2de0d304c00247f0e0: Status 404 returned error can't find the container with id a7ef68f63a818c526f0db656af52523f080da2ebd0e35d2de0d304c00247f0e0
Jan 12 13:20:26 crc kubenswrapper[4580]: I0112 13:20:26.629786 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdw5s\" (UniqueName: \"kubernetes.io/projected/37118f18-81ad-44c8-bc5e-8e8f3333193f-kube-api-access-wdw5s\") on node \"crc\" DevicePath \"\""
Jan 12 13:20:26 crc kubenswrapper[4580]: I0112 13:20:26.653784 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37118f18-81ad-44c8-bc5e-8e8f3333193f-config" (OuterVolumeSpecName: "config") pod "37118f18-81ad-44c8-bc5e-8e8f3333193f" (UID: "37118f18-81ad-44c8-bc5e-8e8f3333193f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:20:26 crc kubenswrapper[4580]: I0112 13:20:26.654220 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37118f18-81ad-44c8-bc5e-8e8f3333193f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "37118f18-81ad-44c8-bc5e-8e8f3333193f" (UID: "37118f18-81ad-44c8-bc5e-8e8f3333193f"). InnerVolumeSpecName "dns-svc".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:20:26 crc kubenswrapper[4580]: I0112 13:20:26.731505 4580 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37118f18-81ad-44c8-bc5e-8e8f3333193f-config\") on node \"crc\" DevicePath \"\"" Jan 12 13:20:26 crc kubenswrapper[4580]: I0112 13:20:26.731530 4580 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37118f18-81ad-44c8-bc5e-8e8f3333193f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 12 13:20:27 crc kubenswrapper[4580]: I0112 13:20:27.063725 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a6c58bf4-8891-45e6-9be6-a3176eefbc14","Type":"ContainerStarted","Data":"a7ef68f63a818c526f0db656af52523f080da2ebd0e35d2de0d304c00247f0e0"} Jan 12 13:20:27 crc kubenswrapper[4580]: I0112 13:20:27.064851 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-24mw4" event={"ID":"8791a7b2-1c8a-4551-94d2-379d8a7aa153","Type":"ContainerStarted","Data":"a5b5d6e1678bfb4264e535bc22bd308748b62b102214c097f33871f162422a02"} Jan 12 13:20:27 crc kubenswrapper[4580]: I0112 13:20:27.066359 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"2ceae97e-0cf6-4019-90ba-931df3f6dbed","Type":"ContainerStarted","Data":"359bfeb7ec5bbdf750804bcea6d1c4d75ce12a1780bf065e68f5415b94903c0c"} Jan 12 13:20:27 crc kubenswrapper[4580]: I0112 13:20:27.067741 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"20148d96-39b6-4278-9d29-91874ad352a0","Type":"ContainerStarted","Data":"a8f1963647ca5448a3a66557a4f17a1971d8dc98b5a61c6d9104b58063c1f65d"} Jan 12 13:20:27 crc kubenswrapper[4580]: I0112 13:20:27.068842 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"3ee1d970-f295-46eb-91eb-70a45cb019c1","Type":"ContainerStarted","Data":"ad622e021763f9e9794c3006e074b40d78c5b65f75aa93709f23683693c29434"} Jan 12 13:20:27 crc kubenswrapper[4580]: I0112 13:20:27.070636 4580 generic.go:334] "Generic (PLEG): container finished" podID="37118f18-81ad-44c8-bc5e-8e8f3333193f" containerID="9d50de8f3aa1eeef62958447a67922d7ce365c9a23a98e70cc219d5c6d08cea6" exitCode=0 Jan 12 13:20:27 crc kubenswrapper[4580]: I0112 13:20:27.070686 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-wqfbd" event={"ID":"37118f18-81ad-44c8-bc5e-8e8f3333193f","Type":"ContainerDied","Data":"9d50de8f3aa1eeef62958447a67922d7ce365c9a23a98e70cc219d5c6d08cea6"} Jan 12 13:20:27 crc kubenswrapper[4580]: I0112 13:20:27.070703 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-wqfbd" event={"ID":"37118f18-81ad-44c8-bc5e-8e8f3333193f","Type":"ContainerDied","Data":"eeb7c891d471eb7c2b104593b119d19857ea38d97ca5a291e1fdaedcacb8ae15"} Jan 12 13:20:27 crc kubenswrapper[4580]: I0112 13:20:27.070720 4580 scope.go:117] "RemoveContainer" containerID="9d50de8f3aa1eeef62958447a67922d7ce365c9a23a98e70cc219d5c6d08cea6" Jan 12 13:20:27 crc kubenswrapper[4580]: I0112 13:20:27.070797 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-wqfbd" Jan 12 13:20:27 crc kubenswrapper[4580]: I0112 13:20:27.098330 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2add073b-c55e-4910-a310-4ad61f763ed9","Type":"ContainerStarted","Data":"371b63c6fb26bd523423da3db307f5844f7a2fdd13ae32bd440809e9702fab99"} Jan 12 13:20:27 crc kubenswrapper[4580]: I0112 13:20:27.098450 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 12 13:20:27 crc kubenswrapper[4580]: I0112 13:20:27.103989 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"29452d40-93df-4c9f-9d79-70fbf3907de1","Type":"ContainerStarted","Data":"e44bae0423651ae51f1a2aa3c84598fd646ccadc38bfecada63ab07f09091052"} Jan 12 13:20:27 crc kubenswrapper[4580]: I0112 13:20:27.106236 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"0c2b68c0-cf75-4b38-b7f5-c58b9f52e818","Type":"ContainerStarted","Data":"f909f509637d3c07ab67e1d8c4e85729d044dced303ec271c59ab0d75e64fce6"} Jan 12 13:20:27 crc kubenswrapper[4580]: I0112 13:20:27.106318 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 12 13:20:27 crc kubenswrapper[4580]: I0112 13:20:27.110960 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"396e4fc0-cb2e-4543-b1ae-d61eec6a365a","Type":"ContainerStarted","Data":"3613aa292312a5015afda3ca50e0d347149d78072282127e98ae241ba31a3410"} Jan 12 13:20:27 crc kubenswrapper[4580]: I0112 13:20:27.112945 4580 generic.go:334] "Generic (PLEG): container finished" podID="b20197ec-909c-4343-a0ed-e99b88ea6f83" containerID="165446eabfcdbda9cf908f41c4ba9a16d0952e741d3302c285e670ea68a52e2e" exitCode=0 Jan 12 13:20:27 crc kubenswrapper[4580]: I0112 13:20:27.112995 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-ovs-66wld" event={"ID":"b20197ec-909c-4343-a0ed-e99b88ea6f83","Type":"ContainerDied","Data":"165446eabfcdbda9cf908f41c4ba9a16d0952e741d3302c285e670ea68a52e2e"} Jan 12 13:20:27 crc kubenswrapper[4580]: I0112 13:20:27.118603 4580 scope.go:117] "RemoveContainer" containerID="71d2ef749fab7f73bf46d7aa5eadcb048d07259fab3745842cf6522914ec67f7" Jan 12 13:20:27 crc kubenswrapper[4580]: I0112 13:20:27.138122 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-wqfbd"] Jan 12 13:20:27 crc kubenswrapper[4580]: I0112 13:20:27.141776 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-wqfbd"] Jan 12 13:20:27 crc kubenswrapper[4580]: I0112 13:20:27.148548 4580 scope.go:117] "RemoveContainer" containerID="9d50de8f3aa1eeef62958447a67922d7ce365c9a23a98e70cc219d5c6d08cea6" Jan 12 13:20:27 crc kubenswrapper[4580]: E0112 13:20:27.149432 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d50de8f3aa1eeef62958447a67922d7ce365c9a23a98e70cc219d5c6d08cea6\": container with ID starting with 9d50de8f3aa1eeef62958447a67922d7ce365c9a23a98e70cc219d5c6d08cea6 not found: ID does not exist" containerID="9d50de8f3aa1eeef62958447a67922d7ce365c9a23a98e70cc219d5c6d08cea6" Jan 12 13:20:27 crc kubenswrapper[4580]: I0112 13:20:27.149465 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d50de8f3aa1eeef62958447a67922d7ce365c9a23a98e70cc219d5c6d08cea6"} err="failed to get container status \"9d50de8f3aa1eeef62958447a67922d7ce365c9a23a98e70cc219d5c6d08cea6\": rpc error: code = NotFound desc = could not find container \"9d50de8f3aa1eeef62958447a67922d7ce365c9a23a98e70cc219d5c6d08cea6\": container with ID starting with 9d50de8f3aa1eeef62958447a67922d7ce365c9a23a98e70cc219d5c6d08cea6 not found: ID does not exist" Jan 12 13:20:27 crc kubenswrapper[4580]: I0112 13:20:27.149486 
4580 scope.go:117] "RemoveContainer" containerID="71d2ef749fab7f73bf46d7aa5eadcb048d07259fab3745842cf6522914ec67f7" Jan 12 13:20:27 crc kubenswrapper[4580]: E0112 13:20:27.149866 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71d2ef749fab7f73bf46d7aa5eadcb048d07259fab3745842cf6522914ec67f7\": container with ID starting with 71d2ef749fab7f73bf46d7aa5eadcb048d07259fab3745842cf6522914ec67f7 not found: ID does not exist" containerID="71d2ef749fab7f73bf46d7aa5eadcb048d07259fab3745842cf6522914ec67f7" Jan 12 13:20:27 crc kubenswrapper[4580]: I0112 13:20:27.149911 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71d2ef749fab7f73bf46d7aa5eadcb048d07259fab3745842cf6522914ec67f7"} err="failed to get container status \"71d2ef749fab7f73bf46d7aa5eadcb048d07259fab3745842cf6522914ec67f7\": rpc error: code = NotFound desc = could not find container \"71d2ef749fab7f73bf46d7aa5eadcb048d07259fab3745842cf6522914ec67f7\": container with ID starting with 71d2ef749fab7f73bf46d7aa5eadcb048d07259fab3745842cf6522914ec67f7 not found: ID does not exist" Jan 12 13:20:27 crc kubenswrapper[4580]: I0112 13:20:27.164709 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=8.795420397000001 podStartE2EDuration="14.164695344s" podCreationTimestamp="2026-01-12 13:20:13 +0000 UTC" firstStartedPulling="2026-01-12 13:20:19.499806439 +0000 UTC m=+818.544025130" lastFinishedPulling="2026-01-12 13:20:24.869081386 +0000 UTC m=+823.913300077" observedRunningTime="2026-01-12 13:20:27.155196206 +0000 UTC m=+826.199414897" watchObservedRunningTime="2026-01-12 13:20:27.164695344 +0000 UTC m=+826.208914034" Jan 12 13:20:27 crc kubenswrapper[4580]: I0112 13:20:27.215589 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=5.76476436 
podStartE2EDuration="12.21557427s" podCreationTimestamp="2026-01-12 13:20:15 +0000 UTC" firstStartedPulling="2026-01-12 13:20:19.501917917 +0000 UTC m=+818.546136606" lastFinishedPulling="2026-01-12 13:20:25.952727825 +0000 UTC m=+824.996946516" observedRunningTime="2026-01-12 13:20:27.210208128 +0000 UTC m=+826.254426818" watchObservedRunningTime="2026-01-12 13:20:27.21557427 +0000 UTC m=+826.259792960" Jan 12 13:20:27 crc kubenswrapper[4580]: I0112 13:20:27.289807 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37118f18-81ad-44c8-bc5e-8e8f3333193f" path="/var/lib/kubelet/pods/37118f18-81ad-44c8-bc5e-8e8f3333193f/volumes" Jan 12 13:20:28 crc kubenswrapper[4580]: I0112 13:20:28.122244 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-66wld" event={"ID":"b20197ec-909c-4343-a0ed-e99b88ea6f83","Type":"ContainerStarted","Data":"a920d7f9e72cf95e01f85bbe758b0b1e81683bc410327f2304156ca84484556e"} Jan 12 13:20:28 crc kubenswrapper[4580]: I0112 13:20:28.122276 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-66wld" event={"ID":"b20197ec-909c-4343-a0ed-e99b88ea6f83","Type":"ContainerStarted","Data":"56f14eb3091c8435d31ead6eb12a2713bd2d2c6b36f8b351b650f370eec8b563"} Jan 12 13:20:28 crc kubenswrapper[4580]: I0112 13:20:28.122312 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-66wld" Jan 12 13:20:28 crc kubenswrapper[4580]: I0112 13:20:28.122345 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-66wld" Jan 12 13:20:28 crc kubenswrapper[4580]: I0112 13:20:28.146274 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-66wld" podStartSLOduration=4.252549704 podStartE2EDuration="8.146259853s" podCreationTimestamp="2026-01-12 13:20:20 +0000 UTC" firstStartedPulling="2026-01-12 13:20:22.105183507 +0000 UTC 
m=+821.149402197" lastFinishedPulling="2026-01-12 13:20:25.998893655 +0000 UTC m=+825.043112346" observedRunningTime="2026-01-12 13:20:28.142701256 +0000 UTC m=+827.186919946" watchObservedRunningTime="2026-01-12 13:20:28.146259853 +0000 UTC m=+827.190478542" Jan 12 13:20:30 crc kubenswrapper[4580]: I0112 13:20:30.139259 4580 generic.go:334] "Generic (PLEG): container finished" podID="29452d40-93df-4c9f-9d79-70fbf3907de1" containerID="e44bae0423651ae51f1a2aa3c84598fd646ccadc38bfecada63ab07f09091052" exitCode=0 Jan 12 13:20:30 crc kubenswrapper[4580]: I0112 13:20:30.139435 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"29452d40-93df-4c9f-9d79-70fbf3907de1","Type":"ContainerDied","Data":"e44bae0423651ae51f1a2aa3c84598fd646ccadc38bfecada63ab07f09091052"} Jan 12 13:20:30 crc kubenswrapper[4580]: I0112 13:20:30.142641 4580 generic.go:334] "Generic (PLEG): container finished" podID="2ceae97e-0cf6-4019-90ba-931df3f6dbed" containerID="359bfeb7ec5bbdf750804bcea6d1c4d75ce12a1780bf065e68f5415b94903c0c" exitCode=0 Jan 12 13:20:30 crc kubenswrapper[4580]: I0112 13:20:30.142684 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"2ceae97e-0cf6-4019-90ba-931df3f6dbed","Type":"ContainerDied","Data":"359bfeb7ec5bbdf750804bcea6d1c4d75ce12a1780bf065e68f5415b94903c0c"} Jan 12 13:20:31 crc kubenswrapper[4580]: I0112 13:20:31.149730 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"2ceae97e-0cf6-4019-90ba-931df3f6dbed","Type":"ContainerStarted","Data":"9284c02d67859f800a3972dcc2083a0f836bce88b821a6ad34c6ac4911418fe7"} Jan 12 13:20:31 crc kubenswrapper[4580]: I0112 13:20:31.151227 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"396e4fc0-cb2e-4543-b1ae-d61eec6a365a","Type":"ContainerStarted","Data":"1e78e34b1c1057ac56d8d30c153d8ea34dc3fbe3deda9607488e102069f451b1"} Jan 12 
13:20:31 crc kubenswrapper[4580]: I0112 13:20:31.151249 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"396e4fc0-cb2e-4543-b1ae-d61eec6a365a","Type":"ContainerStarted","Data":"9cc341731952f0405a45500193c7e5fad7034b55c7c0c2d40ae1ba0b3aa821cb"} Jan 12 13:20:31 crc kubenswrapper[4580]: I0112 13:20:31.152576 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"29452d40-93df-4c9f-9d79-70fbf3907de1","Type":"ContainerStarted","Data":"ef10e44f0f322374d84672947dffafab022f159baf4232667ceaa3b3f8ac5060"} Jan 12 13:20:31 crc kubenswrapper[4580]: I0112 13:20:31.153860 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-tbpzb" event={"ID":"29dabf99-ffd5-4d31-b9e5-b10e192f239d","Type":"ContainerStarted","Data":"19feab685e5379426bd49b42abfa8c35b7e137aca952513248aa70bf4fc9a811"} Jan 12 13:20:31 crc kubenswrapper[4580]: I0112 13:20:31.153955 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-tbpzb" Jan 12 13:20:31 crc kubenswrapper[4580]: I0112 13:20:31.155498 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a6c58bf4-8891-45e6-9be6-a3176eefbc14","Type":"ContainerStarted","Data":"9e8ec002f0d8c4f6a13e6b224f0bf40c5baed91708302d00a1f6409569441d49"} Jan 12 13:20:31 crc kubenswrapper[4580]: I0112 13:20:31.155519 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a6c58bf4-8891-45e6-9be6-a3176eefbc14","Type":"ContainerStarted","Data":"852f9859cd639e50b9432f8566dc616ee6169df34b32a915eb860ac3c95ba09f"} Jan 12 13:20:31 crc kubenswrapper[4580]: I0112 13:20:31.156542 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-24mw4" 
event={"ID":"8791a7b2-1c8a-4551-94d2-379d8a7aa153","Type":"ContainerStarted","Data":"2262a4f7b8c735c7cc53181c97183ed526e539a5ccc4a538d33746d0b75ed803"} Jan 12 13:20:31 crc kubenswrapper[4580]: I0112 13:20:31.166715 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=13.714160288 podStartE2EDuration="20.166706638s" podCreationTimestamp="2026-01-12 13:20:11 +0000 UTC" firstStartedPulling="2026-01-12 13:20:19.499166657 +0000 UTC m=+818.543385348" lastFinishedPulling="2026-01-12 13:20:25.951713008 +0000 UTC m=+824.995931698" observedRunningTime="2026-01-12 13:20:31.1633757 +0000 UTC m=+830.207594390" watchObservedRunningTime="2026-01-12 13:20:31.166706638 +0000 UTC m=+830.210925328" Jan 12 13:20:31 crc kubenswrapper[4580]: I0112 13:20:31.176772 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=6.949401357 podStartE2EDuration="10.176757873s" podCreationTimestamp="2026-01-12 13:20:21 +0000 UTC" firstStartedPulling="2026-01-12 13:20:26.630959937 +0000 UTC m=+825.675178626" lastFinishedPulling="2026-01-12 13:20:29.858316452 +0000 UTC m=+828.902535142" observedRunningTime="2026-01-12 13:20:31.175614195 +0000 UTC m=+830.219832895" watchObservedRunningTime="2026-01-12 13:20:31.176757873 +0000 UTC m=+830.220976563" Jan 12 13:20:31 crc kubenswrapper[4580]: I0112 13:20:31.186250 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-tbpzb" podStartSLOduration=3.450419101 podStartE2EDuration="11.186242113s" podCreationTimestamp="2026-01-12 13:20:20 +0000 UTC" firstStartedPulling="2026-01-12 13:20:22.102788177 +0000 UTC m=+821.147006866" lastFinishedPulling="2026-01-12 13:20:29.838611189 +0000 UTC m=+828.882829878" observedRunningTime="2026-01-12 13:20:31.185324289 +0000 UTC m=+830.229542979" watchObservedRunningTime="2026-01-12 13:20:31.186242113 +0000 UTC m=+830.230460803" Jan 12 
13:20:31 crc kubenswrapper[4580]: I0112 13:20:31.200818 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=12.919983076 podStartE2EDuration="19.200801991s" podCreationTimestamp="2026-01-12 13:20:12 +0000 UTC" firstStartedPulling="2026-01-12 13:20:19.665243486 +0000 UTC m=+818.709462177" lastFinishedPulling="2026-01-12 13:20:25.946062402 +0000 UTC m=+824.990281092" observedRunningTime="2026-01-12 13:20:31.198860302 +0000 UTC m=+830.243078991" watchObservedRunningTime="2026-01-12 13:20:31.200801991 +0000 UTC m=+830.245020680" Jan 12 13:20:31 crc kubenswrapper[4580]: I0112 13:20:31.214076 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=7.757845455 podStartE2EDuration="11.214060471s" podCreationTimestamp="2026-01-12 13:20:20 +0000 UTC" firstStartedPulling="2026-01-12 13:20:26.382398678 +0000 UTC m=+825.426617367" lastFinishedPulling="2026-01-12 13:20:29.838613693 +0000 UTC m=+828.882832383" observedRunningTime="2026-01-12 13:20:31.211990792 +0000 UTC m=+830.256209483" watchObservedRunningTime="2026-01-12 13:20:31.214060471 +0000 UTC m=+830.258279162" Jan 12 13:20:31 crc kubenswrapper[4580]: I0112 13:20:31.223923 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-24mw4" podStartSLOduration=5.771205608 podStartE2EDuration="9.223910628s" podCreationTimestamp="2026-01-12 13:20:22 +0000 UTC" firstStartedPulling="2026-01-12 13:20:26.413772285 +0000 UTC m=+825.457990975" lastFinishedPulling="2026-01-12 13:20:29.866477305 +0000 UTC m=+828.910695995" observedRunningTime="2026-01-12 13:20:31.222421 +0000 UTC m=+830.266639690" watchObservedRunningTime="2026-01-12 13:20:31.223910628 +0000 UTC m=+830.268129318" Jan 12 13:20:31 crc kubenswrapper[4580]: I0112 13:20:31.314804 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/ovsdbserver-nb-0" Jan 12 13:20:31 crc kubenswrapper[4580]: I0112 13:20:31.561862 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 12 13:20:31 crc kubenswrapper[4580]: I0112 13:20:31.603599 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7878659675-rx7rj"] Jan 12 13:20:31 crc kubenswrapper[4580]: E0112 13:20:31.603917 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37118f18-81ad-44c8-bc5e-8e8f3333193f" containerName="init" Jan 12 13:20:31 crc kubenswrapper[4580]: I0112 13:20:31.603935 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="37118f18-81ad-44c8-bc5e-8e8f3333193f" containerName="init" Jan 12 13:20:31 crc kubenswrapper[4580]: E0112 13:20:31.603965 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37118f18-81ad-44c8-bc5e-8e8f3333193f" containerName="dnsmasq-dns" Jan 12 13:20:31 crc kubenswrapper[4580]: I0112 13:20:31.603971 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="37118f18-81ad-44c8-bc5e-8e8f3333193f" containerName="dnsmasq-dns" Jan 12 13:20:31 crc kubenswrapper[4580]: I0112 13:20:31.604163 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="37118f18-81ad-44c8-bc5e-8e8f3333193f" containerName="dnsmasq-dns" Jan 12 13:20:31 crc kubenswrapper[4580]: I0112 13:20:31.611185 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7878659675-rx7rj"] Jan 12 13:20:31 crc kubenswrapper[4580]: I0112 13:20:31.611260 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7878659675-rx7rj" Jan 12 13:20:31 crc kubenswrapper[4580]: I0112 13:20:31.614051 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 12 13:20:31 crc kubenswrapper[4580]: I0112 13:20:31.702516 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b9c33063-8d34-491c-af61-404efd47a67d-dns-svc\") pod \"dnsmasq-dns-7878659675-rx7rj\" (UID: \"b9c33063-8d34-491c-af61-404efd47a67d\") " pod="openstack/dnsmasq-dns-7878659675-rx7rj" Jan 12 13:20:31 crc kubenswrapper[4580]: I0112 13:20:31.702564 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbd78\" (UniqueName: \"kubernetes.io/projected/b9c33063-8d34-491c-af61-404efd47a67d-kube-api-access-kbd78\") pod \"dnsmasq-dns-7878659675-rx7rj\" (UID: \"b9c33063-8d34-491c-af61-404efd47a67d\") " pod="openstack/dnsmasq-dns-7878659675-rx7rj" Jan 12 13:20:31 crc kubenswrapper[4580]: I0112 13:20:31.702758 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b9c33063-8d34-491c-af61-404efd47a67d-ovsdbserver-nb\") pod \"dnsmasq-dns-7878659675-rx7rj\" (UID: \"b9c33063-8d34-491c-af61-404efd47a67d\") " pod="openstack/dnsmasq-dns-7878659675-rx7rj" Jan 12 13:20:31 crc kubenswrapper[4580]: I0112 13:20:31.702814 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9c33063-8d34-491c-af61-404efd47a67d-config\") pod \"dnsmasq-dns-7878659675-rx7rj\" (UID: \"b9c33063-8d34-491c-af61-404efd47a67d\") " pod="openstack/dnsmasq-dns-7878659675-rx7rj" Jan 12 13:20:31 crc kubenswrapper[4580]: I0112 13:20:31.804585 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b9c33063-8d34-491c-af61-404efd47a67d-dns-svc\") pod \"dnsmasq-dns-7878659675-rx7rj\" (UID: \"b9c33063-8d34-491c-af61-404efd47a67d\") " pod="openstack/dnsmasq-dns-7878659675-rx7rj" Jan 12 13:20:31 crc kubenswrapper[4580]: I0112 13:20:31.804642 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbd78\" (UniqueName: \"kubernetes.io/projected/b9c33063-8d34-491c-af61-404efd47a67d-kube-api-access-kbd78\") pod \"dnsmasq-dns-7878659675-rx7rj\" (UID: \"b9c33063-8d34-491c-af61-404efd47a67d\") " pod="openstack/dnsmasq-dns-7878659675-rx7rj" Jan 12 13:20:31 crc kubenswrapper[4580]: I0112 13:20:31.804703 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b9c33063-8d34-491c-af61-404efd47a67d-ovsdbserver-nb\") pod \"dnsmasq-dns-7878659675-rx7rj\" (UID: \"b9c33063-8d34-491c-af61-404efd47a67d\") " pod="openstack/dnsmasq-dns-7878659675-rx7rj" Jan 12 13:20:31 crc kubenswrapper[4580]: I0112 13:20:31.804753 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9c33063-8d34-491c-af61-404efd47a67d-config\") pod \"dnsmasq-dns-7878659675-rx7rj\" (UID: \"b9c33063-8d34-491c-af61-404efd47a67d\") " pod="openstack/dnsmasq-dns-7878659675-rx7rj" Jan 12 13:20:31 crc kubenswrapper[4580]: I0112 13:20:31.805559 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b9c33063-8d34-491c-af61-404efd47a67d-dns-svc\") pod \"dnsmasq-dns-7878659675-rx7rj\" (UID: \"b9c33063-8d34-491c-af61-404efd47a67d\") " pod="openstack/dnsmasq-dns-7878659675-rx7rj" Jan 12 13:20:31 crc kubenswrapper[4580]: I0112 13:20:31.805568 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9c33063-8d34-491c-af61-404efd47a67d-config\") pod 
\"dnsmasq-dns-7878659675-rx7rj\" (UID: \"b9c33063-8d34-491c-af61-404efd47a67d\") " pod="openstack/dnsmasq-dns-7878659675-rx7rj" Jan 12 13:20:31 crc kubenswrapper[4580]: I0112 13:20:31.805645 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b9c33063-8d34-491c-af61-404efd47a67d-ovsdbserver-nb\") pod \"dnsmasq-dns-7878659675-rx7rj\" (UID: \"b9c33063-8d34-491c-af61-404efd47a67d\") " pod="openstack/dnsmasq-dns-7878659675-rx7rj" Jan 12 13:20:31 crc kubenswrapper[4580]: I0112 13:20:31.820292 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbd78\" (UniqueName: \"kubernetes.io/projected/b9c33063-8d34-491c-af61-404efd47a67d-kube-api-access-kbd78\") pod \"dnsmasq-dns-7878659675-rx7rj\" (UID: \"b9c33063-8d34-491c-af61-404efd47a67d\") " pod="openstack/dnsmasq-dns-7878659675-rx7rj" Jan 12 13:20:31 crc kubenswrapper[4580]: I0112 13:20:31.830346 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7878659675-rx7rj"] Jan 12 13:20:31 crc kubenswrapper[4580]: I0112 13:20:31.830853 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7878659675-rx7rj" Jan 12 13:20:31 crc kubenswrapper[4580]: I0112 13:20:31.848852 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-586b989cdc-nqxcw"] Jan 12 13:20:31 crc kubenswrapper[4580]: I0112 13:20:31.856521 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-586b989cdc-nqxcw" Jan 12 13:20:31 crc kubenswrapper[4580]: I0112 13:20:31.858938 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 12 13:20:31 crc kubenswrapper[4580]: I0112 13:20:31.859957 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-586b989cdc-nqxcw"] Jan 12 13:20:32 crc kubenswrapper[4580]: I0112 13:20:32.007295 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rh49\" (UniqueName: \"kubernetes.io/projected/9386d106-dcfa-4440-98f4-512c13158ae9-kube-api-access-8rh49\") pod \"dnsmasq-dns-586b989cdc-nqxcw\" (UID: \"9386d106-dcfa-4440-98f4-512c13158ae9\") " pod="openstack/dnsmasq-dns-586b989cdc-nqxcw" Jan 12 13:20:32 crc kubenswrapper[4580]: I0112 13:20:32.007550 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9386d106-dcfa-4440-98f4-512c13158ae9-ovsdbserver-sb\") pod \"dnsmasq-dns-586b989cdc-nqxcw\" (UID: \"9386d106-dcfa-4440-98f4-512c13158ae9\") " pod="openstack/dnsmasq-dns-586b989cdc-nqxcw" Jan 12 13:20:32 crc kubenswrapper[4580]: I0112 13:20:32.007578 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9386d106-dcfa-4440-98f4-512c13158ae9-config\") pod \"dnsmasq-dns-586b989cdc-nqxcw\" (UID: \"9386d106-dcfa-4440-98f4-512c13158ae9\") " pod="openstack/dnsmasq-dns-586b989cdc-nqxcw" Jan 12 13:20:32 crc kubenswrapper[4580]: I0112 13:20:32.007597 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9386d106-dcfa-4440-98f4-512c13158ae9-ovsdbserver-nb\") pod \"dnsmasq-dns-586b989cdc-nqxcw\" (UID: \"9386d106-dcfa-4440-98f4-512c13158ae9\") " 
pod="openstack/dnsmasq-dns-586b989cdc-nqxcw" Jan 12 13:20:32 crc kubenswrapper[4580]: I0112 13:20:32.007630 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9386d106-dcfa-4440-98f4-512c13158ae9-dns-svc\") pod \"dnsmasq-dns-586b989cdc-nqxcw\" (UID: \"9386d106-dcfa-4440-98f4-512c13158ae9\") " pod="openstack/dnsmasq-dns-586b989cdc-nqxcw" Jan 12 13:20:32 crc kubenswrapper[4580]: I0112 13:20:32.108356 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9386d106-dcfa-4440-98f4-512c13158ae9-ovsdbserver-sb\") pod \"dnsmasq-dns-586b989cdc-nqxcw\" (UID: \"9386d106-dcfa-4440-98f4-512c13158ae9\") " pod="openstack/dnsmasq-dns-586b989cdc-nqxcw" Jan 12 13:20:32 crc kubenswrapper[4580]: I0112 13:20:32.108402 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9386d106-dcfa-4440-98f4-512c13158ae9-config\") pod \"dnsmasq-dns-586b989cdc-nqxcw\" (UID: \"9386d106-dcfa-4440-98f4-512c13158ae9\") " pod="openstack/dnsmasq-dns-586b989cdc-nqxcw" Jan 12 13:20:32 crc kubenswrapper[4580]: I0112 13:20:32.108420 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9386d106-dcfa-4440-98f4-512c13158ae9-ovsdbserver-nb\") pod \"dnsmasq-dns-586b989cdc-nqxcw\" (UID: \"9386d106-dcfa-4440-98f4-512c13158ae9\") " pod="openstack/dnsmasq-dns-586b989cdc-nqxcw" Jan 12 13:20:32 crc kubenswrapper[4580]: I0112 13:20:32.108454 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9386d106-dcfa-4440-98f4-512c13158ae9-dns-svc\") pod \"dnsmasq-dns-586b989cdc-nqxcw\" (UID: \"9386d106-dcfa-4440-98f4-512c13158ae9\") " pod="openstack/dnsmasq-dns-586b989cdc-nqxcw" Jan 12 13:20:32 crc 
kubenswrapper[4580]: I0112 13:20:32.108509 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rh49\" (UniqueName: \"kubernetes.io/projected/9386d106-dcfa-4440-98f4-512c13158ae9-kube-api-access-8rh49\") pod \"dnsmasq-dns-586b989cdc-nqxcw\" (UID: \"9386d106-dcfa-4440-98f4-512c13158ae9\") " pod="openstack/dnsmasq-dns-586b989cdc-nqxcw" Jan 12 13:20:32 crc kubenswrapper[4580]: I0112 13:20:32.109337 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9386d106-dcfa-4440-98f4-512c13158ae9-dns-svc\") pod \"dnsmasq-dns-586b989cdc-nqxcw\" (UID: \"9386d106-dcfa-4440-98f4-512c13158ae9\") " pod="openstack/dnsmasq-dns-586b989cdc-nqxcw" Jan 12 13:20:32 crc kubenswrapper[4580]: I0112 13:20:32.109337 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9386d106-dcfa-4440-98f4-512c13158ae9-ovsdbserver-sb\") pod \"dnsmasq-dns-586b989cdc-nqxcw\" (UID: \"9386d106-dcfa-4440-98f4-512c13158ae9\") " pod="openstack/dnsmasq-dns-586b989cdc-nqxcw" Jan 12 13:20:32 crc kubenswrapper[4580]: I0112 13:20:32.109376 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9386d106-dcfa-4440-98f4-512c13158ae9-ovsdbserver-nb\") pod \"dnsmasq-dns-586b989cdc-nqxcw\" (UID: \"9386d106-dcfa-4440-98f4-512c13158ae9\") " pod="openstack/dnsmasq-dns-586b989cdc-nqxcw" Jan 12 13:20:32 crc kubenswrapper[4580]: I0112 13:20:32.109766 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9386d106-dcfa-4440-98f4-512c13158ae9-config\") pod \"dnsmasq-dns-586b989cdc-nqxcw\" (UID: \"9386d106-dcfa-4440-98f4-512c13158ae9\") " pod="openstack/dnsmasq-dns-586b989cdc-nqxcw" Jan 12 13:20:32 crc kubenswrapper[4580]: I0112 13:20:32.123033 4580 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-8rh49\" (UniqueName: \"kubernetes.io/projected/9386d106-dcfa-4440-98f4-512c13158ae9-kube-api-access-8rh49\") pod \"dnsmasq-dns-586b989cdc-nqxcw\" (UID: \"9386d106-dcfa-4440-98f4-512c13158ae9\") " pod="openstack/dnsmasq-dns-586b989cdc-nqxcw" Jan 12 13:20:32 crc kubenswrapper[4580]: I0112 13:20:32.201171 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586b989cdc-nqxcw" Jan 12 13:20:32 crc kubenswrapper[4580]: I0112 13:20:32.249716 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7878659675-rx7rj"] Jan 12 13:20:32 crc kubenswrapper[4580]: I0112 13:20:32.315410 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 12 13:20:32 crc kubenswrapper[4580]: I0112 13:20:32.562394 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 12 13:20:32 crc kubenswrapper[4580]: I0112 13:20:32.610226 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-586b989cdc-nqxcw"] Jan 12 13:20:32 crc kubenswrapper[4580]: W0112 13:20:32.614352 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9386d106_dcfa_4440_98f4_512c13158ae9.slice/crio-8f1bbe2b8fca184a81b80ed09d225a573b1baefa694a18d63da343dfeec124b2 WatchSource:0}: Error finding container 8f1bbe2b8fca184a81b80ed09d225a573b1baefa694a18d63da343dfeec124b2: Status 404 returned error can't find the container with id 8f1bbe2b8fca184a81b80ed09d225a573b1baefa694a18d63da343dfeec124b2 Jan 12 13:20:32 crc kubenswrapper[4580]: I0112 13:20:32.638643 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 12 13:20:32 crc kubenswrapper[4580]: I0112 13:20:32.638697 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/openstack-galera-0" Jan 12 13:20:33 crc kubenswrapper[4580]: I0112 13:20:33.169036 4580 generic.go:334] "Generic (PLEG): container finished" podID="b9c33063-8d34-491c-af61-404efd47a67d" containerID="81fd3f87ec1f0ad56ad4ef17994b2b5b53ae7132b2cec078aa9843253e949982" exitCode=0 Jan 12 13:20:33 crc kubenswrapper[4580]: I0112 13:20:33.169084 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7878659675-rx7rj" event={"ID":"b9c33063-8d34-491c-af61-404efd47a67d","Type":"ContainerDied","Data":"81fd3f87ec1f0ad56ad4ef17994b2b5b53ae7132b2cec078aa9843253e949982"} Jan 12 13:20:33 crc kubenswrapper[4580]: I0112 13:20:33.169318 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7878659675-rx7rj" event={"ID":"b9c33063-8d34-491c-af61-404efd47a67d","Type":"ContainerStarted","Data":"c903848e856b2b70ba841ac02817616dc03061015beabcc06a8efcdeb4215d6d"} Jan 12 13:20:33 crc kubenswrapper[4580]: I0112 13:20:33.170352 4580 generic.go:334] "Generic (PLEG): container finished" podID="9386d106-dcfa-4440-98f4-512c13158ae9" containerID="1a349e43f030e889887973e795b84d8f2dc86fba7b4a58da495c7fcb0a758685" exitCode=0 Jan 12 13:20:33 crc kubenswrapper[4580]: I0112 13:20:33.171251 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586b989cdc-nqxcw" event={"ID":"9386d106-dcfa-4440-98f4-512c13158ae9","Type":"ContainerDied","Data":"1a349e43f030e889887973e795b84d8f2dc86fba7b4a58da495c7fcb0a758685"} Jan 12 13:20:33 crc kubenswrapper[4580]: I0112 13:20:33.171280 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586b989cdc-nqxcw" event={"ID":"9386d106-dcfa-4440-98f4-512c13158ae9","Type":"ContainerStarted","Data":"8f1bbe2b8fca184a81b80ed09d225a573b1baefa694a18d63da343dfeec124b2"} Jan 12 13:20:33 crc kubenswrapper[4580]: I0112 13:20:33.406733 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7878659675-rx7rj" Jan 12 13:20:33 crc kubenswrapper[4580]: I0112 13:20:33.529476 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbd78\" (UniqueName: \"kubernetes.io/projected/b9c33063-8d34-491c-af61-404efd47a67d-kube-api-access-kbd78\") pod \"b9c33063-8d34-491c-af61-404efd47a67d\" (UID: \"b9c33063-8d34-491c-af61-404efd47a67d\") " Jan 12 13:20:33 crc kubenswrapper[4580]: I0112 13:20:33.529566 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b9c33063-8d34-491c-af61-404efd47a67d-ovsdbserver-nb\") pod \"b9c33063-8d34-491c-af61-404efd47a67d\" (UID: \"b9c33063-8d34-491c-af61-404efd47a67d\") " Jan 12 13:20:33 crc kubenswrapper[4580]: I0112 13:20:33.530097 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9c33063-8d34-491c-af61-404efd47a67d-config\") pod \"b9c33063-8d34-491c-af61-404efd47a67d\" (UID: \"b9c33063-8d34-491c-af61-404efd47a67d\") " Jan 12 13:20:33 crc kubenswrapper[4580]: I0112 13:20:33.530165 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b9c33063-8d34-491c-af61-404efd47a67d-dns-svc\") pod \"b9c33063-8d34-491c-af61-404efd47a67d\" (UID: \"b9c33063-8d34-491c-af61-404efd47a67d\") " Jan 12 13:20:33 crc kubenswrapper[4580]: I0112 13:20:33.533454 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9c33063-8d34-491c-af61-404efd47a67d-kube-api-access-kbd78" (OuterVolumeSpecName: "kube-api-access-kbd78") pod "b9c33063-8d34-491c-af61-404efd47a67d" (UID: "b9c33063-8d34-491c-af61-404efd47a67d"). InnerVolumeSpecName "kube-api-access-kbd78". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:20:33 crc kubenswrapper[4580]: I0112 13:20:33.544072 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9c33063-8d34-491c-af61-404efd47a67d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b9c33063-8d34-491c-af61-404efd47a67d" (UID: "b9c33063-8d34-491c-af61-404efd47a67d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:20:33 crc kubenswrapper[4580]: I0112 13:20:33.544507 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9c33063-8d34-491c-af61-404efd47a67d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b9c33063-8d34-491c-af61-404efd47a67d" (UID: "b9c33063-8d34-491c-af61-404efd47a67d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:20:33 crc kubenswrapper[4580]: I0112 13:20:33.544633 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9c33063-8d34-491c-af61-404efd47a67d-config" (OuterVolumeSpecName: "config") pod "b9c33063-8d34-491c-af61-404efd47a67d" (UID: "b9c33063-8d34-491c-af61-404efd47a67d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:20:33 crc kubenswrapper[4580]: I0112 13:20:33.632456 4580 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b9c33063-8d34-491c-af61-404efd47a67d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 12 13:20:33 crc kubenswrapper[4580]: I0112 13:20:33.632482 4580 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9c33063-8d34-491c-af61-404efd47a67d-config\") on node \"crc\" DevicePath \"\"" Jan 12 13:20:33 crc kubenswrapper[4580]: I0112 13:20:33.632490 4580 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b9c33063-8d34-491c-af61-404efd47a67d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 12 13:20:33 crc kubenswrapper[4580]: I0112 13:20:33.632499 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbd78\" (UniqueName: \"kubernetes.io/projected/b9c33063-8d34-491c-af61-404efd47a67d-kube-api-access-kbd78\") on node \"crc\" DevicePath \"\"" Jan 12 13:20:33 crc kubenswrapper[4580]: I0112 13:20:33.973014 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 12 13:20:33 crc kubenswrapper[4580]: I0112 13:20:33.973063 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 12 13:20:34 crc kubenswrapper[4580]: I0112 13:20:34.177670 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7878659675-rx7rj" event={"ID":"b9c33063-8d34-491c-af61-404efd47a67d","Type":"ContainerDied","Data":"c903848e856b2b70ba841ac02817616dc03061015beabcc06a8efcdeb4215d6d"} Jan 12 13:20:34 crc kubenswrapper[4580]: I0112 13:20:34.177950 4580 scope.go:117] "RemoveContainer" containerID="81fd3f87ec1f0ad56ad4ef17994b2b5b53ae7132b2cec078aa9843253e949982" Jan 12 13:20:34 crc kubenswrapper[4580]: 
I0112 13:20:34.177740 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7878659675-rx7rj" Jan 12 13:20:34 crc kubenswrapper[4580]: I0112 13:20:34.231434 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7878659675-rx7rj"] Jan 12 13:20:34 crc kubenswrapper[4580]: I0112 13:20:34.239595 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7878659675-rx7rj"] Jan 12 13:20:34 crc kubenswrapper[4580]: I0112 13:20:34.293765 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 12 13:20:34 crc kubenswrapper[4580]: I0112 13:20:34.361784 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 12 13:20:34 crc kubenswrapper[4580]: I0112 13:20:34.587608 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 12 13:20:35 crc kubenswrapper[4580]: I0112 13:20:35.209848 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 12 13:20:35 crc kubenswrapper[4580]: I0112 13:20:35.210024 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 12 13:20:35 crc kubenswrapper[4580]: I0112 13:20:35.288519 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9c33063-8d34-491c-af61-404efd47a67d" path="/var/lib/kubelet/pods/b9c33063-8d34-491c-af61-404efd47a67d/volumes" Jan 12 13:20:35 crc kubenswrapper[4580]: I0112 13:20:35.480437 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 12 13:20:35 crc kubenswrapper[4580]: E0112 13:20:35.480725 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9c33063-8d34-491c-af61-404efd47a67d" containerName="init" Jan 12 13:20:35 crc kubenswrapper[4580]: I0112 13:20:35.480743 4580 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="b9c33063-8d34-491c-af61-404efd47a67d" containerName="init" Jan 12 13:20:35 crc kubenswrapper[4580]: I0112 13:20:35.480886 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9c33063-8d34-491c-af61-404efd47a67d" containerName="init" Jan 12 13:20:35 crc kubenswrapper[4580]: I0112 13:20:35.481582 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 12 13:20:35 crc kubenswrapper[4580]: I0112 13:20:35.484228 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 12 13:20:35 crc kubenswrapper[4580]: I0112 13:20:35.484384 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-mp4mw" Jan 12 13:20:35 crc kubenswrapper[4580]: I0112 13:20:35.484498 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 12 13:20:35 crc kubenswrapper[4580]: I0112 13:20:35.484684 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 12 13:20:35 crc kubenswrapper[4580]: I0112 13:20:35.505797 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 12 13:20:35 crc kubenswrapper[4580]: I0112 13:20:35.557298 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11cab738-4c7c-4949-9a8c-50b8c1bca314-config\") pod \"ovn-northd-0\" (UID: \"11cab738-4c7c-4949-9a8c-50b8c1bca314\") " pod="openstack/ovn-northd-0" Jan 12 13:20:35 crc kubenswrapper[4580]: I0112 13:20:35.557402 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/11cab738-4c7c-4949-9a8c-50b8c1bca314-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"11cab738-4c7c-4949-9a8c-50b8c1bca314\") " 
pod="openstack/ovn-northd-0" Jan 12 13:20:35 crc kubenswrapper[4580]: I0112 13:20:35.557433 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/11cab738-4c7c-4949-9a8c-50b8c1bca314-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"11cab738-4c7c-4949-9a8c-50b8c1bca314\") " pod="openstack/ovn-northd-0" Jan 12 13:20:35 crc kubenswrapper[4580]: I0112 13:20:35.557462 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11cab738-4c7c-4949-9a8c-50b8c1bca314-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"11cab738-4c7c-4949-9a8c-50b8c1bca314\") " pod="openstack/ovn-northd-0" Jan 12 13:20:35 crc kubenswrapper[4580]: I0112 13:20:35.557653 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/11cab738-4c7c-4949-9a8c-50b8c1bca314-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"11cab738-4c7c-4949-9a8c-50b8c1bca314\") " pod="openstack/ovn-northd-0" Jan 12 13:20:35 crc kubenswrapper[4580]: I0112 13:20:35.557803 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/11cab738-4c7c-4949-9a8c-50b8c1bca314-scripts\") pod \"ovn-northd-0\" (UID: \"11cab738-4c7c-4949-9a8c-50b8c1bca314\") " pod="openstack/ovn-northd-0" Jan 12 13:20:35 crc kubenswrapper[4580]: I0112 13:20:35.557851 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kf2xc\" (UniqueName: \"kubernetes.io/projected/11cab738-4c7c-4949-9a8c-50b8c1bca314-kube-api-access-kf2xc\") pod \"ovn-northd-0\" (UID: \"11cab738-4c7c-4949-9a8c-50b8c1bca314\") " pod="openstack/ovn-northd-0" Jan 12 13:20:35 crc kubenswrapper[4580]: I0112 13:20:35.659190 4580 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kf2xc\" (UniqueName: \"kubernetes.io/projected/11cab738-4c7c-4949-9a8c-50b8c1bca314-kube-api-access-kf2xc\") pod \"ovn-northd-0\" (UID: \"11cab738-4c7c-4949-9a8c-50b8c1bca314\") " pod="openstack/ovn-northd-0" Jan 12 13:20:35 crc kubenswrapper[4580]: I0112 13:20:35.659430 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11cab738-4c7c-4949-9a8c-50b8c1bca314-config\") pod \"ovn-northd-0\" (UID: \"11cab738-4c7c-4949-9a8c-50b8c1bca314\") " pod="openstack/ovn-northd-0" Jan 12 13:20:35 crc kubenswrapper[4580]: I0112 13:20:35.659484 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/11cab738-4c7c-4949-9a8c-50b8c1bca314-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"11cab738-4c7c-4949-9a8c-50b8c1bca314\") " pod="openstack/ovn-northd-0" Jan 12 13:20:35 crc kubenswrapper[4580]: I0112 13:20:35.659524 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/11cab738-4c7c-4949-9a8c-50b8c1bca314-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"11cab738-4c7c-4949-9a8c-50b8c1bca314\") " pod="openstack/ovn-northd-0" Jan 12 13:20:35 crc kubenswrapper[4580]: I0112 13:20:35.659556 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11cab738-4c7c-4949-9a8c-50b8c1bca314-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"11cab738-4c7c-4949-9a8c-50b8c1bca314\") " pod="openstack/ovn-northd-0" Jan 12 13:20:35 crc kubenswrapper[4580]: I0112 13:20:35.659940 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/11cab738-4c7c-4949-9a8c-50b8c1bca314-ovn-rundir\") pod \"ovn-northd-0\" (UID: 
\"11cab738-4c7c-4949-9a8c-50b8c1bca314\") " pod="openstack/ovn-northd-0" Jan 12 13:20:35 crc kubenswrapper[4580]: I0112 13:20:35.660295 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11cab738-4c7c-4949-9a8c-50b8c1bca314-config\") pod \"ovn-northd-0\" (UID: \"11cab738-4c7c-4949-9a8c-50b8c1bca314\") " pod="openstack/ovn-northd-0" Jan 12 13:20:35 crc kubenswrapper[4580]: I0112 13:20:35.660432 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/11cab738-4c7c-4949-9a8c-50b8c1bca314-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"11cab738-4c7c-4949-9a8c-50b8c1bca314\") " pod="openstack/ovn-northd-0" Jan 12 13:20:35 crc kubenswrapper[4580]: I0112 13:20:35.660534 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/11cab738-4c7c-4949-9a8c-50b8c1bca314-scripts\") pod \"ovn-northd-0\" (UID: \"11cab738-4c7c-4949-9a8c-50b8c1bca314\") " pod="openstack/ovn-northd-0" Jan 12 13:20:35 crc kubenswrapper[4580]: I0112 13:20:35.661087 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/11cab738-4c7c-4949-9a8c-50b8c1bca314-scripts\") pod \"ovn-northd-0\" (UID: \"11cab738-4c7c-4949-9a8c-50b8c1bca314\") " pod="openstack/ovn-northd-0" Jan 12 13:20:35 crc kubenswrapper[4580]: I0112 13:20:35.664572 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/11cab738-4c7c-4949-9a8c-50b8c1bca314-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"11cab738-4c7c-4949-9a8c-50b8c1bca314\") " pod="openstack/ovn-northd-0" Jan 12 13:20:35 crc kubenswrapper[4580]: I0112 13:20:35.664713 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/11cab738-4c7c-4949-9a8c-50b8c1bca314-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"11cab738-4c7c-4949-9a8c-50b8c1bca314\") " pod="openstack/ovn-northd-0" Jan 12 13:20:35 crc kubenswrapper[4580]: I0112 13:20:35.667275 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11cab738-4c7c-4949-9a8c-50b8c1bca314-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"11cab738-4c7c-4949-9a8c-50b8c1bca314\") " pod="openstack/ovn-northd-0" Jan 12 13:20:35 crc kubenswrapper[4580]: I0112 13:20:35.671798 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kf2xc\" (UniqueName: \"kubernetes.io/projected/11cab738-4c7c-4949-9a8c-50b8c1bca314-kube-api-access-kf2xc\") pod \"ovn-northd-0\" (UID: \"11cab738-4c7c-4949-9a8c-50b8c1bca314\") " pod="openstack/ovn-northd-0" Jan 12 13:20:35 crc kubenswrapper[4580]: I0112 13:20:35.793651 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 12 13:20:35 crc kubenswrapper[4580]: I0112 13:20:35.888262 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-586b989cdc-nqxcw"] Jan 12 13:20:35 crc kubenswrapper[4580]: I0112 13:20:35.918595 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-ngwp9"] Jan 12 13:20:35 crc kubenswrapper[4580]: I0112 13:20:35.919784 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67fdf7998c-ngwp9" Jan 12 13:20:35 crc kubenswrapper[4580]: I0112 13:20:35.936920 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-ngwp9"] Jan 12 13:20:35 crc kubenswrapper[4580]: I0112 13:20:35.963873 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c870daa1-9b22-48e8-bb1b-a9f314328301-dns-svc\") pod \"dnsmasq-dns-67fdf7998c-ngwp9\" (UID: \"c870daa1-9b22-48e8-bb1b-a9f314328301\") " pod="openstack/dnsmasq-dns-67fdf7998c-ngwp9" Jan 12 13:20:35 crc kubenswrapper[4580]: I0112 13:20:35.963969 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c870daa1-9b22-48e8-bb1b-a9f314328301-ovsdbserver-nb\") pod \"dnsmasq-dns-67fdf7998c-ngwp9\" (UID: \"c870daa1-9b22-48e8-bb1b-a9f314328301\") " pod="openstack/dnsmasq-dns-67fdf7998c-ngwp9" Jan 12 13:20:35 crc kubenswrapper[4580]: I0112 13:20:35.963995 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c870daa1-9b22-48e8-bb1b-a9f314328301-config\") pod \"dnsmasq-dns-67fdf7998c-ngwp9\" (UID: \"c870daa1-9b22-48e8-bb1b-a9f314328301\") " pod="openstack/dnsmasq-dns-67fdf7998c-ngwp9" Jan 12 13:20:35 crc kubenswrapper[4580]: I0112 13:20:35.964036 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c870daa1-9b22-48e8-bb1b-a9f314328301-ovsdbserver-sb\") pod \"dnsmasq-dns-67fdf7998c-ngwp9\" (UID: \"c870daa1-9b22-48e8-bb1b-a9f314328301\") " pod="openstack/dnsmasq-dns-67fdf7998c-ngwp9" Jan 12 13:20:35 crc kubenswrapper[4580]: I0112 13:20:35.964120 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-pn9r5\" (UniqueName: \"kubernetes.io/projected/c870daa1-9b22-48e8-bb1b-a9f314328301-kube-api-access-pn9r5\") pod \"dnsmasq-dns-67fdf7998c-ngwp9\" (UID: \"c870daa1-9b22-48e8-bb1b-a9f314328301\") " pod="openstack/dnsmasq-dns-67fdf7998c-ngwp9" Jan 12 13:20:35 crc kubenswrapper[4580]: I0112 13:20:35.996424 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 12 13:20:36 crc kubenswrapper[4580]: I0112 13:20:36.065555 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c870daa1-9b22-48e8-bb1b-a9f314328301-ovsdbserver-sb\") pod \"dnsmasq-dns-67fdf7998c-ngwp9\" (UID: \"c870daa1-9b22-48e8-bb1b-a9f314328301\") " pod="openstack/dnsmasq-dns-67fdf7998c-ngwp9" Jan 12 13:20:36 crc kubenswrapper[4580]: I0112 13:20:36.065801 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pn9r5\" (UniqueName: \"kubernetes.io/projected/c870daa1-9b22-48e8-bb1b-a9f314328301-kube-api-access-pn9r5\") pod \"dnsmasq-dns-67fdf7998c-ngwp9\" (UID: \"c870daa1-9b22-48e8-bb1b-a9f314328301\") " pod="openstack/dnsmasq-dns-67fdf7998c-ngwp9" Jan 12 13:20:36 crc kubenswrapper[4580]: I0112 13:20:36.065881 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c870daa1-9b22-48e8-bb1b-a9f314328301-dns-svc\") pod \"dnsmasq-dns-67fdf7998c-ngwp9\" (UID: \"c870daa1-9b22-48e8-bb1b-a9f314328301\") " pod="openstack/dnsmasq-dns-67fdf7998c-ngwp9" Jan 12 13:20:36 crc kubenswrapper[4580]: I0112 13:20:36.065941 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c870daa1-9b22-48e8-bb1b-a9f314328301-ovsdbserver-nb\") pod \"dnsmasq-dns-67fdf7998c-ngwp9\" (UID: \"c870daa1-9b22-48e8-bb1b-a9f314328301\") " pod="openstack/dnsmasq-dns-67fdf7998c-ngwp9" Jan 12 
13:20:36 crc kubenswrapper[4580]: I0112 13:20:36.065959 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c870daa1-9b22-48e8-bb1b-a9f314328301-config\") pod \"dnsmasq-dns-67fdf7998c-ngwp9\" (UID: \"c870daa1-9b22-48e8-bb1b-a9f314328301\") " pod="openstack/dnsmasq-dns-67fdf7998c-ngwp9" Jan 12 13:20:36 crc kubenswrapper[4580]: I0112 13:20:36.066584 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c870daa1-9b22-48e8-bb1b-a9f314328301-ovsdbserver-nb\") pod \"dnsmasq-dns-67fdf7998c-ngwp9\" (UID: \"c870daa1-9b22-48e8-bb1b-a9f314328301\") " pod="openstack/dnsmasq-dns-67fdf7998c-ngwp9" Jan 12 13:20:36 crc kubenswrapper[4580]: I0112 13:20:36.066707 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c870daa1-9b22-48e8-bb1b-a9f314328301-dns-svc\") pod \"dnsmasq-dns-67fdf7998c-ngwp9\" (UID: \"c870daa1-9b22-48e8-bb1b-a9f314328301\") " pod="openstack/dnsmasq-dns-67fdf7998c-ngwp9" Jan 12 13:20:36 crc kubenswrapper[4580]: I0112 13:20:36.066721 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c870daa1-9b22-48e8-bb1b-a9f314328301-config\") pod \"dnsmasq-dns-67fdf7998c-ngwp9\" (UID: \"c870daa1-9b22-48e8-bb1b-a9f314328301\") " pod="openstack/dnsmasq-dns-67fdf7998c-ngwp9" Jan 12 13:20:36 crc kubenswrapper[4580]: I0112 13:20:36.066947 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c870daa1-9b22-48e8-bb1b-a9f314328301-ovsdbserver-sb\") pod \"dnsmasq-dns-67fdf7998c-ngwp9\" (UID: \"c870daa1-9b22-48e8-bb1b-a9f314328301\") " pod="openstack/dnsmasq-dns-67fdf7998c-ngwp9" Jan 12 13:20:36 crc kubenswrapper[4580]: I0112 13:20:36.082902 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-pn9r5\" (UniqueName: \"kubernetes.io/projected/c870daa1-9b22-48e8-bb1b-a9f314328301-kube-api-access-pn9r5\") pod \"dnsmasq-dns-67fdf7998c-ngwp9\" (UID: \"c870daa1-9b22-48e8-bb1b-a9f314328301\") " pod="openstack/dnsmasq-dns-67fdf7998c-ngwp9" Jan 12 13:20:36 crc kubenswrapper[4580]: I0112 13:20:36.236305 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67fdf7998c-ngwp9" Jan 12 13:20:36 crc kubenswrapper[4580]: I0112 13:20:36.253740 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 12 13:20:36 crc kubenswrapper[4580]: W0112 13:20:36.256784 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod11cab738_4c7c_4949_9a8c_50b8c1bca314.slice/crio-85fae5bc753fab81f18a743103679b83963885fbc5202c8d191baf628bee712b WatchSource:0}: Error finding container 85fae5bc753fab81f18a743103679b83963885fbc5202c8d191baf628bee712b: Status 404 returned error can't find the container with id 85fae5bc753fab81f18a743103679b83963885fbc5202c8d191baf628bee712b Jan 12 13:20:36 crc kubenswrapper[4580]: I0112 13:20:36.586734 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-ngwp9"] Jan 12 13:20:36 crc kubenswrapper[4580]: W0112 13:20:36.593620 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc870daa1_9b22_48e8_bb1b_a9f314328301.slice/crio-476d5fffb8ac13c6e87d82fcbd179a0cd175384b8e6bf1640ff1c1673f597b2f WatchSource:0}: Error finding container 476d5fffb8ac13c6e87d82fcbd179a0cd175384b8e6bf1640ff1c1673f597b2f: Status 404 returned error can't find the container with id 476d5fffb8ac13c6e87d82fcbd179a0cd175384b8e6bf1640ff1c1673f597b2f Jan 12 13:20:37 crc kubenswrapper[4580]: I0112 13:20:37.063694 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 12 
13:20:37 crc kubenswrapper[4580]: I0112 13:20:37.068498 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 12 13:20:37 crc kubenswrapper[4580]: I0112 13:20:37.070085 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 12 13:20:37 crc kubenswrapper[4580]: I0112 13:20:37.070178 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 12 13:20:37 crc kubenswrapper[4580]: I0112 13:20:37.070222 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-zrgqc" Jan 12 13:20:37 crc kubenswrapper[4580]: I0112 13:20:37.070257 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 12 13:20:37 crc kubenswrapper[4580]: I0112 13:20:37.081509 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 12 13:20:37 crc kubenswrapper[4580]: I0112 13:20:37.181013 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/fb14d02e-b9af-4072-a2bd-2c2763d29755-lock\") pod \"swift-storage-0\" (UID: \"fb14d02e-b9af-4072-a2bd-2c2763d29755\") " pod="openstack/swift-storage-0" Jan 12 13:20:37 crc kubenswrapper[4580]: I0112 13:20:37.181074 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/fb14d02e-b9af-4072-a2bd-2c2763d29755-etc-swift\") pod \"swift-storage-0\" (UID: \"fb14d02e-b9af-4072-a2bd-2c2763d29755\") " pod="openstack/swift-storage-0" Jan 12 13:20:37 crc kubenswrapper[4580]: I0112 13:20:37.181094 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/fb14d02e-b9af-4072-a2bd-2c2763d29755-cache\") pod \"swift-storage-0\" 
(UID: \"fb14d02e-b9af-4072-a2bd-2c2763d29755\") " pod="openstack/swift-storage-0" Jan 12 13:20:37 crc kubenswrapper[4580]: I0112 13:20:37.181132 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"fb14d02e-b9af-4072-a2bd-2c2763d29755\") " pod="openstack/swift-storage-0" Jan 12 13:20:37 crc kubenswrapper[4580]: I0112 13:20:37.181358 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn5w6\" (UniqueName: \"kubernetes.io/projected/fb14d02e-b9af-4072-a2bd-2c2763d29755-kube-api-access-mn5w6\") pod \"swift-storage-0\" (UID: \"fb14d02e-b9af-4072-a2bd-2c2763d29755\") " pod="openstack/swift-storage-0" Jan 12 13:20:37 crc kubenswrapper[4580]: I0112 13:20:37.196817 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fdf7998c-ngwp9" event={"ID":"c870daa1-9b22-48e8-bb1b-a9f314328301","Type":"ContainerStarted","Data":"476d5fffb8ac13c6e87d82fcbd179a0cd175384b8e6bf1640ff1c1673f597b2f"} Jan 12 13:20:37 crc kubenswrapper[4580]: I0112 13:20:37.198002 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"11cab738-4c7c-4949-9a8c-50b8c1bca314","Type":"ContainerStarted","Data":"85fae5bc753fab81f18a743103679b83963885fbc5202c8d191baf628bee712b"} Jan 12 13:20:37 crc kubenswrapper[4580]: I0112 13:20:37.281907 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/fb14d02e-b9af-4072-a2bd-2c2763d29755-lock\") pod \"swift-storage-0\" (UID: \"fb14d02e-b9af-4072-a2bd-2c2763d29755\") " pod="openstack/swift-storage-0" Jan 12 13:20:37 crc kubenswrapper[4580]: I0112 13:20:37.282475 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/fb14d02e-b9af-4072-a2bd-2c2763d29755-etc-swift\") pod \"swift-storage-0\" (UID: \"fb14d02e-b9af-4072-a2bd-2c2763d29755\") " pod="openstack/swift-storage-0" Jan 12 13:20:37 crc kubenswrapper[4580]: I0112 13:20:37.282502 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/fb14d02e-b9af-4072-a2bd-2c2763d29755-cache\") pod \"swift-storage-0\" (UID: \"fb14d02e-b9af-4072-a2bd-2c2763d29755\") " pod="openstack/swift-storage-0" Jan 12 13:20:37 crc kubenswrapper[4580]: I0112 13:20:37.282517 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"fb14d02e-b9af-4072-a2bd-2c2763d29755\") " pod="openstack/swift-storage-0" Jan 12 13:20:37 crc kubenswrapper[4580]: I0112 13:20:37.282571 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mn5w6\" (UniqueName: \"kubernetes.io/projected/fb14d02e-b9af-4072-a2bd-2c2763d29755-kube-api-access-mn5w6\") pod \"swift-storage-0\" (UID: \"fb14d02e-b9af-4072-a2bd-2c2763d29755\") " pod="openstack/swift-storage-0" Jan 12 13:20:37 crc kubenswrapper[4580]: I0112 13:20:37.282432 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/fb14d02e-b9af-4072-a2bd-2c2763d29755-lock\") pod \"swift-storage-0\" (UID: \"fb14d02e-b9af-4072-a2bd-2c2763d29755\") " pod="openstack/swift-storage-0" Jan 12 13:20:37 crc kubenswrapper[4580]: E0112 13:20:37.282912 4580 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 12 13:20:37 crc kubenswrapper[4580]: E0112 13:20:37.282929 4580 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 12 13:20:37 crc 
kubenswrapper[4580]: E0112 13:20:37.282956 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fb14d02e-b9af-4072-a2bd-2c2763d29755-etc-swift podName:fb14d02e-b9af-4072-a2bd-2c2763d29755 nodeName:}" failed. No retries permitted until 2026-01-12 13:20:37.782945316 +0000 UTC m=+836.827164006 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/fb14d02e-b9af-4072-a2bd-2c2763d29755-etc-swift") pod "swift-storage-0" (UID: "fb14d02e-b9af-4072-a2bd-2c2763d29755") : configmap "swift-ring-files" not found Jan 12 13:20:37 crc kubenswrapper[4580]: I0112 13:20:37.283299 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/fb14d02e-b9af-4072-a2bd-2c2763d29755-cache\") pod \"swift-storage-0\" (UID: \"fb14d02e-b9af-4072-a2bd-2c2763d29755\") " pod="openstack/swift-storage-0" Jan 12 13:20:37 crc kubenswrapper[4580]: I0112 13:20:37.283479 4580 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"fb14d02e-b9af-4072-a2bd-2c2763d29755\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/swift-storage-0" Jan 12 13:20:37 crc kubenswrapper[4580]: I0112 13:20:37.309535 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mn5w6\" (UniqueName: \"kubernetes.io/projected/fb14d02e-b9af-4072-a2bd-2c2763d29755-kube-api-access-mn5w6\") pod \"swift-storage-0\" (UID: \"fb14d02e-b9af-4072-a2bd-2c2763d29755\") " pod="openstack/swift-storage-0" Jan 12 13:20:37 crc kubenswrapper[4580]: I0112 13:20:37.310306 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"fb14d02e-b9af-4072-a2bd-2c2763d29755\") " 
pod="openstack/swift-storage-0" Jan 12 13:20:37 crc kubenswrapper[4580]: I0112 13:20:37.632943 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-cplpv"] Jan 12 13:20:37 crc kubenswrapper[4580]: I0112 13:20:37.633803 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-cplpv" Jan 12 13:20:37 crc kubenswrapper[4580]: I0112 13:20:37.635113 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 12 13:20:37 crc kubenswrapper[4580]: I0112 13:20:37.635879 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 12 13:20:37 crc kubenswrapper[4580]: I0112 13:20:37.635991 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 12 13:20:37 crc kubenswrapper[4580]: I0112 13:20:37.638536 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-cplpv"] Jan 12 13:20:37 crc kubenswrapper[4580]: I0112 13:20:37.688044 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/91a2e8de-56e6-41e5-a8fa-a576e8970ebd-etc-swift\") pod \"swift-ring-rebalance-cplpv\" (UID: \"91a2e8de-56e6-41e5-a8fa-a576e8970ebd\") " pod="openstack/swift-ring-rebalance-cplpv" Jan 12 13:20:37 crc kubenswrapper[4580]: I0112 13:20:37.688086 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91a2e8de-56e6-41e5-a8fa-a576e8970ebd-combined-ca-bundle\") pod \"swift-ring-rebalance-cplpv\" (UID: \"91a2e8de-56e6-41e5-a8fa-a576e8970ebd\") " pod="openstack/swift-ring-rebalance-cplpv" Jan 12 13:20:37 crc kubenswrapper[4580]: I0112 13:20:37.688146 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/91a2e8de-56e6-41e5-a8fa-a576e8970ebd-swiftconf\") pod \"swift-ring-rebalance-cplpv\" (UID: \"91a2e8de-56e6-41e5-a8fa-a576e8970ebd\") " pod="openstack/swift-ring-rebalance-cplpv" Jan 12 13:20:37 crc kubenswrapper[4580]: I0112 13:20:37.688175 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/91a2e8de-56e6-41e5-a8fa-a576e8970ebd-ring-data-devices\") pod \"swift-ring-rebalance-cplpv\" (UID: \"91a2e8de-56e6-41e5-a8fa-a576e8970ebd\") " pod="openstack/swift-ring-rebalance-cplpv" Jan 12 13:20:37 crc kubenswrapper[4580]: I0112 13:20:37.688198 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/91a2e8de-56e6-41e5-a8fa-a576e8970ebd-scripts\") pod \"swift-ring-rebalance-cplpv\" (UID: \"91a2e8de-56e6-41e5-a8fa-a576e8970ebd\") " pod="openstack/swift-ring-rebalance-cplpv" Jan 12 13:20:37 crc kubenswrapper[4580]: I0112 13:20:37.688220 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmdgb\" (UniqueName: \"kubernetes.io/projected/91a2e8de-56e6-41e5-a8fa-a576e8970ebd-kube-api-access-zmdgb\") pod \"swift-ring-rebalance-cplpv\" (UID: \"91a2e8de-56e6-41e5-a8fa-a576e8970ebd\") " pod="openstack/swift-ring-rebalance-cplpv" Jan 12 13:20:37 crc kubenswrapper[4580]: I0112 13:20:37.688241 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/91a2e8de-56e6-41e5-a8fa-a576e8970ebd-dispersionconf\") pod \"swift-ring-rebalance-cplpv\" (UID: \"91a2e8de-56e6-41e5-a8fa-a576e8970ebd\") " pod="openstack/swift-ring-rebalance-cplpv" Jan 12 13:20:37 crc kubenswrapper[4580]: I0112 13:20:37.790173 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/91a2e8de-56e6-41e5-a8fa-a576e8970ebd-etc-swift\") pod \"swift-ring-rebalance-cplpv\" (UID: \"91a2e8de-56e6-41e5-a8fa-a576e8970ebd\") " pod="openstack/swift-ring-rebalance-cplpv" Jan 12 13:20:37 crc kubenswrapper[4580]: I0112 13:20:37.790223 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91a2e8de-56e6-41e5-a8fa-a576e8970ebd-combined-ca-bundle\") pod \"swift-ring-rebalance-cplpv\" (UID: \"91a2e8de-56e6-41e5-a8fa-a576e8970ebd\") " pod="openstack/swift-ring-rebalance-cplpv" Jan 12 13:20:37 crc kubenswrapper[4580]: I0112 13:20:37.790270 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/fb14d02e-b9af-4072-a2bd-2c2763d29755-etc-swift\") pod \"swift-storage-0\" (UID: \"fb14d02e-b9af-4072-a2bd-2c2763d29755\") " pod="openstack/swift-storage-0" Jan 12 13:20:37 crc kubenswrapper[4580]: I0112 13:20:37.790305 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/91a2e8de-56e6-41e5-a8fa-a576e8970ebd-swiftconf\") pod \"swift-ring-rebalance-cplpv\" (UID: \"91a2e8de-56e6-41e5-a8fa-a576e8970ebd\") " pod="openstack/swift-ring-rebalance-cplpv" Jan 12 13:20:37 crc kubenswrapper[4580]: I0112 13:20:37.790349 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/91a2e8de-56e6-41e5-a8fa-a576e8970ebd-ring-data-devices\") pod \"swift-ring-rebalance-cplpv\" (UID: \"91a2e8de-56e6-41e5-a8fa-a576e8970ebd\") " pod="openstack/swift-ring-rebalance-cplpv" Jan 12 13:20:37 crc kubenswrapper[4580]: I0112 13:20:37.790377 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/91a2e8de-56e6-41e5-a8fa-a576e8970ebd-scripts\") pod 
\"swift-ring-rebalance-cplpv\" (UID: \"91a2e8de-56e6-41e5-a8fa-a576e8970ebd\") " pod="openstack/swift-ring-rebalance-cplpv" Jan 12 13:20:37 crc kubenswrapper[4580]: I0112 13:20:37.790415 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmdgb\" (UniqueName: \"kubernetes.io/projected/91a2e8de-56e6-41e5-a8fa-a576e8970ebd-kube-api-access-zmdgb\") pod \"swift-ring-rebalance-cplpv\" (UID: \"91a2e8de-56e6-41e5-a8fa-a576e8970ebd\") " pod="openstack/swift-ring-rebalance-cplpv" Jan 12 13:20:37 crc kubenswrapper[4580]: I0112 13:20:37.790442 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/91a2e8de-56e6-41e5-a8fa-a576e8970ebd-dispersionconf\") pod \"swift-ring-rebalance-cplpv\" (UID: \"91a2e8de-56e6-41e5-a8fa-a576e8970ebd\") " pod="openstack/swift-ring-rebalance-cplpv" Jan 12 13:20:37 crc kubenswrapper[4580]: E0112 13:20:37.790458 4580 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 12 13:20:37 crc kubenswrapper[4580]: E0112 13:20:37.790484 4580 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 12 13:20:37 crc kubenswrapper[4580]: E0112 13:20:37.790528 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fb14d02e-b9af-4072-a2bd-2c2763d29755-etc-swift podName:fb14d02e-b9af-4072-a2bd-2c2763d29755 nodeName:}" failed. No retries permitted until 2026-01-12 13:20:38.790515272 +0000 UTC m=+837.834733962 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/fb14d02e-b9af-4072-a2bd-2c2763d29755-etc-swift") pod "swift-storage-0" (UID: "fb14d02e-b9af-4072-a2bd-2c2763d29755") : configmap "swift-ring-files" not found Jan 12 13:20:37 crc kubenswrapper[4580]: I0112 13:20:37.790634 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/91a2e8de-56e6-41e5-a8fa-a576e8970ebd-etc-swift\") pod \"swift-ring-rebalance-cplpv\" (UID: \"91a2e8de-56e6-41e5-a8fa-a576e8970ebd\") " pod="openstack/swift-ring-rebalance-cplpv" Jan 12 13:20:37 crc kubenswrapper[4580]: I0112 13:20:37.791167 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/91a2e8de-56e6-41e5-a8fa-a576e8970ebd-scripts\") pod \"swift-ring-rebalance-cplpv\" (UID: \"91a2e8de-56e6-41e5-a8fa-a576e8970ebd\") " pod="openstack/swift-ring-rebalance-cplpv" Jan 12 13:20:37 crc kubenswrapper[4580]: I0112 13:20:37.791193 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/91a2e8de-56e6-41e5-a8fa-a576e8970ebd-ring-data-devices\") pod \"swift-ring-rebalance-cplpv\" (UID: \"91a2e8de-56e6-41e5-a8fa-a576e8970ebd\") " pod="openstack/swift-ring-rebalance-cplpv" Jan 12 13:20:37 crc kubenswrapper[4580]: I0112 13:20:37.794882 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/91a2e8de-56e6-41e5-a8fa-a576e8970ebd-swiftconf\") pod \"swift-ring-rebalance-cplpv\" (UID: \"91a2e8de-56e6-41e5-a8fa-a576e8970ebd\") " pod="openstack/swift-ring-rebalance-cplpv" Jan 12 13:20:37 crc kubenswrapper[4580]: I0112 13:20:37.817726 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmdgb\" (UniqueName: \"kubernetes.io/projected/91a2e8de-56e6-41e5-a8fa-a576e8970ebd-kube-api-access-zmdgb\") pod 
\"swift-ring-rebalance-cplpv\" (UID: \"91a2e8de-56e6-41e5-a8fa-a576e8970ebd\") " pod="openstack/swift-ring-rebalance-cplpv" Jan 12 13:20:37 crc kubenswrapper[4580]: I0112 13:20:37.820349 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/91a2e8de-56e6-41e5-a8fa-a576e8970ebd-dispersionconf\") pod \"swift-ring-rebalance-cplpv\" (UID: \"91a2e8de-56e6-41e5-a8fa-a576e8970ebd\") " pod="openstack/swift-ring-rebalance-cplpv" Jan 12 13:20:37 crc kubenswrapper[4580]: I0112 13:20:37.835602 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91a2e8de-56e6-41e5-a8fa-a576e8970ebd-combined-ca-bundle\") pod \"swift-ring-rebalance-cplpv\" (UID: \"91a2e8de-56e6-41e5-a8fa-a576e8970ebd\") " pod="openstack/swift-ring-rebalance-cplpv" Jan 12 13:20:37 crc kubenswrapper[4580]: I0112 13:20:37.841818 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 12 13:20:37 crc kubenswrapper[4580]: I0112 13:20:37.909870 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 12 13:20:37 crc kubenswrapper[4580]: I0112 13:20:37.945378 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-cplpv" Jan 12 13:20:38 crc kubenswrapper[4580]: I0112 13:20:38.216955 4580 generic.go:334] "Generic (PLEG): container finished" podID="c870daa1-9b22-48e8-bb1b-a9f314328301" containerID="ecd68b56e6f94c7312824d59dba7f2bbb7e04ae449ddca4fa70d580369a57cd2" exitCode=0 Jan 12 13:20:38 crc kubenswrapper[4580]: I0112 13:20:38.217076 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fdf7998c-ngwp9" event={"ID":"c870daa1-9b22-48e8-bb1b-a9f314328301","Type":"ContainerDied","Data":"ecd68b56e6f94c7312824d59dba7f2bbb7e04ae449ddca4fa70d580369a57cd2"} Jan 12 13:20:38 crc kubenswrapper[4580]: I0112 13:20:38.219575 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586b989cdc-nqxcw" event={"ID":"9386d106-dcfa-4440-98f4-512c13158ae9","Type":"ContainerStarted","Data":"2c818d214f9c1c684ae55d573e733b62c1c10132c2c3ab87e1745f2abc1ff320"} Jan 12 13:20:38 crc kubenswrapper[4580]: I0112 13:20:38.219824 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-586b989cdc-nqxcw" podUID="9386d106-dcfa-4440-98f4-512c13158ae9" containerName="dnsmasq-dns" containerID="cri-o://2c818d214f9c1c684ae55d573e733b62c1c10132c2c3ab87e1745f2abc1ff320" gracePeriod=10 Jan 12 13:20:38 crc kubenswrapper[4580]: I0112 13:20:38.256437 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-586b989cdc-nqxcw" podStartSLOduration=7.256423922 podStartE2EDuration="7.256423922s" podCreationTimestamp="2026-01-12 13:20:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:20:38.249005614 +0000 UTC m=+837.293224324" watchObservedRunningTime="2026-01-12 13:20:38.256423922 +0000 UTC m=+837.300642612" Jan 12 13:20:38 crc kubenswrapper[4580]: I0112 13:20:38.475736 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/swift-ring-rebalance-cplpv"] Jan 12 13:20:38 crc kubenswrapper[4580]: W0112 13:20:38.482630 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod91a2e8de_56e6_41e5_a8fa_a576e8970ebd.slice/crio-7e5f89dbd6969c37b4a3a1eff2b52c5706307a0efaf3a3cba08bd4fffef493fe WatchSource:0}: Error finding container 7e5f89dbd6969c37b4a3a1eff2b52c5706307a0efaf3a3cba08bd4fffef493fe: Status 404 returned error can't find the container with id 7e5f89dbd6969c37b4a3a1eff2b52c5706307a0efaf3a3cba08bd4fffef493fe Jan 12 13:20:38 crc kubenswrapper[4580]: I0112 13:20:38.543821 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586b989cdc-nqxcw" Jan 12 13:20:38 crc kubenswrapper[4580]: I0112 13:20:38.700690 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9386d106-dcfa-4440-98f4-512c13158ae9-ovsdbserver-sb\") pod \"9386d106-dcfa-4440-98f4-512c13158ae9\" (UID: \"9386d106-dcfa-4440-98f4-512c13158ae9\") " Jan 12 13:20:38 crc kubenswrapper[4580]: I0112 13:20:38.700785 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9386d106-dcfa-4440-98f4-512c13158ae9-ovsdbserver-nb\") pod \"9386d106-dcfa-4440-98f4-512c13158ae9\" (UID: \"9386d106-dcfa-4440-98f4-512c13158ae9\") " Jan 12 13:20:38 crc kubenswrapper[4580]: I0112 13:20:38.700806 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9386d106-dcfa-4440-98f4-512c13158ae9-dns-svc\") pod \"9386d106-dcfa-4440-98f4-512c13158ae9\" (UID: \"9386d106-dcfa-4440-98f4-512c13158ae9\") " Jan 12 13:20:38 crc kubenswrapper[4580]: I0112 13:20:38.700835 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/9386d106-dcfa-4440-98f4-512c13158ae9-config\") pod \"9386d106-dcfa-4440-98f4-512c13158ae9\" (UID: \"9386d106-dcfa-4440-98f4-512c13158ae9\") " Jan 12 13:20:38 crc kubenswrapper[4580]: I0112 13:20:38.701148 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rh49\" (UniqueName: \"kubernetes.io/projected/9386d106-dcfa-4440-98f4-512c13158ae9-kube-api-access-8rh49\") pod \"9386d106-dcfa-4440-98f4-512c13158ae9\" (UID: \"9386d106-dcfa-4440-98f4-512c13158ae9\") " Jan 12 13:20:38 crc kubenswrapper[4580]: I0112 13:20:38.704068 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9386d106-dcfa-4440-98f4-512c13158ae9-kube-api-access-8rh49" (OuterVolumeSpecName: "kube-api-access-8rh49") pod "9386d106-dcfa-4440-98f4-512c13158ae9" (UID: "9386d106-dcfa-4440-98f4-512c13158ae9"). InnerVolumeSpecName "kube-api-access-8rh49". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:20:38 crc kubenswrapper[4580]: I0112 13:20:38.726434 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9386d106-dcfa-4440-98f4-512c13158ae9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9386d106-dcfa-4440-98f4-512c13158ae9" (UID: "9386d106-dcfa-4440-98f4-512c13158ae9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:20:38 crc kubenswrapper[4580]: I0112 13:20:38.726878 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9386d106-dcfa-4440-98f4-512c13158ae9-config" (OuterVolumeSpecName: "config") pod "9386d106-dcfa-4440-98f4-512c13158ae9" (UID: "9386d106-dcfa-4440-98f4-512c13158ae9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:20:38 crc kubenswrapper[4580]: I0112 13:20:38.728563 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9386d106-dcfa-4440-98f4-512c13158ae9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9386d106-dcfa-4440-98f4-512c13158ae9" (UID: "9386d106-dcfa-4440-98f4-512c13158ae9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:20:38 crc kubenswrapper[4580]: I0112 13:20:38.730935 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9386d106-dcfa-4440-98f4-512c13158ae9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9386d106-dcfa-4440-98f4-512c13158ae9" (UID: "9386d106-dcfa-4440-98f4-512c13158ae9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:20:38 crc kubenswrapper[4580]: I0112 13:20:38.802547 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/fb14d02e-b9af-4072-a2bd-2c2763d29755-etc-swift\") pod \"swift-storage-0\" (UID: \"fb14d02e-b9af-4072-a2bd-2c2763d29755\") " pod="openstack/swift-storage-0" Jan 12 13:20:38 crc kubenswrapper[4580]: E0112 13:20:38.802690 4580 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 12 13:20:38 crc kubenswrapper[4580]: E0112 13:20:38.802713 4580 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 12 13:20:38 crc kubenswrapper[4580]: I0112 13:20:38.802728 4580 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9386d106-dcfa-4440-98f4-512c13158ae9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 12 13:20:38 crc kubenswrapper[4580]: I0112 13:20:38.802740 4580 
reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9386d106-dcfa-4440-98f4-512c13158ae9-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 12 13:20:38 crc kubenswrapper[4580]: E0112 13:20:38.802756 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fb14d02e-b9af-4072-a2bd-2c2763d29755-etc-swift podName:fb14d02e-b9af-4072-a2bd-2c2763d29755 nodeName:}" failed. No retries permitted until 2026-01-12 13:20:40.802744205 +0000 UTC m=+839.846962895 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/fb14d02e-b9af-4072-a2bd-2c2763d29755-etc-swift") pod "swift-storage-0" (UID: "fb14d02e-b9af-4072-a2bd-2c2763d29755") : configmap "swift-ring-files" not found Jan 12 13:20:38 crc kubenswrapper[4580]: I0112 13:20:38.802771 4580 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9386d106-dcfa-4440-98f4-512c13158ae9-config\") on node \"crc\" DevicePath \"\"" Jan 12 13:20:38 crc kubenswrapper[4580]: I0112 13:20:38.802782 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rh49\" (UniqueName: \"kubernetes.io/projected/9386d106-dcfa-4440-98f4-512c13158ae9-kube-api-access-8rh49\") on node \"crc\" DevicePath \"\"" Jan 12 13:20:38 crc kubenswrapper[4580]: I0112 13:20:38.802792 4580 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9386d106-dcfa-4440-98f4-512c13158ae9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 12 13:20:39 crc kubenswrapper[4580]: I0112 13:20:39.225724 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"11cab738-4c7c-4949-9a8c-50b8c1bca314","Type":"ContainerStarted","Data":"6842df94ca65a1f62914be8573265d5ca87e7f6439a0720a8be19c383912ed3c"} Jan 12 13:20:39 crc kubenswrapper[4580]: I0112 13:20:39.225920 4580 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 12 13:20:39 crc kubenswrapper[4580]: I0112 13:20:39.225931 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"11cab738-4c7c-4949-9a8c-50b8c1bca314","Type":"ContainerStarted","Data":"1ef162cf120a2d0a1601c5e301a528f1bcb4bd77a8022912acafbc5b0a790e17"} Jan 12 13:20:39 crc kubenswrapper[4580]: I0112 13:20:39.226913 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-cplpv" event={"ID":"91a2e8de-56e6-41e5-a8fa-a576e8970ebd","Type":"ContainerStarted","Data":"7e5f89dbd6969c37b4a3a1eff2b52c5706307a0efaf3a3cba08bd4fffef493fe"} Jan 12 13:20:39 crc kubenswrapper[4580]: I0112 13:20:39.228439 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fdf7998c-ngwp9" event={"ID":"c870daa1-9b22-48e8-bb1b-a9f314328301","Type":"ContainerStarted","Data":"64e5422b6f887a38f9736f4c2c021b8b51cdc3582b0f4a7119fddfb4207f2419"} Jan 12 13:20:39 crc kubenswrapper[4580]: I0112 13:20:39.228567 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-67fdf7998c-ngwp9" Jan 12 13:20:39 crc kubenswrapper[4580]: I0112 13:20:39.229874 4580 generic.go:334] "Generic (PLEG): container finished" podID="9386d106-dcfa-4440-98f4-512c13158ae9" containerID="2c818d214f9c1c684ae55d573e733b62c1c10132c2c3ab87e1745f2abc1ff320" exitCode=0 Jan 12 13:20:39 crc kubenswrapper[4580]: I0112 13:20:39.229915 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-586b989cdc-nqxcw" Jan 12 13:20:39 crc kubenswrapper[4580]: I0112 13:20:39.229916 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586b989cdc-nqxcw" event={"ID":"9386d106-dcfa-4440-98f4-512c13158ae9","Type":"ContainerDied","Data":"2c818d214f9c1c684ae55d573e733b62c1c10132c2c3ab87e1745f2abc1ff320"} Jan 12 13:20:39 crc kubenswrapper[4580]: I0112 13:20:39.230062 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586b989cdc-nqxcw" event={"ID":"9386d106-dcfa-4440-98f4-512c13158ae9","Type":"ContainerDied","Data":"8f1bbe2b8fca184a81b80ed09d225a573b1baefa694a18d63da343dfeec124b2"} Jan 12 13:20:39 crc kubenswrapper[4580]: I0112 13:20:39.230093 4580 scope.go:117] "RemoveContainer" containerID="2c818d214f9c1c684ae55d573e733b62c1c10132c2c3ab87e1745f2abc1ff320" Jan 12 13:20:39 crc kubenswrapper[4580]: I0112 13:20:39.239522 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.420111165 podStartE2EDuration="4.2395111s" podCreationTimestamp="2026-01-12 13:20:35 +0000 UTC" firstStartedPulling="2026-01-12 13:20:36.258858323 +0000 UTC m=+835.303077013" lastFinishedPulling="2026-01-12 13:20:38.078258258 +0000 UTC m=+837.122476948" observedRunningTime="2026-01-12 13:20:39.239118703 +0000 UTC m=+838.283337393" watchObservedRunningTime="2026-01-12 13:20:39.2395111 +0000 UTC m=+838.283729791" Jan 12 13:20:39 crc kubenswrapper[4580]: I0112 13:20:39.245376 4580 scope.go:117] "RemoveContainer" containerID="1a349e43f030e889887973e795b84d8f2dc86fba7b4a58da495c7fcb0a758685" Jan 12 13:20:39 crc kubenswrapper[4580]: I0112 13:20:39.258150 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-67fdf7998c-ngwp9" podStartSLOduration=4.258142146 podStartE2EDuration="4.258142146s" podCreationTimestamp="2026-01-12 13:20:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:20:39.256348135 +0000 UTC m=+838.300566815" watchObservedRunningTime="2026-01-12 13:20:39.258142146 +0000 UTC m=+838.302360837" Jan 12 13:20:39 crc kubenswrapper[4580]: I0112 13:20:39.260096 4580 scope.go:117] "RemoveContainer" containerID="2c818d214f9c1c684ae55d573e733b62c1c10132c2c3ab87e1745f2abc1ff320" Jan 12 13:20:39 crc kubenswrapper[4580]: E0112 13:20:39.262902 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c818d214f9c1c684ae55d573e733b62c1c10132c2c3ab87e1745f2abc1ff320\": container with ID starting with 2c818d214f9c1c684ae55d573e733b62c1c10132c2c3ab87e1745f2abc1ff320 not found: ID does not exist" containerID="2c818d214f9c1c684ae55d573e733b62c1c10132c2c3ab87e1745f2abc1ff320" Jan 12 13:20:39 crc kubenswrapper[4580]: I0112 13:20:39.262937 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c818d214f9c1c684ae55d573e733b62c1c10132c2c3ab87e1745f2abc1ff320"} err="failed to get container status \"2c818d214f9c1c684ae55d573e733b62c1c10132c2c3ab87e1745f2abc1ff320\": rpc error: code = NotFound desc = could not find container \"2c818d214f9c1c684ae55d573e733b62c1c10132c2c3ab87e1745f2abc1ff320\": container with ID starting with 2c818d214f9c1c684ae55d573e733b62c1c10132c2c3ab87e1745f2abc1ff320 not found: ID does not exist" Jan 12 13:20:39 crc kubenswrapper[4580]: I0112 13:20:39.263023 4580 scope.go:117] "RemoveContainer" containerID="1a349e43f030e889887973e795b84d8f2dc86fba7b4a58da495c7fcb0a758685" Jan 12 13:20:39 crc kubenswrapper[4580]: E0112 13:20:39.264851 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a349e43f030e889887973e795b84d8f2dc86fba7b4a58da495c7fcb0a758685\": container with ID starting with 1a349e43f030e889887973e795b84d8f2dc86fba7b4a58da495c7fcb0a758685 
not found: ID does not exist" containerID="1a349e43f030e889887973e795b84d8f2dc86fba7b4a58da495c7fcb0a758685" Jan 12 13:20:39 crc kubenswrapper[4580]: I0112 13:20:39.264951 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a349e43f030e889887973e795b84d8f2dc86fba7b4a58da495c7fcb0a758685"} err="failed to get container status \"1a349e43f030e889887973e795b84d8f2dc86fba7b4a58da495c7fcb0a758685\": rpc error: code = NotFound desc = could not find container \"1a349e43f030e889887973e795b84d8f2dc86fba7b4a58da495c7fcb0a758685\": container with ID starting with 1a349e43f030e889887973e795b84d8f2dc86fba7b4a58da495c7fcb0a758685 not found: ID does not exist" Jan 12 13:20:39 crc kubenswrapper[4580]: I0112 13:20:39.291119 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-586b989cdc-nqxcw"] Jan 12 13:20:39 crc kubenswrapper[4580]: I0112 13:20:39.291148 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-586b989cdc-nqxcw"] Jan 12 13:20:40 crc kubenswrapper[4580]: I0112 13:20:40.048938 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 12 13:20:40 crc kubenswrapper[4580]: I0112 13:20:40.101960 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 12 13:20:40 crc kubenswrapper[4580]: I0112 13:20:40.831267 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/fb14d02e-b9af-4072-a2bd-2c2763d29755-etc-swift\") pod \"swift-storage-0\" (UID: \"fb14d02e-b9af-4072-a2bd-2c2763d29755\") " pod="openstack/swift-storage-0" Jan 12 13:20:40 crc kubenswrapper[4580]: E0112 13:20:40.831608 4580 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 12 13:20:40 crc kubenswrapper[4580]: E0112 13:20:40.831632 4580 
projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 12 13:20:40 crc kubenswrapper[4580]: E0112 13:20:40.831715 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fb14d02e-b9af-4072-a2bd-2c2763d29755-etc-swift podName:fb14d02e-b9af-4072-a2bd-2c2763d29755 nodeName:}" failed. No retries permitted until 2026-01-12 13:20:44.83169878 +0000 UTC m=+843.875917470 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/fb14d02e-b9af-4072-a2bd-2c2763d29755-etc-swift") pod "swift-storage-0" (UID: "fb14d02e-b9af-4072-a2bd-2c2763d29755") : configmap "swift-ring-files" not found Jan 12 13:20:41 crc kubenswrapper[4580]: I0112 13:20:41.292257 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9386d106-dcfa-4440-98f4-512c13158ae9" path="/var/lib/kubelet/pods/9386d106-dcfa-4440-98f4-512c13158ae9/volumes" Jan 12 13:20:41 crc kubenswrapper[4580]: I0112 13:20:41.383974 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-t66g2"] Jan 12 13:20:41 crc kubenswrapper[4580]: E0112 13:20:41.384341 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9386d106-dcfa-4440-98f4-512c13158ae9" containerName="init" Jan 12 13:20:41 crc kubenswrapper[4580]: I0112 13:20:41.384356 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="9386d106-dcfa-4440-98f4-512c13158ae9" containerName="init" Jan 12 13:20:41 crc kubenswrapper[4580]: E0112 13:20:41.384437 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9386d106-dcfa-4440-98f4-512c13158ae9" containerName="dnsmasq-dns" Jan 12 13:20:41 crc kubenswrapper[4580]: I0112 13:20:41.384444 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="9386d106-dcfa-4440-98f4-512c13158ae9" containerName="dnsmasq-dns" Jan 12 13:20:41 crc kubenswrapper[4580]: I0112 
13:20:41.384591 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="9386d106-dcfa-4440-98f4-512c13158ae9" containerName="dnsmasq-dns" Jan 12 13:20:41 crc kubenswrapper[4580]: I0112 13:20:41.385038 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-t66g2" Jan 12 13:20:41 crc kubenswrapper[4580]: I0112 13:20:41.387545 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 12 13:20:41 crc kubenswrapper[4580]: I0112 13:20:41.393780 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-t66g2"] Jan 12 13:20:41 crc kubenswrapper[4580]: I0112 13:20:41.541567 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c27e926-54c3-4757-af00-c2deb463d02c-operator-scripts\") pod \"root-account-create-update-t66g2\" (UID: \"0c27e926-54c3-4757-af00-c2deb463d02c\") " pod="openstack/root-account-create-update-t66g2" Jan 12 13:20:41 crc kubenswrapper[4580]: I0112 13:20:41.541921 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4pv8\" (UniqueName: \"kubernetes.io/projected/0c27e926-54c3-4757-af00-c2deb463d02c-kube-api-access-n4pv8\") pod \"root-account-create-update-t66g2\" (UID: \"0c27e926-54c3-4757-af00-c2deb463d02c\") " pod="openstack/root-account-create-update-t66g2" Jan 12 13:20:41 crc kubenswrapper[4580]: I0112 13:20:41.642902 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4pv8\" (UniqueName: \"kubernetes.io/projected/0c27e926-54c3-4757-af00-c2deb463d02c-kube-api-access-n4pv8\") pod \"root-account-create-update-t66g2\" (UID: \"0c27e926-54c3-4757-af00-c2deb463d02c\") " pod="openstack/root-account-create-update-t66g2" Jan 12 13:20:41 crc kubenswrapper[4580]: I0112 
13:20:41.642955 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c27e926-54c3-4757-af00-c2deb463d02c-operator-scripts\") pod \"root-account-create-update-t66g2\" (UID: \"0c27e926-54c3-4757-af00-c2deb463d02c\") " pod="openstack/root-account-create-update-t66g2" Jan 12 13:20:41 crc kubenswrapper[4580]: I0112 13:20:41.643671 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c27e926-54c3-4757-af00-c2deb463d02c-operator-scripts\") pod \"root-account-create-update-t66g2\" (UID: \"0c27e926-54c3-4757-af00-c2deb463d02c\") " pod="openstack/root-account-create-update-t66g2" Jan 12 13:20:41 crc kubenswrapper[4580]: I0112 13:20:41.663594 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4pv8\" (UniqueName: \"kubernetes.io/projected/0c27e926-54c3-4757-af00-c2deb463d02c-kube-api-access-n4pv8\") pod \"root-account-create-update-t66g2\" (UID: \"0c27e926-54c3-4757-af00-c2deb463d02c\") " pod="openstack/root-account-create-update-t66g2" Jan 12 13:20:41 crc kubenswrapper[4580]: I0112 13:20:41.701027 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-t66g2" Jan 12 13:20:42 crc kubenswrapper[4580]: I0112 13:20:42.058594 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-t66g2"] Jan 12 13:20:42 crc kubenswrapper[4580]: W0112 13:20:42.066558 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c27e926_54c3_4757_af00_c2deb463d02c.slice/crio-5b8d538144651ac6ac09cbc8635caf85d36455625eb592d2e56cd8b3c6694d8c WatchSource:0}: Error finding container 5b8d538144651ac6ac09cbc8635caf85d36455625eb592d2e56cd8b3c6694d8c: Status 404 returned error can't find the container with id 5b8d538144651ac6ac09cbc8635caf85d36455625eb592d2e56cd8b3c6694d8c Jan 12 13:20:42 crc kubenswrapper[4580]: I0112 13:20:42.247745 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-t66g2" event={"ID":"0c27e926-54c3-4757-af00-c2deb463d02c","Type":"ContainerStarted","Data":"4b3acc025cd3bd8394f140c85823c905b06c0aaac0167239877ee597b24fabbb"} Jan 12 13:20:42 crc kubenswrapper[4580]: I0112 13:20:42.247782 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-t66g2" event={"ID":"0c27e926-54c3-4757-af00-c2deb463d02c","Type":"ContainerStarted","Data":"5b8d538144651ac6ac09cbc8635caf85d36455625eb592d2e56cd8b3c6694d8c"} Jan 12 13:20:42 crc kubenswrapper[4580]: I0112 13:20:42.249616 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-cplpv" event={"ID":"91a2e8de-56e6-41e5-a8fa-a576e8970ebd","Type":"ContainerStarted","Data":"9865005bdc53fbc4e61324bef222e352b3bba33af3cc2abf2bdee1d4ef7f83c9"} Jan 12 13:20:42 crc kubenswrapper[4580]: I0112 13:20:42.260057 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-t66g2" podStartSLOduration=1.260028319 podStartE2EDuration="1.260028319s" 
podCreationTimestamp="2026-01-12 13:20:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:20:42.256162796 +0000 UTC m=+841.300381486" watchObservedRunningTime="2026-01-12 13:20:42.260028319 +0000 UTC m=+841.304247009" Jan 12 13:20:42 crc kubenswrapper[4580]: I0112 13:20:42.271313 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-cplpv" podStartSLOduration=2.34107041 podStartE2EDuration="5.271300077s" podCreationTimestamp="2026-01-12 13:20:37 +0000 UTC" firstStartedPulling="2026-01-12 13:20:38.484637218 +0000 UTC m=+837.528855909" lastFinishedPulling="2026-01-12 13:20:41.414866886 +0000 UTC m=+840.459085576" observedRunningTime="2026-01-12 13:20:42.268133808 +0000 UTC m=+841.312352497" watchObservedRunningTime="2026-01-12 13:20:42.271300077 +0000 UTC m=+841.315518767" Jan 12 13:20:43 crc kubenswrapper[4580]: I0112 13:20:43.256888 4580 generic.go:334] "Generic (PLEG): container finished" podID="0c27e926-54c3-4757-af00-c2deb463d02c" containerID="4b3acc025cd3bd8394f140c85823c905b06c0aaac0167239877ee597b24fabbb" exitCode=0 Jan 12 13:20:43 crc kubenswrapper[4580]: I0112 13:20:43.256982 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-t66g2" event={"ID":"0c27e926-54c3-4757-af00-c2deb463d02c","Type":"ContainerDied","Data":"4b3acc025cd3bd8394f140c85823c905b06c0aaac0167239877ee597b24fabbb"} Jan 12 13:20:43 crc kubenswrapper[4580]: I0112 13:20:43.881056 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-79cmh"] Jan 12 13:20:43 crc kubenswrapper[4580]: I0112 13:20:43.881957 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-79cmh" Jan 12 13:20:43 crc kubenswrapper[4580]: I0112 13:20:43.889163 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-79cmh"] Jan 12 13:20:43 crc kubenswrapper[4580]: I0112 13:20:43.970563 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cd54-account-create-update-4xn8h"] Jan 12 13:20:43 crc kubenswrapper[4580]: I0112 13:20:43.971725 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cd54-account-create-update-4xn8h" Jan 12 13:20:43 crc kubenswrapper[4580]: I0112 13:20:43.973129 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 12 13:20:43 crc kubenswrapper[4580]: I0112 13:20:43.975712 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cd54-account-create-update-4xn8h"] Jan 12 13:20:44 crc kubenswrapper[4580]: I0112 13:20:44.075029 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2s4l\" (UniqueName: \"kubernetes.io/projected/06d41309-dddf-49bc-9512-44a8ffa9de38-kube-api-access-x2s4l\") pod \"keystone-db-create-79cmh\" (UID: \"06d41309-dddf-49bc-9512-44a8ffa9de38\") " pod="openstack/keystone-db-create-79cmh" Jan 12 13:20:44 crc kubenswrapper[4580]: I0112 13:20:44.075091 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06d41309-dddf-49bc-9512-44a8ffa9de38-operator-scripts\") pod \"keystone-db-create-79cmh\" (UID: \"06d41309-dddf-49bc-9512-44a8ffa9de38\") " pod="openstack/keystone-db-create-79cmh" Jan 12 13:20:44 crc kubenswrapper[4580]: I0112 13:20:44.075175 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/58ee3c58-c9e2-4be9-83d6-12c6d69801f9-operator-scripts\") pod \"keystone-cd54-account-create-update-4xn8h\" (UID: \"58ee3c58-c9e2-4be9-83d6-12c6d69801f9\") " pod="openstack/keystone-cd54-account-create-update-4xn8h" Jan 12 13:20:44 crc kubenswrapper[4580]: I0112 13:20:44.075194 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qr8rd\" (UniqueName: \"kubernetes.io/projected/58ee3c58-c9e2-4be9-83d6-12c6d69801f9-kube-api-access-qr8rd\") pod \"keystone-cd54-account-create-update-4xn8h\" (UID: \"58ee3c58-c9e2-4be9-83d6-12c6d69801f9\") " pod="openstack/keystone-cd54-account-create-update-4xn8h" Jan 12 13:20:44 crc kubenswrapper[4580]: I0112 13:20:44.176812 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06d41309-dddf-49bc-9512-44a8ffa9de38-operator-scripts\") pod \"keystone-db-create-79cmh\" (UID: \"06d41309-dddf-49bc-9512-44a8ffa9de38\") " pod="openstack/keystone-db-create-79cmh" Jan 12 13:20:44 crc kubenswrapper[4580]: I0112 13:20:44.177068 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58ee3c58-c9e2-4be9-83d6-12c6d69801f9-operator-scripts\") pod \"keystone-cd54-account-create-update-4xn8h\" (UID: \"58ee3c58-c9e2-4be9-83d6-12c6d69801f9\") " pod="openstack/keystone-cd54-account-create-update-4xn8h" Jan 12 13:20:44 crc kubenswrapper[4580]: I0112 13:20:44.177088 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qr8rd\" (UniqueName: \"kubernetes.io/projected/58ee3c58-c9e2-4be9-83d6-12c6d69801f9-kube-api-access-qr8rd\") pod \"keystone-cd54-account-create-update-4xn8h\" (UID: \"58ee3c58-c9e2-4be9-83d6-12c6d69801f9\") " pod="openstack/keystone-cd54-account-create-update-4xn8h" Jan 12 13:20:44 crc kubenswrapper[4580]: I0112 13:20:44.177182 4580 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2s4l\" (UniqueName: \"kubernetes.io/projected/06d41309-dddf-49bc-9512-44a8ffa9de38-kube-api-access-x2s4l\") pod \"keystone-db-create-79cmh\" (UID: \"06d41309-dddf-49bc-9512-44a8ffa9de38\") " pod="openstack/keystone-db-create-79cmh" Jan 12 13:20:44 crc kubenswrapper[4580]: I0112 13:20:44.177540 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06d41309-dddf-49bc-9512-44a8ffa9de38-operator-scripts\") pod \"keystone-db-create-79cmh\" (UID: \"06d41309-dddf-49bc-9512-44a8ffa9de38\") " pod="openstack/keystone-db-create-79cmh" Jan 12 13:20:44 crc kubenswrapper[4580]: I0112 13:20:44.177974 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58ee3c58-c9e2-4be9-83d6-12c6d69801f9-operator-scripts\") pod \"keystone-cd54-account-create-update-4xn8h\" (UID: \"58ee3c58-c9e2-4be9-83d6-12c6d69801f9\") " pod="openstack/keystone-cd54-account-create-update-4xn8h" Jan 12 13:20:44 crc kubenswrapper[4580]: I0112 13:20:44.178388 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-59d8f"] Jan 12 13:20:44 crc kubenswrapper[4580]: I0112 13:20:44.179179 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-59d8f" Jan 12 13:20:44 crc kubenswrapper[4580]: I0112 13:20:44.186022 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-59d8f"] Jan 12 13:20:44 crc kubenswrapper[4580]: I0112 13:20:44.193906 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qr8rd\" (UniqueName: \"kubernetes.io/projected/58ee3c58-c9e2-4be9-83d6-12c6d69801f9-kube-api-access-qr8rd\") pod \"keystone-cd54-account-create-update-4xn8h\" (UID: \"58ee3c58-c9e2-4be9-83d6-12c6d69801f9\") " pod="openstack/keystone-cd54-account-create-update-4xn8h" Jan 12 13:20:44 crc kubenswrapper[4580]: I0112 13:20:44.196515 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2s4l\" (UniqueName: \"kubernetes.io/projected/06d41309-dddf-49bc-9512-44a8ffa9de38-kube-api-access-x2s4l\") pod \"keystone-db-create-79cmh\" (UID: \"06d41309-dddf-49bc-9512-44a8ffa9de38\") " pod="openstack/keystone-db-create-79cmh" Jan 12 13:20:44 crc kubenswrapper[4580]: I0112 13:20:44.201639 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-79cmh" Jan 12 13:20:44 crc kubenswrapper[4580]: I0112 13:20:44.298805 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cd54-account-create-update-4xn8h" Jan 12 13:20:44 crc kubenswrapper[4580]: I0112 13:20:44.305173 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k48mz\" (UniqueName: \"kubernetes.io/projected/2d042099-c4b5-4bcb-a1bb-3d1765c30e40-kube-api-access-k48mz\") pod \"placement-db-create-59d8f\" (UID: \"2d042099-c4b5-4bcb-a1bb-3d1765c30e40\") " pod="openstack/placement-db-create-59d8f" Jan 12 13:20:44 crc kubenswrapper[4580]: I0112 13:20:44.305238 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d042099-c4b5-4bcb-a1bb-3d1765c30e40-operator-scripts\") pod \"placement-db-create-59d8f\" (UID: \"2d042099-c4b5-4bcb-a1bb-3d1765c30e40\") " pod="openstack/placement-db-create-59d8f" Jan 12 13:20:44 crc kubenswrapper[4580]: I0112 13:20:44.305476 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-4284-account-create-update-gs8hr"] Jan 12 13:20:44 crc kubenswrapper[4580]: I0112 13:20:44.307992 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-4284-account-create-update-gs8hr" Jan 12 13:20:44 crc kubenswrapper[4580]: I0112 13:20:44.309991 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 12 13:20:44 crc kubenswrapper[4580]: I0112 13:20:44.326698 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-4284-account-create-update-gs8hr"] Jan 12 13:20:44 crc kubenswrapper[4580]: I0112 13:20:44.399567 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-mcg5w"] Jan 12 13:20:44 crc kubenswrapper[4580]: I0112 13:20:44.400683 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-mcg5w" Jan 12 13:20:44 crc kubenswrapper[4580]: I0112 13:20:44.406627 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k48mz\" (UniqueName: \"kubernetes.io/projected/2d042099-c4b5-4bcb-a1bb-3d1765c30e40-kube-api-access-k48mz\") pod \"placement-db-create-59d8f\" (UID: \"2d042099-c4b5-4bcb-a1bb-3d1765c30e40\") " pod="openstack/placement-db-create-59d8f" Jan 12 13:20:44 crc kubenswrapper[4580]: I0112 13:20:44.406668 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0ecccf2-f4b7-4606-afb9-b50486e02a0b-operator-scripts\") pod \"placement-4284-account-create-update-gs8hr\" (UID: \"b0ecccf2-f4b7-4606-afb9-b50486e02a0b\") " pod="openstack/placement-4284-account-create-update-gs8hr" Jan 12 13:20:44 crc kubenswrapper[4580]: I0112 13:20:44.406710 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d042099-c4b5-4bcb-a1bb-3d1765c30e40-operator-scripts\") pod \"placement-db-create-59d8f\" (UID: \"2d042099-c4b5-4bcb-a1bb-3d1765c30e40\") " pod="openstack/placement-db-create-59d8f" Jan 12 13:20:44 crc kubenswrapper[4580]: I0112 13:20:44.406777 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90c2604d-2aab-4817-b1a3-f1ce2921fd7c-operator-scripts\") pod \"glance-db-create-mcg5w\" (UID: \"90c2604d-2aab-4817-b1a3-f1ce2921fd7c\") " pod="openstack/glance-db-create-mcg5w" Jan 12 13:20:44 crc kubenswrapper[4580]: I0112 13:20:44.406853 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs97w\" (UniqueName: \"kubernetes.io/projected/b0ecccf2-f4b7-4606-afb9-b50486e02a0b-kube-api-access-cs97w\") pod 
\"placement-4284-account-create-update-gs8hr\" (UID: \"b0ecccf2-f4b7-4606-afb9-b50486e02a0b\") " pod="openstack/placement-4284-account-create-update-gs8hr" Jan 12 13:20:44 crc kubenswrapper[4580]: I0112 13:20:44.406869 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfp6t\" (UniqueName: \"kubernetes.io/projected/90c2604d-2aab-4817-b1a3-f1ce2921fd7c-kube-api-access-vfp6t\") pod \"glance-db-create-mcg5w\" (UID: \"90c2604d-2aab-4817-b1a3-f1ce2921fd7c\") " pod="openstack/glance-db-create-mcg5w" Jan 12 13:20:44 crc kubenswrapper[4580]: I0112 13:20:44.406927 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-mcg5w"] Jan 12 13:20:44 crc kubenswrapper[4580]: I0112 13:20:44.408385 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d042099-c4b5-4bcb-a1bb-3d1765c30e40-operator-scripts\") pod \"placement-db-create-59d8f\" (UID: \"2d042099-c4b5-4bcb-a1bb-3d1765c30e40\") " pod="openstack/placement-db-create-59d8f" Jan 12 13:20:44 crc kubenswrapper[4580]: I0112 13:20:44.425753 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k48mz\" (UniqueName: \"kubernetes.io/projected/2d042099-c4b5-4bcb-a1bb-3d1765c30e40-kube-api-access-k48mz\") pod \"placement-db-create-59d8f\" (UID: \"2d042099-c4b5-4bcb-a1bb-3d1765c30e40\") " pod="openstack/placement-db-create-59d8f" Jan 12 13:20:44 crc kubenswrapper[4580]: I0112 13:20:44.489629 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-59d8f" Jan 12 13:20:44 crc kubenswrapper[4580]: I0112 13:20:44.508671 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90c2604d-2aab-4817-b1a3-f1ce2921fd7c-operator-scripts\") pod \"glance-db-create-mcg5w\" (UID: \"90c2604d-2aab-4817-b1a3-f1ce2921fd7c\") " pod="openstack/glance-db-create-mcg5w" Jan 12 13:20:44 crc kubenswrapper[4580]: I0112 13:20:44.509067 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cs97w\" (UniqueName: \"kubernetes.io/projected/b0ecccf2-f4b7-4606-afb9-b50486e02a0b-kube-api-access-cs97w\") pod \"placement-4284-account-create-update-gs8hr\" (UID: \"b0ecccf2-f4b7-4606-afb9-b50486e02a0b\") " pod="openstack/placement-4284-account-create-update-gs8hr" Jan 12 13:20:44 crc kubenswrapper[4580]: I0112 13:20:44.509114 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfp6t\" (UniqueName: \"kubernetes.io/projected/90c2604d-2aab-4817-b1a3-f1ce2921fd7c-kube-api-access-vfp6t\") pod \"glance-db-create-mcg5w\" (UID: \"90c2604d-2aab-4817-b1a3-f1ce2921fd7c\") " pod="openstack/glance-db-create-mcg5w" Jan 12 13:20:44 crc kubenswrapper[4580]: I0112 13:20:44.509157 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0ecccf2-f4b7-4606-afb9-b50486e02a0b-operator-scripts\") pod \"placement-4284-account-create-update-gs8hr\" (UID: \"b0ecccf2-f4b7-4606-afb9-b50486e02a0b\") " pod="openstack/placement-4284-account-create-update-gs8hr" Jan 12 13:20:44 crc kubenswrapper[4580]: I0112 13:20:44.509315 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90c2604d-2aab-4817-b1a3-f1ce2921fd7c-operator-scripts\") pod \"glance-db-create-mcg5w\" (UID: 
\"90c2604d-2aab-4817-b1a3-f1ce2921fd7c\") " pod="openstack/glance-db-create-mcg5w" Jan 12 13:20:44 crc kubenswrapper[4580]: I0112 13:20:44.509759 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0ecccf2-f4b7-4606-afb9-b50486e02a0b-operator-scripts\") pod \"placement-4284-account-create-update-gs8hr\" (UID: \"b0ecccf2-f4b7-4606-afb9-b50486e02a0b\") " pod="openstack/placement-4284-account-create-update-gs8hr" Jan 12 13:20:44 crc kubenswrapper[4580]: I0112 13:20:44.510150 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-4a4b-account-create-update-fmvkt"] Jan 12 13:20:44 crc kubenswrapper[4580]: I0112 13:20:44.511234 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-4a4b-account-create-update-fmvkt" Jan 12 13:20:44 crc kubenswrapper[4580]: I0112 13:20:44.512531 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 12 13:20:44 crc kubenswrapper[4580]: I0112 13:20:44.523275 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-4a4b-account-create-update-fmvkt"] Jan 12 13:20:44 crc kubenswrapper[4580]: I0112 13:20:44.524536 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cs97w\" (UniqueName: \"kubernetes.io/projected/b0ecccf2-f4b7-4606-afb9-b50486e02a0b-kube-api-access-cs97w\") pod \"placement-4284-account-create-update-gs8hr\" (UID: \"b0ecccf2-f4b7-4606-afb9-b50486e02a0b\") " pod="openstack/placement-4284-account-create-update-gs8hr" Jan 12 13:20:44 crc kubenswrapper[4580]: I0112 13:20:44.526525 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfp6t\" (UniqueName: \"kubernetes.io/projected/90c2604d-2aab-4817-b1a3-f1ce2921fd7c-kube-api-access-vfp6t\") pod \"glance-db-create-mcg5w\" (UID: \"90c2604d-2aab-4817-b1a3-f1ce2921fd7c\") " 
pod="openstack/glance-db-create-mcg5w" Jan 12 13:20:44 crc kubenswrapper[4580]: I0112 13:20:44.559475 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-t66g2" Jan 12 13:20:44 crc kubenswrapper[4580]: W0112 13:20:44.585232 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06d41309_dddf_49bc_9512_44a8ffa9de38.slice/crio-bff7d9ef8e3b3a11ab8ad581917b7c447b9a89c24e773b00f44c83a13e421db5 WatchSource:0}: Error finding container bff7d9ef8e3b3a11ab8ad581917b7c447b9a89c24e773b00f44c83a13e421db5: Status 404 returned error can't find the container with id bff7d9ef8e3b3a11ab8ad581917b7c447b9a89c24e773b00f44c83a13e421db5 Jan 12 13:20:44 crc kubenswrapper[4580]: I0112 13:20:44.586847 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-79cmh"] Jan 12 13:20:44 crc kubenswrapper[4580]: I0112 13:20:44.609843 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c27e926-54c3-4757-af00-c2deb463d02c-operator-scripts\") pod \"0c27e926-54c3-4757-af00-c2deb463d02c\" (UID: \"0c27e926-54c3-4757-af00-c2deb463d02c\") " Jan 12 13:20:44 crc kubenswrapper[4580]: I0112 13:20:44.609876 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4pv8\" (UniqueName: \"kubernetes.io/projected/0c27e926-54c3-4757-af00-c2deb463d02c-kube-api-access-n4pv8\") pod \"0c27e926-54c3-4757-af00-c2deb463d02c\" (UID: \"0c27e926-54c3-4757-af00-c2deb463d02c\") " Jan 12 13:20:44 crc kubenswrapper[4580]: I0112 13:20:44.610066 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhp2p\" (UniqueName: \"kubernetes.io/projected/54bb983d-7347-4bae-a852-d70d0ff091f4-kube-api-access-bhp2p\") pod \"glance-4a4b-account-create-update-fmvkt\" 
(UID: \"54bb983d-7347-4bae-a852-d70d0ff091f4\") " pod="openstack/glance-4a4b-account-create-update-fmvkt" Jan 12 13:20:44 crc kubenswrapper[4580]: I0112 13:20:44.610157 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54bb983d-7347-4bae-a852-d70d0ff091f4-operator-scripts\") pod \"glance-4a4b-account-create-update-fmvkt\" (UID: \"54bb983d-7347-4bae-a852-d70d0ff091f4\") " pod="openstack/glance-4a4b-account-create-update-fmvkt" Jan 12 13:20:44 crc kubenswrapper[4580]: I0112 13:20:44.610330 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c27e926-54c3-4757-af00-c2deb463d02c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0c27e926-54c3-4757-af00-c2deb463d02c" (UID: "0c27e926-54c3-4757-af00-c2deb463d02c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:20:44 crc kubenswrapper[4580]: I0112 13:20:44.613152 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c27e926-54c3-4757-af00-c2deb463d02c-kube-api-access-n4pv8" (OuterVolumeSpecName: "kube-api-access-n4pv8") pod "0c27e926-54c3-4757-af00-c2deb463d02c" (UID: "0c27e926-54c3-4757-af00-c2deb463d02c"). InnerVolumeSpecName "kube-api-access-n4pv8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:20:44 crc kubenswrapper[4580]: I0112 13:20:44.670271 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-4284-account-create-update-gs8hr"
Jan 12 13:20:44 crc kubenswrapper[4580]: I0112 13:20:44.711367 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54bb983d-7347-4bae-a852-d70d0ff091f4-operator-scripts\") pod \"glance-4a4b-account-create-update-fmvkt\" (UID: \"54bb983d-7347-4bae-a852-d70d0ff091f4\") " pod="openstack/glance-4a4b-account-create-update-fmvkt"
Jan 12 13:20:44 crc kubenswrapper[4580]: I0112 13:20:44.711497 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhp2p\" (UniqueName: \"kubernetes.io/projected/54bb983d-7347-4bae-a852-d70d0ff091f4-kube-api-access-bhp2p\") pod \"glance-4a4b-account-create-update-fmvkt\" (UID: \"54bb983d-7347-4bae-a852-d70d0ff091f4\") " pod="openstack/glance-4a4b-account-create-update-fmvkt"
Jan 12 13:20:44 crc kubenswrapper[4580]: I0112 13:20:44.711579 4580 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c27e926-54c3-4757-af00-c2deb463d02c-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 12 13:20:44 crc kubenswrapper[4580]: I0112 13:20:44.711590 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4pv8\" (UniqueName: \"kubernetes.io/projected/0c27e926-54c3-4757-af00-c2deb463d02c-kube-api-access-n4pv8\") on node \"crc\" DevicePath \"\""
Jan 12 13:20:44 crc kubenswrapper[4580]: I0112 13:20:44.712044 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54bb983d-7347-4bae-a852-d70d0ff091f4-operator-scripts\") pod \"glance-4a4b-account-create-update-fmvkt\" (UID: \"54bb983d-7347-4bae-a852-d70d0ff091f4\") " pod="openstack/glance-4a4b-account-create-update-fmvkt"
Jan 12 13:20:44 crc kubenswrapper[4580]: I0112 13:20:44.715995 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-mcg5w"
Jan 12 13:20:44 crc kubenswrapper[4580]: I0112 13:20:44.724665 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhp2p\" (UniqueName: \"kubernetes.io/projected/54bb983d-7347-4bae-a852-d70d0ff091f4-kube-api-access-bhp2p\") pod \"glance-4a4b-account-create-update-fmvkt\" (UID: \"54bb983d-7347-4bae-a852-d70d0ff091f4\") " pod="openstack/glance-4a4b-account-create-update-fmvkt"
Jan 12 13:20:44 crc kubenswrapper[4580]: I0112 13:20:44.764319 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cd54-account-create-update-4xn8h"]
Jan 12 13:20:44 crc kubenswrapper[4580]: W0112 13:20:44.769260 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58ee3c58_c9e2_4be9_83d6_12c6d69801f9.slice/crio-b2614914711d08ef5f74453ebe8c99a8bac288383e1007040d2ffd713e622c6b WatchSource:0}: Error finding container b2614914711d08ef5f74453ebe8c99a8bac288383e1007040d2ffd713e622c6b: Status 404 returned error can't find the container with id b2614914711d08ef5f74453ebe8c99a8bac288383e1007040d2ffd713e622c6b
Jan 12 13:20:44 crc kubenswrapper[4580]: I0112 13:20:44.831468 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-4a4b-account-create-update-fmvkt"
Jan 12 13:20:44 crc kubenswrapper[4580]: I0112 13:20:44.858520 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-59d8f"]
Jan 12 13:20:44 crc kubenswrapper[4580]: I0112 13:20:44.914589 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/fb14d02e-b9af-4072-a2bd-2c2763d29755-etc-swift\") pod \"swift-storage-0\" (UID: \"fb14d02e-b9af-4072-a2bd-2c2763d29755\") " pod="openstack/swift-storage-0"
Jan 12 13:20:44 crc kubenswrapper[4580]: E0112 13:20:44.914822 4580 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 12 13:20:44 crc kubenswrapper[4580]: E0112 13:20:44.914839 4580 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 12 13:20:44 crc kubenswrapper[4580]: E0112 13:20:44.914881 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fb14d02e-b9af-4072-a2bd-2c2763d29755-etc-swift podName:fb14d02e-b9af-4072-a2bd-2c2763d29755 nodeName:}" failed. No retries permitted until 2026-01-12 13:20:52.914868432 +0000 UTC m=+851.959087122 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/fb14d02e-b9af-4072-a2bd-2c2763d29755-etc-swift") pod "swift-storage-0" (UID: "fb14d02e-b9af-4072-a2bd-2c2763d29755") : configmap "swift-ring-files" not found
Jan 12 13:20:45 crc kubenswrapper[4580]: I0112 13:20:45.030872 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-4284-account-create-update-gs8hr"]
Jan 12 13:20:45 crc kubenswrapper[4580]: W0112 13:20:45.036721 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb0ecccf2_f4b7_4606_afb9_b50486e02a0b.slice/crio-d1e659388e743cba9851af7fda3377412692d3b1e446ef66a3791641abf22b67 WatchSource:0}: Error finding container d1e659388e743cba9851af7fda3377412692d3b1e446ef66a3791641abf22b67: Status 404 returned error can't find the container with id d1e659388e743cba9851af7fda3377412692d3b1e446ef66a3791641abf22b67
Jan 12 13:20:45 crc kubenswrapper[4580]: I0112 13:20:45.104938 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-mcg5w"]
Jan 12 13:20:45 crc kubenswrapper[4580]: I0112 13:20:45.183542 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-4a4b-account-create-update-fmvkt"]
Jan 12 13:20:45 crc kubenswrapper[4580]: W0112 13:20:45.202114 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod54bb983d_7347_4bae_a852_d70d0ff091f4.slice/crio-963bbfae8d981e1464814ee850dfac7e7cbaca65190c23908db96c6ba60889a5 WatchSource:0}: Error finding container 963bbfae8d981e1464814ee850dfac7e7cbaca65190c23908db96c6ba60889a5: Status 404 returned error can't find the container with id 963bbfae8d981e1464814ee850dfac7e7cbaca65190c23908db96c6ba60889a5
Jan 12 13:20:45 crc kubenswrapper[4580]: I0112 13:20:45.279157 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-mcg5w" event={"ID":"90c2604d-2aab-4817-b1a3-f1ce2921fd7c","Type":"ContainerStarted","Data":"aad67d3b7f0a320c0d8489162c37f84d4e96aa645218a2c823f6566c284ec778"}
Jan 12 13:20:45 crc kubenswrapper[4580]: I0112 13:20:45.280377 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4a4b-account-create-update-fmvkt" event={"ID":"54bb983d-7347-4bae-a852-d70d0ff091f4","Type":"ContainerStarted","Data":"963bbfae8d981e1464814ee850dfac7e7cbaca65190c23908db96c6ba60889a5"}
Jan 12 13:20:45 crc kubenswrapper[4580]: I0112 13:20:45.283241 4580 generic.go:334] "Generic (PLEG): container finished" podID="2d042099-c4b5-4bcb-a1bb-3d1765c30e40" containerID="e2f9ea3c2e46a9662b63d40aed632d3bcd606808feecc35ba320037099b24eb7" exitCode=0
Jan 12 13:20:45 crc kubenswrapper[4580]: I0112 13:20:45.285785 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-t66g2"
Jan 12 13:20:45 crc kubenswrapper[4580]: I0112 13:20:45.288524 4580 generic.go:334] "Generic (PLEG): container finished" podID="58ee3c58-c9e2-4be9-83d6-12c6d69801f9" containerID="b42eb21de98096d3a22670005405ed03a64631745c03491e9f601ec593f77c69" exitCode=0
Jan 12 13:20:45 crc kubenswrapper[4580]: I0112 13:20:45.291995 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-59d8f" event={"ID":"2d042099-c4b5-4bcb-a1bb-3d1765c30e40","Type":"ContainerDied","Data":"e2f9ea3c2e46a9662b63d40aed632d3bcd606808feecc35ba320037099b24eb7"}
Jan 12 13:20:45 crc kubenswrapper[4580]: I0112 13:20:45.292026 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-59d8f" event={"ID":"2d042099-c4b5-4bcb-a1bb-3d1765c30e40","Type":"ContainerStarted","Data":"74d68250b7651f657d51f8bb7fa13a741d651da2446c070c7ba5b59618d16fae"}
Jan 12 13:20:45 crc kubenswrapper[4580]: I0112 13:20:45.292036 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-t66g2" event={"ID":"0c27e926-54c3-4757-af00-c2deb463d02c","Type":"ContainerDied","Data":"5b8d538144651ac6ac09cbc8635caf85d36455625eb592d2e56cd8b3c6694d8c"}
Jan 12 13:20:45 crc kubenswrapper[4580]: I0112 13:20:45.292046 4580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b8d538144651ac6ac09cbc8635caf85d36455625eb592d2e56cd8b3c6694d8c"
Jan 12 13:20:45 crc kubenswrapper[4580]: I0112 13:20:45.292054 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cd54-account-create-update-4xn8h" event={"ID":"58ee3c58-c9e2-4be9-83d6-12c6d69801f9","Type":"ContainerDied","Data":"b42eb21de98096d3a22670005405ed03a64631745c03491e9f601ec593f77c69"}
Jan 12 13:20:45 crc kubenswrapper[4580]: I0112 13:20:45.292063 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cd54-account-create-update-4xn8h" event={"ID":"58ee3c58-c9e2-4be9-83d6-12c6d69801f9","Type":"ContainerStarted","Data":"b2614914711d08ef5f74453ebe8c99a8bac288383e1007040d2ffd713e622c6b"}
Jan 12 13:20:45 crc kubenswrapper[4580]: I0112 13:20:45.292130 4580 generic.go:334] "Generic (PLEG): container finished" podID="06d41309-dddf-49bc-9512-44a8ffa9de38" containerID="207c701509200b9c85ea0ca2ffd4db8c0d71a5cf7b00c55cb4b907e136a1abcf" exitCode=0
Jan 12 13:20:45 crc kubenswrapper[4580]: I0112 13:20:45.292171 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-79cmh" event={"ID":"06d41309-dddf-49bc-9512-44a8ffa9de38","Type":"ContainerDied","Data":"207c701509200b9c85ea0ca2ffd4db8c0d71a5cf7b00c55cb4b907e136a1abcf"}
Jan 12 13:20:45 crc kubenswrapper[4580]: I0112 13:20:45.292196 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-79cmh" event={"ID":"06d41309-dddf-49bc-9512-44a8ffa9de38","Type":"ContainerStarted","Data":"bff7d9ef8e3b3a11ab8ad581917b7c447b9a89c24e773b00f44c83a13e421db5"}
Jan 12 13:20:45 crc kubenswrapper[4580]: I0112 13:20:45.294026 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4284-account-create-update-gs8hr" event={"ID":"b0ecccf2-f4b7-4606-afb9-b50486e02a0b","Type":"ContainerStarted","Data":"d1e659388e743cba9851af7fda3377412692d3b1e446ef66a3791641abf22b67"}
Jan 12 13:20:46 crc kubenswrapper[4580]: I0112 13:20:46.238209 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-67fdf7998c-ngwp9"
Jan 12 13:20:46 crc kubenswrapper[4580]: I0112 13:20:46.288062 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-cpjhh"]
Jan 12 13:20:46 crc kubenswrapper[4580]: I0112 13:20:46.288257 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-95f5f6995-cpjhh" podUID="b55248fd-efe4-466d-b3a3-fa9177008120" containerName="dnsmasq-dns" containerID="cri-o://e694b563bc6bb154c5c7be3ef26bbe1834dc6a8a91a06d3d28a366d088abb061" gracePeriod=10
Jan 12 13:20:46 crc kubenswrapper[4580]: I0112 13:20:46.308061 4580 generic.go:334] "Generic (PLEG): container finished" podID="b0ecccf2-f4b7-4606-afb9-b50486e02a0b" containerID="59e7e97714490c488dea268552d281ea855bb81ba1ca35de6faae31ed62d25ff" exitCode=0
Jan 12 13:20:46 crc kubenswrapper[4580]: I0112 13:20:46.308168 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4284-account-create-update-gs8hr" event={"ID":"b0ecccf2-f4b7-4606-afb9-b50486e02a0b","Type":"ContainerDied","Data":"59e7e97714490c488dea268552d281ea855bb81ba1ca35de6faae31ed62d25ff"}
Jan 12 13:20:46 crc kubenswrapper[4580]: I0112 13:20:46.310046 4580 generic.go:334] "Generic (PLEG): container finished" podID="90c2604d-2aab-4817-b1a3-f1ce2921fd7c" containerID="1bd5a121d72b5ffa6a25064675ff9340cc9481f91501bc34faaa7f3b2a5c401d" exitCode=0
Jan 12 13:20:46 crc kubenswrapper[4580]: I0112 13:20:46.310121 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-mcg5w" event={"ID":"90c2604d-2aab-4817-b1a3-f1ce2921fd7c","Type":"ContainerDied","Data":"1bd5a121d72b5ffa6a25064675ff9340cc9481f91501bc34faaa7f3b2a5c401d"}
Jan 12 13:20:46 crc kubenswrapper[4580]: I0112 13:20:46.315638 4580 generic.go:334] "Generic (PLEG): container finished" podID="54bb983d-7347-4bae-a852-d70d0ff091f4" containerID="3919f075e75d2f7e909386a5d84e06b14c879361613b21c989b655f76148e499" exitCode=0
Jan 12 13:20:46 crc kubenswrapper[4580]: I0112 13:20:46.315823 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4a4b-account-create-update-fmvkt" event={"ID":"54bb983d-7347-4bae-a852-d70d0ff091f4","Type":"ContainerDied","Data":"3919f075e75d2f7e909386a5d84e06b14c879361613b21c989b655f76148e499"}
Jan 12 13:20:46 crc kubenswrapper[4580]: I0112 13:20:46.662917 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-59d8f"
Jan 12 13:20:46 crc kubenswrapper[4580]: I0112 13:20:46.839095 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-cpjhh"
Jan 12 13:20:46 crc kubenswrapper[4580]: I0112 13:20:46.839355 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d042099-c4b5-4bcb-a1bb-3d1765c30e40-operator-scripts\") pod \"2d042099-c4b5-4bcb-a1bb-3d1765c30e40\" (UID: \"2d042099-c4b5-4bcb-a1bb-3d1765c30e40\") "
Jan 12 13:20:46 crc kubenswrapper[4580]: I0112 13:20:46.839411 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k48mz\" (UniqueName: \"kubernetes.io/projected/2d042099-c4b5-4bcb-a1bb-3d1765c30e40-kube-api-access-k48mz\") pod \"2d042099-c4b5-4bcb-a1bb-3d1765c30e40\" (UID: \"2d042099-c4b5-4bcb-a1bb-3d1765c30e40\") "
Jan 12 13:20:46 crc kubenswrapper[4580]: I0112 13:20:46.839769 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d042099-c4b5-4bcb-a1bb-3d1765c30e40-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2d042099-c4b5-4bcb-a1bb-3d1765c30e40" (UID: "2d042099-c4b5-4bcb-a1bb-3d1765c30e40"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:20:46 crc kubenswrapper[4580]: I0112 13:20:46.839951 4580 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d042099-c4b5-4bcb-a1bb-3d1765c30e40-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 12 13:20:46 crc kubenswrapper[4580]: I0112 13:20:46.842088 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-79cmh"
Jan 12 13:20:46 crc kubenswrapper[4580]: I0112 13:20:46.844125 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d042099-c4b5-4bcb-a1bb-3d1765c30e40-kube-api-access-k48mz" (OuterVolumeSpecName: "kube-api-access-k48mz") pod "2d042099-c4b5-4bcb-a1bb-3d1765c30e40" (UID: "2d042099-c4b5-4bcb-a1bb-3d1765c30e40"). InnerVolumeSpecName "kube-api-access-k48mz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 13:20:46 crc kubenswrapper[4580]: I0112 13:20:46.845938 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cd54-account-create-update-4xn8h"
Jan 12 13:20:46 crc kubenswrapper[4580]: I0112 13:20:46.940513 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2s4l\" (UniqueName: \"kubernetes.io/projected/06d41309-dddf-49bc-9512-44a8ffa9de38-kube-api-access-x2s4l\") pod \"06d41309-dddf-49bc-9512-44a8ffa9de38\" (UID: \"06d41309-dddf-49bc-9512-44a8ffa9de38\") "
Jan 12 13:20:46 crc kubenswrapper[4580]: I0112 13:20:46.940588 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b55248fd-efe4-466d-b3a3-fa9177008120-dns-svc\") pod \"b55248fd-efe4-466d-b3a3-fa9177008120\" (UID: \"b55248fd-efe4-466d-b3a3-fa9177008120\") "
Jan 12 13:20:46 crc kubenswrapper[4580]: I0112 13:20:46.940691 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzwmh\" (UniqueName: \"kubernetes.io/projected/b55248fd-efe4-466d-b3a3-fa9177008120-kube-api-access-tzwmh\") pod \"b55248fd-efe4-466d-b3a3-fa9177008120\" (UID: \"b55248fd-efe4-466d-b3a3-fa9177008120\") "
Jan 12 13:20:46 crc kubenswrapper[4580]: I0112 13:20:46.940792 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06d41309-dddf-49bc-9512-44a8ffa9de38-operator-scripts\") pod \"06d41309-dddf-49bc-9512-44a8ffa9de38\" (UID: \"06d41309-dddf-49bc-9512-44a8ffa9de38\") "
Jan 12 13:20:46 crc kubenswrapper[4580]: I0112 13:20:46.940858 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b55248fd-efe4-466d-b3a3-fa9177008120-config\") pod \"b55248fd-efe4-466d-b3a3-fa9177008120\" (UID: \"b55248fd-efe4-466d-b3a3-fa9177008120\") "
Jan 12 13:20:46 crc kubenswrapper[4580]: I0112 13:20:46.941257 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k48mz\" (UniqueName: \"kubernetes.io/projected/2d042099-c4b5-4bcb-a1bb-3d1765c30e40-kube-api-access-k48mz\") on node \"crc\" DevicePath \"\""
Jan 12 13:20:46 crc kubenswrapper[4580]: I0112 13:20:46.941298 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06d41309-dddf-49bc-9512-44a8ffa9de38-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "06d41309-dddf-49bc-9512-44a8ffa9de38" (UID: "06d41309-dddf-49bc-9512-44a8ffa9de38"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:20:46 crc kubenswrapper[4580]: I0112 13:20:46.943309 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06d41309-dddf-49bc-9512-44a8ffa9de38-kube-api-access-x2s4l" (OuterVolumeSpecName: "kube-api-access-x2s4l") pod "06d41309-dddf-49bc-9512-44a8ffa9de38" (UID: "06d41309-dddf-49bc-9512-44a8ffa9de38"). InnerVolumeSpecName "kube-api-access-x2s4l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 13:20:46 crc kubenswrapper[4580]: I0112 13:20:46.943945 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b55248fd-efe4-466d-b3a3-fa9177008120-kube-api-access-tzwmh" (OuterVolumeSpecName: "kube-api-access-tzwmh") pod "b55248fd-efe4-466d-b3a3-fa9177008120" (UID: "b55248fd-efe4-466d-b3a3-fa9177008120"). InnerVolumeSpecName "kube-api-access-tzwmh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 13:20:46 crc kubenswrapper[4580]: I0112 13:20:46.950500 4580 patch_prober.go:28] interesting pod/machine-config-daemon-hdz6l container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 12 13:20:46 crc kubenswrapper[4580]: I0112 13:20:46.950537 4580 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 12 13:20:46 crc kubenswrapper[4580]: I0112 13:20:46.950572 4580 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l"
Jan 12 13:20:46 crc kubenswrapper[4580]: I0112 13:20:46.951029 4580 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7850824be06012c20b6bb245ac92cc464dbe596b5ce9364073d6add3fc0a822e"} pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 12 13:20:46 crc kubenswrapper[4580]: I0112 13:20:46.951071 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" containerName="machine-config-daemon" containerID="cri-o://7850824be06012c20b6bb245ac92cc464dbe596b5ce9364073d6add3fc0a822e" gracePeriod=600
Jan 12 13:20:46 crc kubenswrapper[4580]: I0112 13:20:46.968971 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b55248fd-efe4-466d-b3a3-fa9177008120-config" (OuterVolumeSpecName: "config") pod "b55248fd-efe4-466d-b3a3-fa9177008120" (UID: "b55248fd-efe4-466d-b3a3-fa9177008120"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:20:46 crc kubenswrapper[4580]: I0112 13:20:46.969758 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b55248fd-efe4-466d-b3a3-fa9177008120-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b55248fd-efe4-466d-b3a3-fa9177008120" (UID: "b55248fd-efe4-466d-b3a3-fa9177008120"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:20:47 crc kubenswrapper[4580]: I0112 13:20:47.042815 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qr8rd\" (UniqueName: \"kubernetes.io/projected/58ee3c58-c9e2-4be9-83d6-12c6d69801f9-kube-api-access-qr8rd\") pod \"58ee3c58-c9e2-4be9-83d6-12c6d69801f9\" (UID: \"58ee3c58-c9e2-4be9-83d6-12c6d69801f9\") "
Jan 12 13:20:47 crc kubenswrapper[4580]: I0112 13:20:47.042896 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58ee3c58-c9e2-4be9-83d6-12c6d69801f9-operator-scripts\") pod \"58ee3c58-c9e2-4be9-83d6-12c6d69801f9\" (UID: \"58ee3c58-c9e2-4be9-83d6-12c6d69801f9\") "
Jan 12 13:20:47 crc kubenswrapper[4580]: I0112 13:20:47.043225 4580 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06d41309-dddf-49bc-9512-44a8ffa9de38-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 12 13:20:47 crc kubenswrapper[4580]: I0112 13:20:47.043243 4580 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b55248fd-efe4-466d-b3a3-fa9177008120-config\") on node \"crc\" DevicePath \"\""
Jan 12 13:20:47 crc kubenswrapper[4580]: I0112 13:20:47.043252 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2s4l\" (UniqueName: \"kubernetes.io/projected/06d41309-dddf-49bc-9512-44a8ffa9de38-kube-api-access-x2s4l\") on node \"crc\" DevicePath \"\""
Jan 12 13:20:47 crc kubenswrapper[4580]: I0112 13:20:47.043261 4580 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b55248fd-efe4-466d-b3a3-fa9177008120-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 12 13:20:47 crc kubenswrapper[4580]: I0112 13:20:47.043269 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzwmh\" (UniqueName: \"kubernetes.io/projected/b55248fd-efe4-466d-b3a3-fa9177008120-kube-api-access-tzwmh\") on node \"crc\" DevicePath \"\""
Jan 12 13:20:47 crc kubenswrapper[4580]: I0112 13:20:47.043279 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58ee3c58-c9e2-4be9-83d6-12c6d69801f9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "58ee3c58-c9e2-4be9-83d6-12c6d69801f9" (UID: "58ee3c58-c9e2-4be9-83d6-12c6d69801f9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:20:47 crc kubenswrapper[4580]: I0112 13:20:47.044953 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58ee3c58-c9e2-4be9-83d6-12c6d69801f9-kube-api-access-qr8rd" (OuterVolumeSpecName: "kube-api-access-qr8rd") pod "58ee3c58-c9e2-4be9-83d6-12c6d69801f9" (UID: "58ee3c58-c9e2-4be9-83d6-12c6d69801f9"). InnerVolumeSpecName "kube-api-access-qr8rd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 13:20:47 crc kubenswrapper[4580]: I0112 13:20:47.144661 4580 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58ee3c58-c9e2-4be9-83d6-12c6d69801f9-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 12 13:20:47 crc kubenswrapper[4580]: I0112 13:20:47.144751 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qr8rd\" (UniqueName: \"kubernetes.io/projected/58ee3c58-c9e2-4be9-83d6-12c6d69801f9-kube-api-access-qr8rd\") on node \"crc\" DevicePath \"\""
Jan 12 13:20:47 crc kubenswrapper[4580]: I0112 13:20:47.325929 4580 generic.go:334] "Generic (PLEG): container finished" podID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" containerID="7850824be06012c20b6bb245ac92cc464dbe596b5ce9364073d6add3fc0a822e" exitCode=0
Jan 12 13:20:47 crc kubenswrapper[4580]: I0112 13:20:47.326132 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" event={"ID":"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b","Type":"ContainerDied","Data":"7850824be06012c20b6bb245ac92cc464dbe596b5ce9364073d6add3fc0a822e"}
Jan 12 13:20:47 crc kubenswrapper[4580]: I0112 13:20:47.326197 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" event={"ID":"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b","Type":"ContainerStarted","Data":"62195f179f376ea4916eddf796027fa5a80271672d3171f47fa9237f1c01b2a4"}
Jan 12 13:20:47 crc kubenswrapper[4580]: I0112 13:20:47.326212 4580 scope.go:117] "RemoveContainer" containerID="4e7364093541422d6527d483a8a4570a7b048dfd23774d35d5dc7c8fcdefe657"
Jan 12 13:20:47 crc kubenswrapper[4580]: I0112 13:20:47.327970 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cd54-account-create-update-4xn8h" event={"ID":"58ee3c58-c9e2-4be9-83d6-12c6d69801f9","Type":"ContainerDied","Data":"b2614914711d08ef5f74453ebe8c99a8bac288383e1007040d2ffd713e622c6b"}
Jan 12 13:20:47 crc kubenswrapper[4580]: I0112 13:20:47.327997 4580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2614914711d08ef5f74453ebe8c99a8bac288383e1007040d2ffd713e622c6b"
Jan 12 13:20:47 crc kubenswrapper[4580]: I0112 13:20:47.328060 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cd54-account-create-update-4xn8h"
Jan 12 13:20:47 crc kubenswrapper[4580]: I0112 13:20:47.330078 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-79cmh" event={"ID":"06d41309-dddf-49bc-9512-44a8ffa9de38","Type":"ContainerDied","Data":"bff7d9ef8e3b3a11ab8ad581917b7c447b9a89c24e773b00f44c83a13e421db5"}
Jan 12 13:20:47 crc kubenswrapper[4580]: I0112 13:20:47.330093 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-79cmh"
Jan 12 13:20:47 crc kubenswrapper[4580]: I0112 13:20:47.330168 4580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bff7d9ef8e3b3a11ab8ad581917b7c447b9a89c24e773b00f44c83a13e421db5"
Jan 12 13:20:47 crc kubenswrapper[4580]: I0112 13:20:47.332332 4580 generic.go:334] "Generic (PLEG): container finished" podID="b55248fd-efe4-466d-b3a3-fa9177008120" containerID="e694b563bc6bb154c5c7be3ef26bbe1834dc6a8a91a06d3d28a366d088abb061" exitCode=0
Jan 12 13:20:47 crc kubenswrapper[4580]: I0112 13:20:47.332395 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-cpjhh" event={"ID":"b55248fd-efe4-466d-b3a3-fa9177008120","Type":"ContainerDied","Data":"e694b563bc6bb154c5c7be3ef26bbe1834dc6a8a91a06d3d28a366d088abb061"}
Jan 12 13:20:47 crc kubenswrapper[4580]: I0112 13:20:47.332417 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-cpjhh" event={"ID":"b55248fd-efe4-466d-b3a3-fa9177008120","Type":"ContainerDied","Data":"bf286ce80311e4269a21fd4a99481b618091a044f937c6471703f8600b5616af"}
Jan 12 13:20:47 crc kubenswrapper[4580]: I0112 13:20:47.332472 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-cpjhh"
Jan 12 13:20:47 crc kubenswrapper[4580]: I0112 13:20:47.334476 4580 generic.go:334] "Generic (PLEG): container finished" podID="91a2e8de-56e6-41e5-a8fa-a576e8970ebd" containerID="9865005bdc53fbc4e61324bef222e352b3bba33af3cc2abf2bdee1d4ef7f83c9" exitCode=0
Jan 12 13:20:47 crc kubenswrapper[4580]: I0112 13:20:47.334528 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-cplpv" event={"ID":"91a2e8de-56e6-41e5-a8fa-a576e8970ebd","Type":"ContainerDied","Data":"9865005bdc53fbc4e61324bef222e352b3bba33af3cc2abf2bdee1d4ef7f83c9"}
Jan 12 13:20:47 crc kubenswrapper[4580]: I0112 13:20:47.339260 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-59d8f"
Jan 12 13:20:47 crc kubenswrapper[4580]: I0112 13:20:47.341758 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-59d8f" event={"ID":"2d042099-c4b5-4bcb-a1bb-3d1765c30e40","Type":"ContainerDied","Data":"74d68250b7651f657d51f8bb7fa13a741d651da2446c070c7ba5b59618d16fae"}
Jan 12 13:20:47 crc kubenswrapper[4580]: I0112 13:20:47.341775 4580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74d68250b7651f657d51f8bb7fa13a741d651da2446c070c7ba5b59618d16fae"
Jan 12 13:20:47 crc kubenswrapper[4580]: I0112 13:20:47.347974 4580 scope.go:117] "RemoveContainer" containerID="e694b563bc6bb154c5c7be3ef26bbe1834dc6a8a91a06d3d28a366d088abb061"
Jan 12 13:20:47 crc kubenswrapper[4580]: I0112 13:20:47.367158 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-cpjhh"]
Jan 12 13:20:47 crc kubenswrapper[4580]: I0112 13:20:47.375337 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-cpjhh"]
Jan 12 13:20:47 crc kubenswrapper[4580]: I0112 13:20:47.379047 4580 scope.go:117] "RemoveContainer" containerID="6ff3fb20561a87f09c95a3357bdb78f1fc47a01674c1e0f26c67a4bfab678dc6"
Jan 12 13:20:47 crc kubenswrapper[4580]: I0112 13:20:47.395797 4580 scope.go:117] "RemoveContainer" containerID="e694b563bc6bb154c5c7be3ef26bbe1834dc6a8a91a06d3d28a366d088abb061"
Jan 12 13:20:47 crc kubenswrapper[4580]: E0112 13:20:47.396073 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e694b563bc6bb154c5c7be3ef26bbe1834dc6a8a91a06d3d28a366d088abb061\": container with ID starting with e694b563bc6bb154c5c7be3ef26bbe1834dc6a8a91a06d3d28a366d088abb061 not found: ID does not exist" containerID="e694b563bc6bb154c5c7be3ef26bbe1834dc6a8a91a06d3d28a366d088abb061"
Jan 12 13:20:47 crc kubenswrapper[4580]: I0112 13:20:47.396097 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e694b563bc6bb154c5c7be3ef26bbe1834dc6a8a91a06d3d28a366d088abb061"} err="failed to get container status \"e694b563bc6bb154c5c7be3ef26bbe1834dc6a8a91a06d3d28a366d088abb061\": rpc error: code = NotFound desc = could not find container \"e694b563bc6bb154c5c7be3ef26bbe1834dc6a8a91a06d3d28a366d088abb061\": container with ID starting with e694b563bc6bb154c5c7be3ef26bbe1834dc6a8a91a06d3d28a366d088abb061 not found: ID does not exist"
Jan 12 13:20:47 crc kubenswrapper[4580]: I0112 13:20:47.396136 4580 scope.go:117] "RemoveContainer" containerID="6ff3fb20561a87f09c95a3357bdb78f1fc47a01674c1e0f26c67a4bfab678dc6"
Jan 12 13:20:47 crc kubenswrapper[4580]: E0112 13:20:47.396521 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ff3fb20561a87f09c95a3357bdb78f1fc47a01674c1e0f26c67a4bfab678dc6\": container with ID starting with 6ff3fb20561a87f09c95a3357bdb78f1fc47a01674c1e0f26c67a4bfab678dc6 not found: ID does not exist" containerID="6ff3fb20561a87f09c95a3357bdb78f1fc47a01674c1e0f26c67a4bfab678dc6"
Jan 12 13:20:47 crc kubenswrapper[4580]: I0112 13:20:47.396545 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ff3fb20561a87f09c95a3357bdb78f1fc47a01674c1e0f26c67a4bfab678dc6"} err="failed to get container status \"6ff3fb20561a87f09c95a3357bdb78f1fc47a01674c1e0f26c67a4bfab678dc6\": rpc error: code = NotFound desc = could not find container \"6ff3fb20561a87f09c95a3357bdb78f1fc47a01674c1e0f26c67a4bfab678dc6\": container with ID starting with 6ff3fb20561a87f09c95a3357bdb78f1fc47a01674c1e0f26c67a4bfab678dc6 not found: ID does not exist"
Jan 12 13:20:47 crc kubenswrapper[4580]: I0112 13:20:47.669024 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-t66g2"]
Jan 12 13:20:47 crc kubenswrapper[4580]: I0112 13:20:47.673151 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-t66g2"]
Jan 12 13:20:47 crc kubenswrapper[4580]: I0112 13:20:47.678000 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-4284-account-create-update-gs8hr"
Jan 12 13:20:47 crc kubenswrapper[4580]: I0112 13:20:47.682971 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-4a4b-account-create-update-fmvkt"
Jan 12 13:20:47 crc kubenswrapper[4580]: I0112 13:20:47.685285 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-mcg5w"
Jan 12 13:20:47 crc kubenswrapper[4580]: I0112 13:20:47.857312 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90c2604d-2aab-4817-b1a3-f1ce2921fd7c-operator-scripts\") pod \"90c2604d-2aab-4817-b1a3-f1ce2921fd7c\" (UID: \"90c2604d-2aab-4817-b1a3-f1ce2921fd7c\") "
Jan 12 13:20:47 crc kubenswrapper[4580]: I0112 13:20:47.857381 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0ecccf2-f4b7-4606-afb9-b50486e02a0b-operator-scripts\") pod \"b0ecccf2-f4b7-4606-afb9-b50486e02a0b\" (UID: \"b0ecccf2-f4b7-4606-afb9-b50486e02a0b\") "
Jan 12 13:20:47 crc kubenswrapper[4580]: I0112 13:20:47.857398 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhp2p\" (UniqueName: \"kubernetes.io/projected/54bb983d-7347-4bae-a852-d70d0ff091f4-kube-api-access-bhp2p\") pod \"54bb983d-7347-4bae-a852-d70d0ff091f4\" (UID: \"54bb983d-7347-4bae-a852-d70d0ff091f4\") "
Jan 12 13:20:47 crc kubenswrapper[4580]: I0112 13:20:47.857414 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfp6t\" (UniqueName: \"kubernetes.io/projected/90c2604d-2aab-4817-b1a3-f1ce2921fd7c-kube-api-access-vfp6t\") pod \"90c2604d-2aab-4817-b1a3-f1ce2921fd7c\" (UID: \"90c2604d-2aab-4817-b1a3-f1ce2921fd7c\") "
Jan 12 13:20:47 crc kubenswrapper[4580]: I0112 13:20:47.857453 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cs97w\" (UniqueName: \"kubernetes.io/projected/b0ecccf2-f4b7-4606-afb9-b50486e02a0b-kube-api-access-cs97w\") pod \"b0ecccf2-f4b7-4606-afb9-b50486e02a0b\" (UID: \"b0ecccf2-f4b7-4606-afb9-b50486e02a0b\") "
Jan 12 13:20:47 crc kubenswrapper[4580]: I0112 13:20:47.857470 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54bb983d-7347-4bae-a852-d70d0ff091f4-operator-scripts\") pod \"54bb983d-7347-4bae-a852-d70d0ff091f4\" (UID: \"54bb983d-7347-4bae-a852-d70d0ff091f4\") "
Jan 12 13:20:47 crc kubenswrapper[4580]: I0112 13:20:47.858013 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0ecccf2-f4b7-4606-afb9-b50486e02a0b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b0ecccf2-f4b7-4606-afb9-b50486e02a0b" (UID: "b0ecccf2-f4b7-4606-afb9-b50486e02a0b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:20:47 crc kubenswrapper[4580]: I0112 13:20:47.858089 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54bb983d-7347-4bae-a852-d70d0ff091f4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "54bb983d-7347-4bae-a852-d70d0ff091f4" (UID: "54bb983d-7347-4bae-a852-d70d0ff091f4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:20:47 crc kubenswrapper[4580]: I0112 13:20:47.858314 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90c2604d-2aab-4817-b1a3-f1ce2921fd7c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "90c2604d-2aab-4817-b1a3-f1ce2921fd7c" (UID: "90c2604d-2aab-4817-b1a3-f1ce2921fd7c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:20:47 crc kubenswrapper[4580]: I0112 13:20:47.861970 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54bb983d-7347-4bae-a852-d70d0ff091f4-kube-api-access-bhp2p" (OuterVolumeSpecName: "kube-api-access-bhp2p") pod "54bb983d-7347-4bae-a852-d70d0ff091f4" (UID: "54bb983d-7347-4bae-a852-d70d0ff091f4").
InnerVolumeSpecName "kube-api-access-bhp2p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:20:47 crc kubenswrapper[4580]: I0112 13:20:47.862191 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90c2604d-2aab-4817-b1a3-f1ce2921fd7c-kube-api-access-vfp6t" (OuterVolumeSpecName: "kube-api-access-vfp6t") pod "90c2604d-2aab-4817-b1a3-f1ce2921fd7c" (UID: "90c2604d-2aab-4817-b1a3-f1ce2921fd7c"). InnerVolumeSpecName "kube-api-access-vfp6t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:20:47 crc kubenswrapper[4580]: I0112 13:20:47.862216 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0ecccf2-f4b7-4606-afb9-b50486e02a0b-kube-api-access-cs97w" (OuterVolumeSpecName: "kube-api-access-cs97w") pod "b0ecccf2-f4b7-4606-afb9-b50486e02a0b" (UID: "b0ecccf2-f4b7-4606-afb9-b50486e02a0b"). InnerVolumeSpecName "kube-api-access-cs97w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:20:47 crc kubenswrapper[4580]: I0112 13:20:47.958829 4580 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90c2604d-2aab-4817-b1a3-f1ce2921fd7c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 12 13:20:47 crc kubenswrapper[4580]: I0112 13:20:47.958864 4580 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0ecccf2-f4b7-4606-afb9-b50486e02a0b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 12 13:20:47 crc kubenswrapper[4580]: I0112 13:20:47.958873 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhp2p\" (UniqueName: \"kubernetes.io/projected/54bb983d-7347-4bae-a852-d70d0ff091f4-kube-api-access-bhp2p\") on node \"crc\" DevicePath \"\"" Jan 12 13:20:47 crc kubenswrapper[4580]: I0112 13:20:47.958885 4580 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-vfp6t\" (UniqueName: \"kubernetes.io/projected/90c2604d-2aab-4817-b1a3-f1ce2921fd7c-kube-api-access-vfp6t\") on node \"crc\" DevicePath \"\"" Jan 12 13:20:47 crc kubenswrapper[4580]: I0112 13:20:47.958895 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cs97w\" (UniqueName: \"kubernetes.io/projected/b0ecccf2-f4b7-4606-afb9-b50486e02a0b-kube-api-access-cs97w\") on node \"crc\" DevicePath \"\"" Jan 12 13:20:47 crc kubenswrapper[4580]: I0112 13:20:47.958902 4580 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54bb983d-7347-4bae-a852-d70d0ff091f4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 12 13:20:48 crc kubenswrapper[4580]: I0112 13:20:48.346004 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4284-account-create-update-gs8hr" event={"ID":"b0ecccf2-f4b7-4606-afb9-b50486e02a0b","Type":"ContainerDied","Data":"d1e659388e743cba9851af7fda3377412692d3b1e446ef66a3791641abf22b67"} Jan 12 13:20:48 crc kubenswrapper[4580]: I0112 13:20:48.346191 4580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1e659388e743cba9851af7fda3377412692d3b1e446ef66a3791641abf22b67" Jan 12 13:20:48 crc kubenswrapper[4580]: I0112 13:20:48.346016 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-4284-account-create-update-gs8hr" Jan 12 13:20:48 crc kubenswrapper[4580]: I0112 13:20:48.348202 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-mcg5w" Jan 12 13:20:48 crc kubenswrapper[4580]: I0112 13:20:48.348195 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-mcg5w" event={"ID":"90c2604d-2aab-4817-b1a3-f1ce2921fd7c","Type":"ContainerDied","Data":"aad67d3b7f0a320c0d8489162c37f84d4e96aa645218a2c823f6566c284ec778"} Jan 12 13:20:48 crc kubenswrapper[4580]: I0112 13:20:48.348347 4580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aad67d3b7f0a320c0d8489162c37f84d4e96aa645218a2c823f6566c284ec778" Jan 12 13:20:48 crc kubenswrapper[4580]: I0112 13:20:48.349371 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4a4b-account-create-update-fmvkt" event={"ID":"54bb983d-7347-4bae-a852-d70d0ff091f4","Type":"ContainerDied","Data":"963bbfae8d981e1464814ee850dfac7e7cbaca65190c23908db96c6ba60889a5"} Jan 12 13:20:48 crc kubenswrapper[4580]: I0112 13:20:48.349485 4580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="963bbfae8d981e1464814ee850dfac7e7cbaca65190c23908db96c6ba60889a5" Jan 12 13:20:48 crc kubenswrapper[4580]: I0112 13:20:48.349530 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-4a4b-account-create-update-fmvkt" Jan 12 13:20:48 crc kubenswrapper[4580]: I0112 13:20:48.595749 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-cplpv" Jan 12 13:20:48 crc kubenswrapper[4580]: I0112 13:20:48.768767 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/91a2e8de-56e6-41e5-a8fa-a576e8970ebd-ring-data-devices\") pod \"91a2e8de-56e6-41e5-a8fa-a576e8970ebd\" (UID: \"91a2e8de-56e6-41e5-a8fa-a576e8970ebd\") " Jan 12 13:20:48 crc kubenswrapper[4580]: I0112 13:20:48.768838 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/91a2e8de-56e6-41e5-a8fa-a576e8970ebd-scripts\") pod \"91a2e8de-56e6-41e5-a8fa-a576e8970ebd\" (UID: \"91a2e8de-56e6-41e5-a8fa-a576e8970ebd\") " Jan 12 13:20:48 crc kubenswrapper[4580]: I0112 13:20:48.768860 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/91a2e8de-56e6-41e5-a8fa-a576e8970ebd-etc-swift\") pod \"91a2e8de-56e6-41e5-a8fa-a576e8970ebd\" (UID: \"91a2e8de-56e6-41e5-a8fa-a576e8970ebd\") " Jan 12 13:20:48 crc kubenswrapper[4580]: I0112 13:20:48.768904 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/91a2e8de-56e6-41e5-a8fa-a576e8970ebd-dispersionconf\") pod \"91a2e8de-56e6-41e5-a8fa-a576e8970ebd\" (UID: \"91a2e8de-56e6-41e5-a8fa-a576e8970ebd\") " Jan 12 13:20:48 crc kubenswrapper[4580]: I0112 13:20:48.768981 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmdgb\" (UniqueName: \"kubernetes.io/projected/91a2e8de-56e6-41e5-a8fa-a576e8970ebd-kube-api-access-zmdgb\") pod \"91a2e8de-56e6-41e5-a8fa-a576e8970ebd\" (UID: \"91a2e8de-56e6-41e5-a8fa-a576e8970ebd\") " Jan 12 13:20:48 crc kubenswrapper[4580]: I0112 13:20:48.769016 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" 
(UniqueName: \"kubernetes.io/secret/91a2e8de-56e6-41e5-a8fa-a576e8970ebd-swiftconf\") pod \"91a2e8de-56e6-41e5-a8fa-a576e8970ebd\" (UID: \"91a2e8de-56e6-41e5-a8fa-a576e8970ebd\") " Jan 12 13:20:48 crc kubenswrapper[4580]: I0112 13:20:48.769035 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91a2e8de-56e6-41e5-a8fa-a576e8970ebd-combined-ca-bundle\") pod \"91a2e8de-56e6-41e5-a8fa-a576e8970ebd\" (UID: \"91a2e8de-56e6-41e5-a8fa-a576e8970ebd\") " Jan 12 13:20:48 crc kubenswrapper[4580]: I0112 13:20:48.769718 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91a2e8de-56e6-41e5-a8fa-a576e8970ebd-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "91a2e8de-56e6-41e5-a8fa-a576e8970ebd" (UID: "91a2e8de-56e6-41e5-a8fa-a576e8970ebd"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:20:48 crc kubenswrapper[4580]: I0112 13:20:48.769895 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91a2e8de-56e6-41e5-a8fa-a576e8970ebd-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "91a2e8de-56e6-41e5-a8fa-a576e8970ebd" (UID: "91a2e8de-56e6-41e5-a8fa-a576e8970ebd"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 12 13:20:48 crc kubenswrapper[4580]: I0112 13:20:48.773449 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91a2e8de-56e6-41e5-a8fa-a576e8970ebd-kube-api-access-zmdgb" (OuterVolumeSpecName: "kube-api-access-zmdgb") pod "91a2e8de-56e6-41e5-a8fa-a576e8970ebd" (UID: "91a2e8de-56e6-41e5-a8fa-a576e8970ebd"). InnerVolumeSpecName "kube-api-access-zmdgb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:20:48 crc kubenswrapper[4580]: I0112 13:20:48.775838 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91a2e8de-56e6-41e5-a8fa-a576e8970ebd-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "91a2e8de-56e6-41e5-a8fa-a576e8970ebd" (UID: "91a2e8de-56e6-41e5-a8fa-a576e8970ebd"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:20:48 crc kubenswrapper[4580]: I0112 13:20:48.785095 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91a2e8de-56e6-41e5-a8fa-a576e8970ebd-scripts" (OuterVolumeSpecName: "scripts") pod "91a2e8de-56e6-41e5-a8fa-a576e8970ebd" (UID: "91a2e8de-56e6-41e5-a8fa-a576e8970ebd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:20:48 crc kubenswrapper[4580]: I0112 13:20:48.786342 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91a2e8de-56e6-41e5-a8fa-a576e8970ebd-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "91a2e8de-56e6-41e5-a8fa-a576e8970ebd" (UID: "91a2e8de-56e6-41e5-a8fa-a576e8970ebd"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:20:48 crc kubenswrapper[4580]: I0112 13:20:48.786629 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91a2e8de-56e6-41e5-a8fa-a576e8970ebd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "91a2e8de-56e6-41e5-a8fa-a576e8970ebd" (UID: "91a2e8de-56e6-41e5-a8fa-a576e8970ebd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:20:48 crc kubenswrapper[4580]: I0112 13:20:48.870143 4580 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/91a2e8de-56e6-41e5-a8fa-a576e8970ebd-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 12 13:20:48 crc kubenswrapper[4580]: I0112 13:20:48.870169 4580 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91a2e8de-56e6-41e5-a8fa-a576e8970ebd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 12 13:20:48 crc kubenswrapper[4580]: I0112 13:20:48.870180 4580 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/91a2e8de-56e6-41e5-a8fa-a576e8970ebd-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 12 13:20:48 crc kubenswrapper[4580]: I0112 13:20:48.870188 4580 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/91a2e8de-56e6-41e5-a8fa-a576e8970ebd-scripts\") on node \"crc\" DevicePath \"\"" Jan 12 13:20:48 crc kubenswrapper[4580]: I0112 13:20:48.870196 4580 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/91a2e8de-56e6-41e5-a8fa-a576e8970ebd-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 12 13:20:48 crc kubenswrapper[4580]: I0112 13:20:48.870204 4580 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/91a2e8de-56e6-41e5-a8fa-a576e8970ebd-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 12 13:20:48 crc kubenswrapper[4580]: I0112 13:20:48.870213 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmdgb\" (UniqueName: \"kubernetes.io/projected/91a2e8de-56e6-41e5-a8fa-a576e8970ebd-kube-api-access-zmdgb\") on node \"crc\" DevicePath \"\"" Jan 12 13:20:49 crc kubenswrapper[4580]: I0112 13:20:49.289156 4580 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c27e926-54c3-4757-af00-c2deb463d02c" path="/var/lib/kubelet/pods/0c27e926-54c3-4757-af00-c2deb463d02c/volumes" Jan 12 13:20:49 crc kubenswrapper[4580]: I0112 13:20:49.289651 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b55248fd-efe4-466d-b3a3-fa9177008120" path="/var/lib/kubelet/pods/b55248fd-efe4-466d-b3a3-fa9177008120/volumes" Jan 12 13:20:49 crc kubenswrapper[4580]: I0112 13:20:49.358041 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-cplpv" event={"ID":"91a2e8de-56e6-41e5-a8fa-a576e8970ebd","Type":"ContainerDied","Data":"7e5f89dbd6969c37b4a3a1eff2b52c5706307a0efaf3a3cba08bd4fffef493fe"} Jan 12 13:20:49 crc kubenswrapper[4580]: I0112 13:20:49.358223 4580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e5f89dbd6969c37b4a3a1eff2b52c5706307a0efaf3a3cba08bd4fffef493fe" Jan 12 13:20:49 crc kubenswrapper[4580]: I0112 13:20:49.358274 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-cplpv" Jan 12 13:20:49 crc kubenswrapper[4580]: I0112 13:20:49.802339 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-wxmgx"] Jan 12 13:20:49 crc kubenswrapper[4580]: E0112 13:20:49.802606 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91a2e8de-56e6-41e5-a8fa-a576e8970ebd" containerName="swift-ring-rebalance" Jan 12 13:20:49 crc kubenswrapper[4580]: I0112 13:20:49.802625 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="91a2e8de-56e6-41e5-a8fa-a576e8970ebd" containerName="swift-ring-rebalance" Jan 12 13:20:49 crc kubenswrapper[4580]: E0112 13:20:49.802647 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b55248fd-efe4-466d-b3a3-fa9177008120" containerName="dnsmasq-dns" Jan 12 13:20:49 crc kubenswrapper[4580]: I0112 13:20:49.802653 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="b55248fd-efe4-466d-b3a3-fa9177008120" containerName="dnsmasq-dns" Jan 12 13:20:49 crc kubenswrapper[4580]: E0112 13:20:49.802662 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90c2604d-2aab-4817-b1a3-f1ce2921fd7c" containerName="mariadb-database-create" Jan 12 13:20:49 crc kubenswrapper[4580]: I0112 13:20:49.802668 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="90c2604d-2aab-4817-b1a3-f1ce2921fd7c" containerName="mariadb-database-create" Jan 12 13:20:49 crc kubenswrapper[4580]: E0112 13:20:49.802681 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54bb983d-7347-4bae-a852-d70d0ff091f4" containerName="mariadb-account-create-update" Jan 12 13:20:49 crc kubenswrapper[4580]: I0112 13:20:49.802686 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="54bb983d-7347-4bae-a852-d70d0ff091f4" containerName="mariadb-account-create-update" Jan 12 13:20:49 crc kubenswrapper[4580]: E0112 13:20:49.802696 4580 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="58ee3c58-c9e2-4be9-83d6-12c6d69801f9" containerName="mariadb-account-create-update" Jan 12 13:20:49 crc kubenswrapper[4580]: I0112 13:20:49.802701 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="58ee3c58-c9e2-4be9-83d6-12c6d69801f9" containerName="mariadb-account-create-update" Jan 12 13:20:49 crc kubenswrapper[4580]: E0112 13:20:49.802722 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d042099-c4b5-4bcb-a1bb-3d1765c30e40" containerName="mariadb-database-create" Jan 12 13:20:49 crc kubenswrapper[4580]: I0112 13:20:49.802727 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d042099-c4b5-4bcb-a1bb-3d1765c30e40" containerName="mariadb-database-create" Jan 12 13:20:49 crc kubenswrapper[4580]: E0112 13:20:49.802736 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c27e926-54c3-4757-af00-c2deb463d02c" containerName="mariadb-account-create-update" Jan 12 13:20:49 crc kubenswrapper[4580]: I0112 13:20:49.802741 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c27e926-54c3-4757-af00-c2deb463d02c" containerName="mariadb-account-create-update" Jan 12 13:20:49 crc kubenswrapper[4580]: E0112 13:20:49.802751 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b55248fd-efe4-466d-b3a3-fa9177008120" containerName="init" Jan 12 13:20:49 crc kubenswrapper[4580]: I0112 13:20:49.802757 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="b55248fd-efe4-466d-b3a3-fa9177008120" containerName="init" Jan 12 13:20:49 crc kubenswrapper[4580]: E0112 13:20:49.802769 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0ecccf2-f4b7-4606-afb9-b50486e02a0b" containerName="mariadb-account-create-update" Jan 12 13:20:49 crc kubenswrapper[4580]: I0112 13:20:49.802775 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0ecccf2-f4b7-4606-afb9-b50486e02a0b" containerName="mariadb-account-create-update" Jan 12 13:20:49 crc kubenswrapper[4580]: E0112 13:20:49.802789 
4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06d41309-dddf-49bc-9512-44a8ffa9de38" containerName="mariadb-database-create" Jan 12 13:20:49 crc kubenswrapper[4580]: I0112 13:20:49.802794 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="06d41309-dddf-49bc-9512-44a8ffa9de38" containerName="mariadb-database-create" Jan 12 13:20:49 crc kubenswrapper[4580]: I0112 13:20:49.802932 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="06d41309-dddf-49bc-9512-44a8ffa9de38" containerName="mariadb-database-create" Jan 12 13:20:49 crc kubenswrapper[4580]: I0112 13:20:49.802946 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="91a2e8de-56e6-41e5-a8fa-a576e8970ebd" containerName="swift-ring-rebalance" Jan 12 13:20:49 crc kubenswrapper[4580]: I0112 13:20:49.802957 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="58ee3c58-c9e2-4be9-83d6-12c6d69801f9" containerName="mariadb-account-create-update" Jan 12 13:20:49 crc kubenswrapper[4580]: I0112 13:20:49.802964 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="90c2604d-2aab-4817-b1a3-f1ce2921fd7c" containerName="mariadb-database-create" Jan 12 13:20:49 crc kubenswrapper[4580]: I0112 13:20:49.802972 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0ecccf2-f4b7-4606-afb9-b50486e02a0b" containerName="mariadb-account-create-update" Jan 12 13:20:49 crc kubenswrapper[4580]: I0112 13:20:49.802981 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="54bb983d-7347-4bae-a852-d70d0ff091f4" containerName="mariadb-account-create-update" Jan 12 13:20:49 crc kubenswrapper[4580]: I0112 13:20:49.802989 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="b55248fd-efe4-466d-b3a3-fa9177008120" containerName="dnsmasq-dns" Jan 12 13:20:49 crc kubenswrapper[4580]: I0112 13:20:49.802998 4580 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="2d042099-c4b5-4bcb-a1bb-3d1765c30e40" containerName="mariadb-database-create" Jan 12 13:20:49 crc kubenswrapper[4580]: I0112 13:20:49.803007 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c27e926-54c3-4757-af00-c2deb463d02c" containerName="mariadb-account-create-update" Jan 12 13:20:49 crc kubenswrapper[4580]: I0112 13:20:49.803450 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-wxmgx" Jan 12 13:20:49 crc kubenswrapper[4580]: I0112 13:20:49.805543 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 12 13:20:49 crc kubenswrapper[4580]: I0112 13:20:49.806040 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-6rzjz" Jan 12 13:20:49 crc kubenswrapper[4580]: I0112 13:20:49.808084 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-wxmgx"] Jan 12 13:20:49 crc kubenswrapper[4580]: I0112 13:20:49.984256 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-986kg\" (UniqueName: \"kubernetes.io/projected/d1b2da09-96cd-45b9-b3bd-720a8e5d354b-kube-api-access-986kg\") pod \"glance-db-sync-wxmgx\" (UID: \"d1b2da09-96cd-45b9-b3bd-720a8e5d354b\") " pod="openstack/glance-db-sync-wxmgx" Jan 12 13:20:49 crc kubenswrapper[4580]: I0112 13:20:49.984334 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1b2da09-96cd-45b9-b3bd-720a8e5d354b-combined-ca-bundle\") pod \"glance-db-sync-wxmgx\" (UID: \"d1b2da09-96cd-45b9-b3bd-720a8e5d354b\") " pod="openstack/glance-db-sync-wxmgx" Jan 12 13:20:49 crc kubenswrapper[4580]: I0112 13:20:49.984444 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d1b2da09-96cd-45b9-b3bd-720a8e5d354b-config-data\") pod \"glance-db-sync-wxmgx\" (UID: \"d1b2da09-96cd-45b9-b3bd-720a8e5d354b\") " pod="openstack/glance-db-sync-wxmgx" Jan 12 13:20:49 crc kubenswrapper[4580]: I0112 13:20:49.984617 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d1b2da09-96cd-45b9-b3bd-720a8e5d354b-db-sync-config-data\") pod \"glance-db-sync-wxmgx\" (UID: \"d1b2da09-96cd-45b9-b3bd-720a8e5d354b\") " pod="openstack/glance-db-sync-wxmgx" Jan 12 13:20:50 crc kubenswrapper[4580]: I0112 13:20:50.085679 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d1b2da09-96cd-45b9-b3bd-720a8e5d354b-db-sync-config-data\") pod \"glance-db-sync-wxmgx\" (UID: \"d1b2da09-96cd-45b9-b3bd-720a8e5d354b\") " pod="openstack/glance-db-sync-wxmgx" Jan 12 13:20:50 crc kubenswrapper[4580]: I0112 13:20:50.085739 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-986kg\" (UniqueName: \"kubernetes.io/projected/d1b2da09-96cd-45b9-b3bd-720a8e5d354b-kube-api-access-986kg\") pod \"glance-db-sync-wxmgx\" (UID: \"d1b2da09-96cd-45b9-b3bd-720a8e5d354b\") " pod="openstack/glance-db-sync-wxmgx" Jan 12 13:20:50 crc kubenswrapper[4580]: I0112 13:20:50.085788 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1b2da09-96cd-45b9-b3bd-720a8e5d354b-combined-ca-bundle\") pod \"glance-db-sync-wxmgx\" (UID: \"d1b2da09-96cd-45b9-b3bd-720a8e5d354b\") " pod="openstack/glance-db-sync-wxmgx" Jan 12 13:20:50 crc kubenswrapper[4580]: I0112 13:20:50.085812 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1b2da09-96cd-45b9-b3bd-720a8e5d354b-config-data\") pod 
\"glance-db-sync-wxmgx\" (UID: \"d1b2da09-96cd-45b9-b3bd-720a8e5d354b\") " pod="openstack/glance-db-sync-wxmgx" Jan 12 13:20:50 crc kubenswrapper[4580]: I0112 13:20:50.089475 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1b2da09-96cd-45b9-b3bd-720a8e5d354b-config-data\") pod \"glance-db-sync-wxmgx\" (UID: \"d1b2da09-96cd-45b9-b3bd-720a8e5d354b\") " pod="openstack/glance-db-sync-wxmgx" Jan 12 13:20:50 crc kubenswrapper[4580]: I0112 13:20:50.089529 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1b2da09-96cd-45b9-b3bd-720a8e5d354b-combined-ca-bundle\") pod \"glance-db-sync-wxmgx\" (UID: \"d1b2da09-96cd-45b9-b3bd-720a8e5d354b\") " pod="openstack/glance-db-sync-wxmgx" Jan 12 13:20:50 crc kubenswrapper[4580]: I0112 13:20:50.090216 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d1b2da09-96cd-45b9-b3bd-720a8e5d354b-db-sync-config-data\") pod \"glance-db-sync-wxmgx\" (UID: \"d1b2da09-96cd-45b9-b3bd-720a8e5d354b\") " pod="openstack/glance-db-sync-wxmgx" Jan 12 13:20:50 crc kubenswrapper[4580]: I0112 13:20:50.097857 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-986kg\" (UniqueName: \"kubernetes.io/projected/d1b2da09-96cd-45b9-b3bd-720a8e5d354b-kube-api-access-986kg\") pod \"glance-db-sync-wxmgx\" (UID: \"d1b2da09-96cd-45b9-b3bd-720a8e5d354b\") " pod="openstack/glance-db-sync-wxmgx" Jan 12 13:20:50 crc kubenswrapper[4580]: I0112 13:20:50.115975 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-wxmgx"
Jan 12 13:20:50 crc kubenswrapper[4580]: I0112 13:20:50.597318 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-wxmgx"]
Jan 12 13:20:50 crc kubenswrapper[4580]: W0112 13:20:50.602622 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1b2da09_96cd_45b9_b3bd_720a8e5d354b.slice/crio-c50f099f848c1fb559b4fe61361d72d95bfb08de1cb7618f91951652c9987197 WatchSource:0}: Error finding container c50f099f848c1fb559b4fe61361d72d95bfb08de1cb7618f91951652c9987197: Status 404 returned error can't find the container with id c50f099f848c1fb559b4fe61361d72d95bfb08de1cb7618f91951652c9987197
Jan 12 13:20:50 crc kubenswrapper[4580]: I0112 13:20:50.834747 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0"
Jan 12 13:20:51 crc kubenswrapper[4580]: I0112 13:20:51.369592 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-wxmgx" event={"ID":"d1b2da09-96cd-45b9-b3bd-720a8e5d354b","Type":"ContainerStarted","Data":"c50f099f848c1fb559b4fe61361d72d95bfb08de1cb7618f91951652c9987197"}
Jan 12 13:20:52 crc kubenswrapper[4580]: I0112 13:20:52.662736 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-hhffz"]
Jan 12 13:20:52 crc kubenswrapper[4580]: I0112 13:20:52.663708 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-hhffz"
Jan 12 13:20:52 crc kubenswrapper[4580]: I0112 13:20:52.665329 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret"
Jan 12 13:20:52 crc kubenswrapper[4580]: I0112 13:20:52.667012 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-hhffz"]
Jan 12 13:20:52 crc kubenswrapper[4580]: I0112 13:20:52.717692 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87jwb\" (UniqueName: \"kubernetes.io/projected/eeb87416-55e3-414c-a7d2-8248c85883ef-kube-api-access-87jwb\") pod \"root-account-create-update-hhffz\" (UID: \"eeb87416-55e3-414c-a7d2-8248c85883ef\") " pod="openstack/root-account-create-update-hhffz"
Jan 12 13:20:52 crc kubenswrapper[4580]: I0112 13:20:52.717755 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eeb87416-55e3-414c-a7d2-8248c85883ef-operator-scripts\") pod \"root-account-create-update-hhffz\" (UID: \"eeb87416-55e3-414c-a7d2-8248c85883ef\") " pod="openstack/root-account-create-update-hhffz"
Jan 12 13:20:52 crc kubenswrapper[4580]: I0112 13:20:52.820090 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eeb87416-55e3-414c-a7d2-8248c85883ef-operator-scripts\") pod \"root-account-create-update-hhffz\" (UID: \"eeb87416-55e3-414c-a7d2-8248c85883ef\") " pod="openstack/root-account-create-update-hhffz"
Jan 12 13:20:52 crc kubenswrapper[4580]: I0112 13:20:52.820291 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87jwb\" (UniqueName: \"kubernetes.io/projected/eeb87416-55e3-414c-a7d2-8248c85883ef-kube-api-access-87jwb\") pod \"root-account-create-update-hhffz\" (UID: \"eeb87416-55e3-414c-a7d2-8248c85883ef\") " pod="openstack/root-account-create-update-hhffz"
Jan 12 13:20:52 crc kubenswrapper[4580]: I0112 13:20:52.821206 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eeb87416-55e3-414c-a7d2-8248c85883ef-operator-scripts\") pod \"root-account-create-update-hhffz\" (UID: \"eeb87416-55e3-414c-a7d2-8248c85883ef\") " pod="openstack/root-account-create-update-hhffz"
Jan 12 13:20:52 crc kubenswrapper[4580]: I0112 13:20:52.837397 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87jwb\" (UniqueName: \"kubernetes.io/projected/eeb87416-55e3-414c-a7d2-8248c85883ef-kube-api-access-87jwb\") pod \"root-account-create-update-hhffz\" (UID: \"eeb87416-55e3-414c-a7d2-8248c85883ef\") " pod="openstack/root-account-create-update-hhffz"
Jan 12 13:20:52 crc kubenswrapper[4580]: I0112 13:20:52.920919 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/fb14d02e-b9af-4072-a2bd-2c2763d29755-etc-swift\") pod \"swift-storage-0\" (UID: \"fb14d02e-b9af-4072-a2bd-2c2763d29755\") " pod="openstack/swift-storage-0"
Jan 12 13:20:52 crc kubenswrapper[4580]: I0112 13:20:52.924389 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/fb14d02e-b9af-4072-a2bd-2c2763d29755-etc-swift\") pod \"swift-storage-0\" (UID: \"fb14d02e-b9af-4072-a2bd-2c2763d29755\") " pod="openstack/swift-storage-0"
Jan 12 13:20:52 crc kubenswrapper[4580]: I0112 13:20:52.979817 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Jan 12 13:20:52 crc kubenswrapper[4580]: I0112 13:20:52.981505 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-hhffz"
Jan 12 13:20:53 crc kubenswrapper[4580]: I0112 13:20:53.412968 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-hhffz"]
Jan 12 13:20:53 crc kubenswrapper[4580]: W0112 13:20:53.422036 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeeb87416_55e3_414c_a7d2_8248c85883ef.slice/crio-b46e1dc099deb2657c54bf27af542718dda7843cbf9c426c87bca93be3132826 WatchSource:0}: Error finding container b46e1dc099deb2657c54bf27af542718dda7843cbf9c426c87bca93be3132826: Status 404 returned error can't find the container with id b46e1dc099deb2657c54bf27af542718dda7843cbf9c426c87bca93be3132826
Jan 12 13:20:53 crc kubenswrapper[4580]: I0112 13:20:53.425064 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Jan 12 13:20:54 crc kubenswrapper[4580]: I0112 13:20:54.392087 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fb14d02e-b9af-4072-a2bd-2c2763d29755","Type":"ContainerStarted","Data":"8ea6dd595993d37e5b0e0a18a855e90e2c7733e126bf9cbc544a605747c8e618"}
Jan 12 13:20:54 crc kubenswrapper[4580]: I0112 13:20:54.393804 4580 generic.go:334] "Generic (PLEG): container finished" podID="eeb87416-55e3-414c-a7d2-8248c85883ef" containerID="8b16d6c23e87d6c3ea7dbdd47884edb1b77b68c62764a1e1fbc4f4987a6b88a1" exitCode=0
Jan 12 13:20:54 crc kubenswrapper[4580]: I0112 13:20:54.393845 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hhffz" event={"ID":"eeb87416-55e3-414c-a7d2-8248c85883ef","Type":"ContainerDied","Data":"8b16d6c23e87d6c3ea7dbdd47884edb1b77b68c62764a1e1fbc4f4987a6b88a1"}
Jan 12 13:20:54 crc kubenswrapper[4580]: I0112 13:20:54.393871 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hhffz" event={"ID":"eeb87416-55e3-414c-a7d2-8248c85883ef","Type":"ContainerStarted","Data":"b46e1dc099deb2657c54bf27af542718dda7843cbf9c426c87bca93be3132826"}
Jan 12 13:20:58 crc kubenswrapper[4580]: I0112 13:20:58.587592 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-hhffz"
Jan 12 13:20:58 crc kubenswrapper[4580]: I0112 13:20:58.687473 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87jwb\" (UniqueName: \"kubernetes.io/projected/eeb87416-55e3-414c-a7d2-8248c85883ef-kube-api-access-87jwb\") pod \"eeb87416-55e3-414c-a7d2-8248c85883ef\" (UID: \"eeb87416-55e3-414c-a7d2-8248c85883ef\") "
Jan 12 13:20:58 crc kubenswrapper[4580]: I0112 13:20:58.687714 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eeb87416-55e3-414c-a7d2-8248c85883ef-operator-scripts\") pod \"eeb87416-55e3-414c-a7d2-8248c85883ef\" (UID: \"eeb87416-55e3-414c-a7d2-8248c85883ef\") "
Jan 12 13:20:58 crc kubenswrapper[4580]: I0112 13:20:58.688273 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eeb87416-55e3-414c-a7d2-8248c85883ef-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "eeb87416-55e3-414c-a7d2-8248c85883ef" (UID: "eeb87416-55e3-414c-a7d2-8248c85883ef"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:20:58 crc kubenswrapper[4580]: I0112 13:20:58.692096 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eeb87416-55e3-414c-a7d2-8248c85883ef-kube-api-access-87jwb" (OuterVolumeSpecName: "kube-api-access-87jwb") pod "eeb87416-55e3-414c-a7d2-8248c85883ef" (UID: "eeb87416-55e3-414c-a7d2-8248c85883ef"). InnerVolumeSpecName "kube-api-access-87jwb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 13:20:58 crc kubenswrapper[4580]: I0112 13:20:58.789044 4580 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eeb87416-55e3-414c-a7d2-8248c85883ef-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 12 13:20:58 crc kubenswrapper[4580]: I0112 13:20:58.789070 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87jwb\" (UniqueName: \"kubernetes.io/projected/eeb87416-55e3-414c-a7d2-8248c85883ef-kube-api-access-87jwb\") on node \"crc\" DevicePath \"\""
Jan 12 13:20:59 crc kubenswrapper[4580]: I0112 13:20:59.423061 4580 generic.go:334] "Generic (PLEG): container finished" podID="3ee1d970-f295-46eb-91eb-70a45cb019c1" containerID="ad622e021763f9e9794c3006e074b40d78c5b65f75aa93709f23683693c29434" exitCode=0
Jan 12 13:20:59 crc kubenswrapper[4580]: I0112 13:20:59.423149 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3ee1d970-f295-46eb-91eb-70a45cb019c1","Type":"ContainerDied","Data":"ad622e021763f9e9794c3006e074b40d78c5b65f75aa93709f23683693c29434"}
Jan 12 13:20:59 crc kubenswrapper[4580]: I0112 13:20:59.424806 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-wxmgx" event={"ID":"d1b2da09-96cd-45b9-b3bd-720a8e5d354b","Type":"ContainerStarted","Data":"c4185759bec5c4dbc9ef8e7c55449106a9d766e56cfd897eb244d56801c306be"}
Jan 12 13:20:59 crc kubenswrapper[4580]: I0112 13:20:59.426941 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hhffz" event={"ID":"eeb87416-55e3-414c-a7d2-8248c85883ef","Type":"ContainerDied","Data":"b46e1dc099deb2657c54bf27af542718dda7843cbf9c426c87bca93be3132826"}
Jan 12 13:20:59 crc kubenswrapper[4580]: I0112 13:20:59.426968 4580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b46e1dc099deb2657c54bf27af542718dda7843cbf9c426c87bca93be3132826"
Jan 12 13:20:59 crc kubenswrapper[4580]: I0112 13:20:59.426986 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-hhffz"
Jan 12 13:20:59 crc kubenswrapper[4580]: I0112 13:20:59.428427 4580 generic.go:334] "Generic (PLEG): container finished" podID="20148d96-39b6-4278-9d29-91874ad352a0" containerID="a8f1963647ca5448a3a66557a4f17a1971d8dc98b5a61c6d9104b58063c1f65d" exitCode=0
Jan 12 13:20:59 crc kubenswrapper[4580]: I0112 13:20:59.428494 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"20148d96-39b6-4278-9d29-91874ad352a0","Type":"ContainerDied","Data":"a8f1963647ca5448a3a66557a4f17a1971d8dc98b5a61c6d9104b58063c1f65d"}
Jan 12 13:20:59 crc kubenswrapper[4580]: I0112 13:20:59.430600 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fb14d02e-b9af-4072-a2bd-2c2763d29755","Type":"ContainerStarted","Data":"9f857d26650a290605fdc2fcdc73bf4155432d0aa8a8720bf60afde6dec081b8"}
Jan 12 13:20:59 crc kubenswrapper[4580]: I0112 13:20:59.430624 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fb14d02e-b9af-4072-a2bd-2c2763d29755","Type":"ContainerStarted","Data":"fd3549c31055e3db1e561bd4b0847ff7ff1ed51d1459b6b0cd6377f925d9257c"}
Jan 12 13:20:59 crc kubenswrapper[4580]: I0112 13:20:59.430634 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fb14d02e-b9af-4072-a2bd-2c2763d29755","Type":"ContainerStarted","Data":"430df8b8a033ef60ffaa31acd15fecd63bc6a2b01e8984722c8d139de03ef898"}
Jan 12 13:20:59 crc kubenswrapper[4580]: I0112 13:20:59.430643 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fb14d02e-b9af-4072-a2bd-2c2763d29755","Type":"ContainerStarted","Data":"240d7ae09b16cacb24822411379293ffac9caf949183f158fcf0b3ac65595590"}
Jan 12 13:20:59 crc kubenswrapper[4580]: I0112 13:20:59.480402 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-wxmgx" podStartSLOduration=2.329146272 podStartE2EDuration="10.48038892s" podCreationTimestamp="2026-01-12 13:20:49 +0000 UTC" firstStartedPulling="2026-01-12 13:20:50.604324304 +0000 UTC m=+849.648542994" lastFinishedPulling="2026-01-12 13:20:58.755566952 +0000 UTC m=+857.799785642" observedRunningTime="2026-01-12 13:20:59.477466353 +0000 UTC m=+858.521685043" watchObservedRunningTime="2026-01-12 13:20:59.48038892 +0000 UTC m=+858.524607610"
Jan 12 13:21:00 crc kubenswrapper[4580]: I0112 13:21:00.438486 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"20148d96-39b6-4278-9d29-91874ad352a0","Type":"ContainerStarted","Data":"e787e404277a17310b6d3fc406e5bd60d452400706e67595752efc3b8fbb76b1"}
Jan 12 13:21:00 crc kubenswrapper[4580]: I0112 13:21:00.438979 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Jan 12 13:21:00 crc kubenswrapper[4580]: I0112 13:21:00.440336 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3ee1d970-f295-46eb-91eb-70a45cb019c1","Type":"ContainerStarted","Data":"001d01df38a9f8e9e63e125e1f0994b1221a148609b9db25d723f7509c2f0f63"}
Jan 12 13:21:00 crc kubenswrapper[4580]: I0112 13:21:00.440542 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Jan 12 13:21:00 crc kubenswrapper[4580]: I0112 13:21:00.462255 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=44.933538387 podStartE2EDuration="51.462236869s" podCreationTimestamp="2026-01-12 13:20:09 +0000 UTC" firstStartedPulling="2026-01-12 13:20:19.417351767 +0000 UTC m=+818.461570457" lastFinishedPulling="2026-01-12 13:20:25.946050249 +0000 UTC m=+824.990268939" observedRunningTime="2026-01-12 13:21:00.460578771 +0000 UTC m=+859.504797471" watchObservedRunningTime="2026-01-12 13:21:00.462236869 +0000 UTC m=+859.506455559"
Jan 12 13:21:00 crc kubenswrapper[4580]: I0112 13:21:00.475823 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=44.52597235 podStartE2EDuration="50.475813736s" podCreationTimestamp="2026-01-12 13:20:10 +0000 UTC" firstStartedPulling="2026-01-12 13:20:18.948657603 +0000 UTC m=+817.992876293" lastFinishedPulling="2026-01-12 13:20:24.898498989 +0000 UTC m=+823.942717679" observedRunningTime="2026-01-12 13:21:00.475016757 +0000 UTC m=+859.519235447" watchObservedRunningTime="2026-01-12 13:21:00.475813736 +0000 UTC m=+859.520032425"
Jan 12 13:21:00 crc kubenswrapper[4580]: I0112 13:21:00.518400 4580 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-tbpzb" podUID="29dabf99-ffd5-4d31-b9e5-b10e192f239d" containerName="ovn-controller" probeResult="failure" output=<
Jan 12 13:21:00 crc kubenswrapper[4580]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Jan 12 13:21:00 crc kubenswrapper[4580]: >
Jan 12 13:21:00 crc kubenswrapper[4580]: I0112 13:21:00.544276 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-66wld"
Jan 12 13:21:00 crc kubenswrapper[4580]: I0112 13:21:00.545226 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-66wld"
Jan 12 13:21:00 crc kubenswrapper[4580]: I0112 13:21:00.737280 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-tbpzb-config-cmw9n"]
Jan 12 13:21:00 crc kubenswrapper[4580]: E0112 13:21:00.737692 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eeb87416-55e3-414c-a7d2-8248c85883ef" containerName="mariadb-account-create-update"
Jan 12 13:21:00 crc kubenswrapper[4580]: I0112 13:21:00.737705 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="eeb87416-55e3-414c-a7d2-8248c85883ef" containerName="mariadb-account-create-update"
Jan 12 13:21:00 crc kubenswrapper[4580]: I0112 13:21:00.737875 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="eeb87416-55e3-414c-a7d2-8248c85883ef" containerName="mariadb-account-create-update"
Jan 12 13:21:00 crc kubenswrapper[4580]: I0112 13:21:00.738287 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-tbpzb-config-cmw9n"
Jan 12 13:21:00 crc kubenswrapper[4580]: I0112 13:21:00.739935 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Jan 12 13:21:00 crc kubenswrapper[4580]: I0112 13:21:00.750504 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-tbpzb-config-cmw9n"]
Jan 12 13:21:00 crc kubenswrapper[4580]: I0112 13:21:00.916074 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/311b249b-bf27-48ee-b167-bee55b485326-scripts\") pod \"ovn-controller-tbpzb-config-cmw9n\" (UID: \"311b249b-bf27-48ee-b167-bee55b485326\") " pod="openstack/ovn-controller-tbpzb-config-cmw9n"
Jan 12 13:21:00 crc kubenswrapper[4580]: I0112 13:21:00.916130 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/311b249b-bf27-48ee-b167-bee55b485326-var-log-ovn\") pod \"ovn-controller-tbpzb-config-cmw9n\" (UID: \"311b249b-bf27-48ee-b167-bee55b485326\") " pod="openstack/ovn-controller-tbpzb-config-cmw9n"
Jan 12 13:21:00 crc kubenswrapper[4580]: I0112 13:21:00.916165 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/311b249b-bf27-48ee-b167-bee55b485326-var-run-ovn\") pod \"ovn-controller-tbpzb-config-cmw9n\" (UID: \"311b249b-bf27-48ee-b167-bee55b485326\") " pod="openstack/ovn-controller-tbpzb-config-cmw9n"
Jan 12 13:21:00 crc kubenswrapper[4580]: I0112 13:21:00.916198 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbgz8\" (UniqueName: \"kubernetes.io/projected/311b249b-bf27-48ee-b167-bee55b485326-kube-api-access-zbgz8\") pod \"ovn-controller-tbpzb-config-cmw9n\" (UID: \"311b249b-bf27-48ee-b167-bee55b485326\") " pod="openstack/ovn-controller-tbpzb-config-cmw9n"
Jan 12 13:21:00 crc kubenswrapper[4580]: I0112 13:21:00.916221 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/311b249b-bf27-48ee-b167-bee55b485326-var-run\") pod \"ovn-controller-tbpzb-config-cmw9n\" (UID: \"311b249b-bf27-48ee-b167-bee55b485326\") " pod="openstack/ovn-controller-tbpzb-config-cmw9n"
Jan 12 13:21:00 crc kubenswrapper[4580]: I0112 13:21:00.916259 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/311b249b-bf27-48ee-b167-bee55b485326-additional-scripts\") pod \"ovn-controller-tbpzb-config-cmw9n\" (UID: \"311b249b-bf27-48ee-b167-bee55b485326\") " pod="openstack/ovn-controller-tbpzb-config-cmw9n"
Jan 12 13:21:01 crc kubenswrapper[4580]: I0112 13:21:01.017855 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/311b249b-bf27-48ee-b167-bee55b485326-scripts\") pod \"ovn-controller-tbpzb-config-cmw9n\" (UID: \"311b249b-bf27-48ee-b167-bee55b485326\") " pod="openstack/ovn-controller-tbpzb-config-cmw9n"
Jan 12 13:21:01 crc kubenswrapper[4580]: I0112 13:21:01.017892 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/311b249b-bf27-48ee-b167-bee55b485326-var-log-ovn\") pod \"ovn-controller-tbpzb-config-cmw9n\" (UID: \"311b249b-bf27-48ee-b167-bee55b485326\") " pod="openstack/ovn-controller-tbpzb-config-cmw9n"
Jan 12 13:21:01 crc kubenswrapper[4580]: I0112 13:21:01.017921 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/311b249b-bf27-48ee-b167-bee55b485326-var-run-ovn\") pod \"ovn-controller-tbpzb-config-cmw9n\" (UID: \"311b249b-bf27-48ee-b167-bee55b485326\") " pod="openstack/ovn-controller-tbpzb-config-cmw9n"
Jan 12 13:21:01 crc kubenswrapper[4580]: I0112 13:21:01.017958 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbgz8\" (UniqueName: \"kubernetes.io/projected/311b249b-bf27-48ee-b167-bee55b485326-kube-api-access-zbgz8\") pod \"ovn-controller-tbpzb-config-cmw9n\" (UID: \"311b249b-bf27-48ee-b167-bee55b485326\") " pod="openstack/ovn-controller-tbpzb-config-cmw9n"
Jan 12 13:21:01 crc kubenswrapper[4580]: I0112 13:21:01.017976 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/311b249b-bf27-48ee-b167-bee55b485326-var-run\") pod \"ovn-controller-tbpzb-config-cmw9n\" (UID: \"311b249b-bf27-48ee-b167-bee55b485326\") " pod="openstack/ovn-controller-tbpzb-config-cmw9n"
Jan 12 13:21:01 crc kubenswrapper[4580]: I0112 13:21:01.018017 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/311b249b-bf27-48ee-b167-bee55b485326-additional-scripts\") pod \"ovn-controller-tbpzb-config-cmw9n\" (UID: \"311b249b-bf27-48ee-b167-bee55b485326\") " pod="openstack/ovn-controller-tbpzb-config-cmw9n"
Jan 12 13:21:01 crc kubenswrapper[4580]: I0112 13:21:01.018345 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/311b249b-bf27-48ee-b167-bee55b485326-var-log-ovn\") pod \"ovn-controller-tbpzb-config-cmw9n\" (UID: \"311b249b-bf27-48ee-b167-bee55b485326\") " pod="openstack/ovn-controller-tbpzb-config-cmw9n"
Jan 12 13:21:01 crc kubenswrapper[4580]: I0112 13:21:01.018383 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/311b249b-bf27-48ee-b167-bee55b485326-var-run\") pod \"ovn-controller-tbpzb-config-cmw9n\" (UID: \"311b249b-bf27-48ee-b167-bee55b485326\") " pod="openstack/ovn-controller-tbpzb-config-cmw9n"
Jan 12 13:21:01 crc kubenswrapper[4580]: I0112 13:21:01.018446 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/311b249b-bf27-48ee-b167-bee55b485326-var-run-ovn\") pod \"ovn-controller-tbpzb-config-cmw9n\" (UID: \"311b249b-bf27-48ee-b167-bee55b485326\") " pod="openstack/ovn-controller-tbpzb-config-cmw9n"
Jan 12 13:21:01 crc kubenswrapper[4580]: I0112 13:21:01.018614 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/311b249b-bf27-48ee-b167-bee55b485326-additional-scripts\") pod \"ovn-controller-tbpzb-config-cmw9n\" (UID: \"311b249b-bf27-48ee-b167-bee55b485326\") " pod="openstack/ovn-controller-tbpzb-config-cmw9n"
Jan 12 13:21:01 crc kubenswrapper[4580]: I0112 13:21:01.019632 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/311b249b-bf27-48ee-b167-bee55b485326-scripts\") pod \"ovn-controller-tbpzb-config-cmw9n\" (UID: \"311b249b-bf27-48ee-b167-bee55b485326\") " pod="openstack/ovn-controller-tbpzb-config-cmw9n"
Jan 12 13:21:01 crc kubenswrapper[4580]: I0112 13:21:01.033200 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbgz8\" (UniqueName: \"kubernetes.io/projected/311b249b-bf27-48ee-b167-bee55b485326-kube-api-access-zbgz8\") pod \"ovn-controller-tbpzb-config-cmw9n\" (UID: \"311b249b-bf27-48ee-b167-bee55b485326\") " pod="openstack/ovn-controller-tbpzb-config-cmw9n"
Jan 12 13:21:01 crc kubenswrapper[4580]: I0112 13:21:01.051647 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-tbpzb-config-cmw9n"
Jan 12 13:21:01 crc kubenswrapper[4580]: I0112 13:21:01.448231 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fb14d02e-b9af-4072-a2bd-2c2763d29755","Type":"ContainerStarted","Data":"04bdfc4ad5c186bab50dd1d143286426e09f027a0859abd996dc1d02a2a8e6bb"}
Jan 12 13:21:01 crc kubenswrapper[4580]: I0112 13:21:01.448611 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fb14d02e-b9af-4072-a2bd-2c2763d29755","Type":"ContainerStarted","Data":"d879f96dba7191da0c033d945b7355df7be218fc14c38bfafe7220b5355a75bd"}
Jan 12 13:21:01 crc kubenswrapper[4580]: I0112 13:21:01.448623 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fb14d02e-b9af-4072-a2bd-2c2763d29755","Type":"ContainerStarted","Data":"617a729f77a93bbc73379809617c8651b1dc43f9b4fa5a61f25f43d4fdcc0ac5"}
Jan 12 13:21:01 crc kubenswrapper[4580]: I0112 13:21:01.448631 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fb14d02e-b9af-4072-a2bd-2c2763d29755","Type":"ContainerStarted","Data":"d32978e218b74113e320ce5540240e041c154b8f7097265ee99b4870db409dc3"}
Jan 12 13:21:01 crc kubenswrapper[4580]: W0112 13:21:01.466711 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod311b249b_bf27_48ee_b167_bee55b485326.slice/crio-02e0e5dc1e28cf0c3a2b6ad0091d7cfe73f63cc85f051adf2167fbe7f361e43f WatchSource:0}: Error finding container 02e0e5dc1e28cf0c3a2b6ad0091d7cfe73f63cc85f051adf2167fbe7f361e43f: Status 404 returned error can't find the container with id 02e0e5dc1e28cf0c3a2b6ad0091d7cfe73f63cc85f051adf2167fbe7f361e43f
Jan 12 13:21:01 crc kubenswrapper[4580]: I0112 13:21:01.467582 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-tbpzb-config-cmw9n"]
Jan 12 13:21:02 crc kubenswrapper[4580]: I0112 13:21:02.460167 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fb14d02e-b9af-4072-a2bd-2c2763d29755","Type":"ContainerStarted","Data":"8c8dade595bdd2f5c1cda29d8a2b1d16c1a563f94e268e4b5c4f0f2f3b67d87c"}
Jan 12 13:21:02 crc kubenswrapper[4580]: I0112 13:21:02.462885 4580 generic.go:334] "Generic (PLEG): container finished" podID="311b249b-bf27-48ee-b167-bee55b485326" containerID="4a528aa57c41d5d1c370a68e7e5a32fffe0542a02a3c390991ea6f4179e34318" exitCode=0
Jan 12 13:21:02 crc kubenswrapper[4580]: I0112 13:21:02.462911 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-tbpzb-config-cmw9n" event={"ID":"311b249b-bf27-48ee-b167-bee55b485326","Type":"ContainerDied","Data":"4a528aa57c41d5d1c370a68e7e5a32fffe0542a02a3c390991ea6f4179e34318"}
Jan 12 13:21:02 crc kubenswrapper[4580]: I0112 13:21:02.462927 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-tbpzb-config-cmw9n" event={"ID":"311b249b-bf27-48ee-b167-bee55b485326","Type":"ContainerStarted","Data":"02e0e5dc1e28cf0c3a2b6ad0091d7cfe73f63cc85f051adf2167fbe7f361e43f"}
Jan 12 13:21:03 crc kubenswrapper[4580]: I0112 13:21:03.476957 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fb14d02e-b9af-4072-a2bd-2c2763d29755","Type":"ContainerStarted","Data":"e0a2abf66174f5cdb744024d613975e81ef550e650d933609cec6fa120ee47c2"}
Jan 12 13:21:03 crc kubenswrapper[4580]: I0112 13:21:03.477485 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fb14d02e-b9af-4072-a2bd-2c2763d29755","Type":"ContainerStarted","Data":"021bbe4a7885009ee6a8809786f710d6c4e717ac52ed21c733c802ce1a1c2ddc"}
Jan 12 13:21:03 crc kubenswrapper[4580]: I0112 13:21:03.477507 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fb14d02e-b9af-4072-a2bd-2c2763d29755","Type":"ContainerStarted","Data":"31e2c20dfd619f82feda695107e7e653bc85d45e06a99f54fb459d2ec6dbfe63"}
Jan 12 13:21:03 crc kubenswrapper[4580]: I0112 13:21:03.477522 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fb14d02e-b9af-4072-a2bd-2c2763d29755","Type":"ContainerStarted","Data":"e145be4ecbd67c711016c0f1454b918f4e4f7fe84af6e40cc6478f74b0420d10"}
Jan 12 13:21:03 crc kubenswrapper[4580]: I0112 13:21:03.477531 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fb14d02e-b9af-4072-a2bd-2c2763d29755","Type":"ContainerStarted","Data":"652f1a1f473b65b7f058448c79f875d8754dc4c733e49267130474cd549afada"}
Jan 12 13:21:03 crc kubenswrapper[4580]: I0112 13:21:03.477543 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fb14d02e-b9af-4072-a2bd-2c2763d29755","Type":"ContainerStarted","Data":"9a8bc87f92a763adc75ede3859cb9efc38ba99b3ccc416fad6a31d8c4ed2cfb7"}
Jan 12 13:21:03 crc kubenswrapper[4580]: I0112 13:21:03.478537 4580 generic.go:334] "Generic (PLEG): container finished" podID="d1b2da09-96cd-45b9-b3bd-720a8e5d354b" containerID="c4185759bec5c4dbc9ef8e7c55449106a9d766e56cfd897eb244d56801c306be" exitCode=0
Jan 12 13:21:03 crc kubenswrapper[4580]: I0112 13:21:03.478618 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-wxmgx" event={"ID":"d1b2da09-96cd-45b9-b3bd-720a8e5d354b","Type":"ContainerDied","Data":"c4185759bec5c4dbc9ef8e7c55449106a9d766e56cfd897eb244d56801c306be"}
Jan 12 13:21:03 crc kubenswrapper[4580]: I0112 13:21:03.523742 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=18.671326077 podStartE2EDuration="27.523720293s" podCreationTimestamp="2026-01-12 13:20:36 +0000 UTC" firstStartedPulling="2026-01-12 13:20:53.426930358 +0000 UTC m=+852.471149048" lastFinishedPulling="2026-01-12 13:21:02.279324574 +0000 UTC m=+861.323543264" observedRunningTime="2026-01-12 13:21:03.511609293 +0000 UTC m=+862.555827984" watchObservedRunningTime="2026-01-12 13:21:03.523720293 +0000 UTC m=+862.567938982"
Jan 12 13:21:03 crc kubenswrapper[4580]: I0112 13:21:03.719000 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-tbpzb-config-cmw9n"
Jan 12 13:21:03 crc kubenswrapper[4580]: I0112 13:21:03.763819 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8db84466c-hxh46"]
Jan 12 13:21:03 crc kubenswrapper[4580]: E0112 13:21:03.764199 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="311b249b-bf27-48ee-b167-bee55b485326" containerName="ovn-config"
Jan 12 13:21:03 crc kubenswrapper[4580]: I0112 13:21:03.764217 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="311b249b-bf27-48ee-b167-bee55b485326" containerName="ovn-config"
Jan 12 13:21:03 crc kubenswrapper[4580]: I0112 13:21:03.764367 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="311b249b-bf27-48ee-b167-bee55b485326" containerName="ovn-config"
Jan 12 13:21:03 crc kubenswrapper[4580]: I0112 13:21:03.765118 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8db84466c-hxh46"
Jan 12 13:21:03 crc kubenswrapper[4580]: I0112 13:21:03.766900 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0"
Jan 12 13:21:03 crc kubenswrapper[4580]: I0112 13:21:03.781676 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8db84466c-hxh46"]
Jan 12 13:21:03 crc kubenswrapper[4580]: I0112 13:21:03.870590 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/311b249b-bf27-48ee-b167-bee55b485326-var-run-ovn\") pod \"311b249b-bf27-48ee-b167-bee55b485326\" (UID: \"311b249b-bf27-48ee-b167-bee55b485326\") "
Jan 12 13:21:03 crc kubenswrapper[4580]: I0112 13:21:03.870696 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/311b249b-bf27-48ee-b167-bee55b485326-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "311b249b-bf27-48ee-b167-bee55b485326" (UID: "311b249b-bf27-48ee-b167-bee55b485326"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 12 13:21:03 crc kubenswrapper[4580]: I0112 13:21:03.870780 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbgz8\" (UniqueName: \"kubernetes.io/projected/311b249b-bf27-48ee-b167-bee55b485326-kube-api-access-zbgz8\") pod \"311b249b-bf27-48ee-b167-bee55b485326\" (UID: \"311b249b-bf27-48ee-b167-bee55b485326\") "
Jan 12 13:21:03 crc kubenswrapper[4580]: I0112 13:21:03.870835 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/311b249b-bf27-48ee-b167-bee55b485326-scripts\") pod \"311b249b-bf27-48ee-b167-bee55b485326\" (UID: \"311b249b-bf27-48ee-b167-bee55b485326\") "
Jan 12 13:21:03 crc kubenswrapper[4580]: I0112 13:21:03.871010 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/311b249b-bf27-48ee-b167-bee55b485326-var-run\") pod \"311b249b-bf27-48ee-b167-bee55b485326\" (UID: \"311b249b-bf27-48ee-b167-bee55b485326\") "
Jan 12 13:21:03 crc kubenswrapper[4580]: I0112 13:21:03.871072 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/311b249b-bf27-48ee-b167-bee55b485326-additional-scripts\") pod \"311b249b-bf27-48ee-b167-bee55b485326\" (UID: \"311b249b-bf27-48ee-b167-bee55b485326\") "
Jan 12 13:21:03 crc kubenswrapper[4580]: I0112 13:21:03.871150 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/311b249b-bf27-48ee-b167-bee55b485326-var-log-ovn\") pod \"311b249b-bf27-48ee-b167-bee55b485326\" (UID: \"311b249b-bf27-48ee-b167-bee55b485326\") "
Jan 12 13:21:03 crc kubenswrapper[4580]: I0112 13:21:03.871233 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/311b249b-bf27-48ee-b167-bee55b485326-var-run" (OuterVolumeSpecName: "var-run") pod "311b249b-bf27-48ee-b167-bee55b485326" (UID: "311b249b-bf27-48ee-b167-bee55b485326"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 12 13:21:03 crc kubenswrapper[4580]: I0112 13:21:03.871407 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/311b249b-bf27-48ee-b167-bee55b485326-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "311b249b-bf27-48ee-b167-bee55b485326" (UID: "311b249b-bf27-48ee-b167-bee55b485326"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 12 13:21:03 crc kubenswrapper[4580]: I0112 13:21:03.871544 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/964b43a1-1d4b-4068-8ebd-ccc80d774805-ovsdbserver-nb\") pod \"dnsmasq-dns-8db84466c-hxh46\" (UID: \"964b43a1-1d4b-4068-8ebd-ccc80d774805\") " pod="openstack/dnsmasq-dns-8db84466c-hxh46"
Jan 12 13:21:03 crc kubenswrapper[4580]: I0112 13:21:03.871577 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/964b43a1-1d4b-4068-8ebd-ccc80d774805-ovsdbserver-sb\") pod \"dnsmasq-dns-8db84466c-hxh46\" (UID: \"964b43a1-1d4b-4068-8ebd-ccc80d774805\") " pod="openstack/dnsmasq-dns-8db84466c-hxh46"
Jan 12 13:21:03 crc kubenswrapper[4580]: I0112 13:21:03.871603 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlmjh\" (UniqueName: \"kubernetes.io/projected/964b43a1-1d4b-4068-8ebd-ccc80d774805-kube-api-access-xlmjh\") pod \"dnsmasq-dns-8db84466c-hxh46\" (UID: \"964b43a1-1d4b-4068-8ebd-ccc80d774805\") " pod="openstack/dnsmasq-dns-8db84466c-hxh46"
Jan 12 13:21:03 crc kubenswrapper[4580]: I0112 13:21:03.871630 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/964b43a1-1d4b-4068-8ebd-ccc80d774805-dns-svc\") pod \"dnsmasq-dns-8db84466c-hxh46\" (UID: \"964b43a1-1d4b-4068-8ebd-ccc80d774805\") " pod="openstack/dnsmasq-dns-8db84466c-hxh46"
Jan 12 13:21:03 crc kubenswrapper[4580]: I0112 13:21:03.871667 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/964b43a1-1d4b-4068-8ebd-ccc80d774805-dns-swift-storage-0\") pod \"dnsmasq-dns-8db84466c-hxh46\" (UID: \"964b43a1-1d4b-4068-8ebd-ccc80d774805\") " pod="openstack/dnsmasq-dns-8db84466c-hxh46"
Jan 12 13:21:03 crc kubenswrapper[4580]: I0112 13:21:03.871690 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/964b43a1-1d4b-4068-8ebd-ccc80d774805-config\") pod \"dnsmasq-dns-8db84466c-hxh46\" (UID: \"964b43a1-1d4b-4068-8ebd-ccc80d774805\") " pod="openstack/dnsmasq-dns-8db84466c-hxh46"
Jan 12 13:21:03 crc kubenswrapper[4580]: I0112 13:21:03.871679 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/311b249b-bf27-48ee-b167-bee55b485326-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "311b249b-bf27-48ee-b167-bee55b485326" (UID: "311b249b-bf27-48ee-b167-bee55b485326"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:21:03 crc kubenswrapper[4580]: I0112 13:21:03.871739 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/311b249b-bf27-48ee-b167-bee55b485326-scripts" (OuterVolumeSpecName: "scripts") pod "311b249b-bf27-48ee-b167-bee55b485326" (UID: "311b249b-bf27-48ee-b167-bee55b485326"). InnerVolumeSpecName "scripts".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:21:03 crc kubenswrapper[4580]: I0112 13:21:03.871771 4580 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/311b249b-bf27-48ee-b167-bee55b485326-var-run\") on node \"crc\" DevicePath \"\"" Jan 12 13:21:03 crc kubenswrapper[4580]: I0112 13:21:03.871804 4580 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/311b249b-bf27-48ee-b167-bee55b485326-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 12 13:21:03 crc kubenswrapper[4580]: I0112 13:21:03.871816 4580 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/311b249b-bf27-48ee-b167-bee55b485326-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 12 13:21:03 crc kubenswrapper[4580]: I0112 13:21:03.875562 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/311b249b-bf27-48ee-b167-bee55b485326-kube-api-access-zbgz8" (OuterVolumeSpecName: "kube-api-access-zbgz8") pod "311b249b-bf27-48ee-b167-bee55b485326" (UID: "311b249b-bf27-48ee-b167-bee55b485326"). InnerVolumeSpecName "kube-api-access-zbgz8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:21:03 crc kubenswrapper[4580]: I0112 13:21:03.973601 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlmjh\" (UniqueName: \"kubernetes.io/projected/964b43a1-1d4b-4068-8ebd-ccc80d774805-kube-api-access-xlmjh\") pod \"dnsmasq-dns-8db84466c-hxh46\" (UID: \"964b43a1-1d4b-4068-8ebd-ccc80d774805\") " pod="openstack/dnsmasq-dns-8db84466c-hxh46" Jan 12 13:21:03 crc kubenswrapper[4580]: I0112 13:21:03.973705 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/964b43a1-1d4b-4068-8ebd-ccc80d774805-dns-svc\") pod \"dnsmasq-dns-8db84466c-hxh46\" (UID: \"964b43a1-1d4b-4068-8ebd-ccc80d774805\") " pod="openstack/dnsmasq-dns-8db84466c-hxh46" Jan 12 13:21:03 crc kubenswrapper[4580]: I0112 13:21:03.973853 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/964b43a1-1d4b-4068-8ebd-ccc80d774805-dns-swift-storage-0\") pod \"dnsmasq-dns-8db84466c-hxh46\" (UID: \"964b43a1-1d4b-4068-8ebd-ccc80d774805\") " pod="openstack/dnsmasq-dns-8db84466c-hxh46" Jan 12 13:21:03 crc kubenswrapper[4580]: I0112 13:21:03.973893 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/964b43a1-1d4b-4068-8ebd-ccc80d774805-config\") pod \"dnsmasq-dns-8db84466c-hxh46\" (UID: \"964b43a1-1d4b-4068-8ebd-ccc80d774805\") " pod="openstack/dnsmasq-dns-8db84466c-hxh46" Jan 12 13:21:03 crc kubenswrapper[4580]: I0112 13:21:03.974134 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/964b43a1-1d4b-4068-8ebd-ccc80d774805-ovsdbserver-nb\") pod \"dnsmasq-dns-8db84466c-hxh46\" (UID: \"964b43a1-1d4b-4068-8ebd-ccc80d774805\") " pod="openstack/dnsmasq-dns-8db84466c-hxh46" Jan 12 
13:21:03 crc kubenswrapper[4580]: I0112 13:21:03.974178 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/964b43a1-1d4b-4068-8ebd-ccc80d774805-ovsdbserver-sb\") pod \"dnsmasq-dns-8db84466c-hxh46\" (UID: \"964b43a1-1d4b-4068-8ebd-ccc80d774805\") " pod="openstack/dnsmasq-dns-8db84466c-hxh46" Jan 12 13:21:03 crc kubenswrapper[4580]: I0112 13:21:03.974270 4580 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/311b249b-bf27-48ee-b167-bee55b485326-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 12 13:21:03 crc kubenswrapper[4580]: I0112 13:21:03.974286 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbgz8\" (UniqueName: \"kubernetes.io/projected/311b249b-bf27-48ee-b167-bee55b485326-kube-api-access-zbgz8\") on node \"crc\" DevicePath \"\"" Jan 12 13:21:03 crc kubenswrapper[4580]: I0112 13:21:03.974301 4580 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/311b249b-bf27-48ee-b167-bee55b485326-scripts\") on node \"crc\" DevicePath \"\"" Jan 12 13:21:03 crc kubenswrapper[4580]: I0112 13:21:03.975114 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/964b43a1-1d4b-4068-8ebd-ccc80d774805-dns-swift-storage-0\") pod \"dnsmasq-dns-8db84466c-hxh46\" (UID: \"964b43a1-1d4b-4068-8ebd-ccc80d774805\") " pod="openstack/dnsmasq-dns-8db84466c-hxh46" Jan 12 13:21:03 crc kubenswrapper[4580]: I0112 13:21:03.975209 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/964b43a1-1d4b-4068-8ebd-ccc80d774805-dns-svc\") pod \"dnsmasq-dns-8db84466c-hxh46\" (UID: \"964b43a1-1d4b-4068-8ebd-ccc80d774805\") " pod="openstack/dnsmasq-dns-8db84466c-hxh46" Jan 12 13:21:03 crc kubenswrapper[4580]: I0112 
13:21:03.975231 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/964b43a1-1d4b-4068-8ebd-ccc80d774805-config\") pod \"dnsmasq-dns-8db84466c-hxh46\" (UID: \"964b43a1-1d4b-4068-8ebd-ccc80d774805\") " pod="openstack/dnsmasq-dns-8db84466c-hxh46" Jan 12 13:21:03 crc kubenswrapper[4580]: I0112 13:21:03.975238 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/964b43a1-1d4b-4068-8ebd-ccc80d774805-ovsdbserver-sb\") pod \"dnsmasq-dns-8db84466c-hxh46\" (UID: \"964b43a1-1d4b-4068-8ebd-ccc80d774805\") " pod="openstack/dnsmasq-dns-8db84466c-hxh46" Jan 12 13:21:03 crc kubenswrapper[4580]: I0112 13:21:03.975241 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/964b43a1-1d4b-4068-8ebd-ccc80d774805-ovsdbserver-nb\") pod \"dnsmasq-dns-8db84466c-hxh46\" (UID: \"964b43a1-1d4b-4068-8ebd-ccc80d774805\") " pod="openstack/dnsmasq-dns-8db84466c-hxh46" Jan 12 13:21:03 crc kubenswrapper[4580]: I0112 13:21:03.989579 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlmjh\" (UniqueName: \"kubernetes.io/projected/964b43a1-1d4b-4068-8ebd-ccc80d774805-kube-api-access-xlmjh\") pod \"dnsmasq-dns-8db84466c-hxh46\" (UID: \"964b43a1-1d4b-4068-8ebd-ccc80d774805\") " pod="openstack/dnsmasq-dns-8db84466c-hxh46" Jan 12 13:21:04 crc kubenswrapper[4580]: I0112 13:21:04.079247 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8db84466c-hxh46" Jan 12 13:21:04 crc kubenswrapper[4580]: I0112 13:21:04.452330 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8db84466c-hxh46"] Jan 12 13:21:04 crc kubenswrapper[4580]: I0112 13:21:04.485259 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8db84466c-hxh46" event={"ID":"964b43a1-1d4b-4068-8ebd-ccc80d774805","Type":"ContainerStarted","Data":"704803f523f7b382d134fe5ab365871ab1f226a8ba00bbcf52385808c0743250"} Jan 12 13:21:04 crc kubenswrapper[4580]: I0112 13:21:04.487253 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-tbpzb-config-cmw9n" event={"ID":"311b249b-bf27-48ee-b167-bee55b485326","Type":"ContainerDied","Data":"02e0e5dc1e28cf0c3a2b6ad0091d7cfe73f63cc85f051adf2167fbe7f361e43f"} Jan 12 13:21:04 crc kubenswrapper[4580]: I0112 13:21:04.487296 4580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02e0e5dc1e28cf0c3a2b6ad0091d7cfe73f63cc85f051adf2167fbe7f361e43f" Jan 12 13:21:04 crc kubenswrapper[4580]: I0112 13:21:04.487333 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-tbpzb-config-cmw9n" Jan 12 13:21:04 crc kubenswrapper[4580]: I0112 13:21:04.734164 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-wxmgx" Jan 12 13:21:04 crc kubenswrapper[4580]: I0112 13:21:04.811622 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-tbpzb-config-cmw9n"] Jan 12 13:21:04 crc kubenswrapper[4580]: I0112 13:21:04.816625 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-tbpzb-config-cmw9n"] Jan 12 13:21:04 crc kubenswrapper[4580]: I0112 13:21:04.888909 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-986kg\" (UniqueName: \"kubernetes.io/projected/d1b2da09-96cd-45b9-b3bd-720a8e5d354b-kube-api-access-986kg\") pod \"d1b2da09-96cd-45b9-b3bd-720a8e5d354b\" (UID: \"d1b2da09-96cd-45b9-b3bd-720a8e5d354b\") " Jan 12 13:21:04 crc kubenswrapper[4580]: I0112 13:21:04.888953 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1b2da09-96cd-45b9-b3bd-720a8e5d354b-combined-ca-bundle\") pod \"d1b2da09-96cd-45b9-b3bd-720a8e5d354b\" (UID: \"d1b2da09-96cd-45b9-b3bd-720a8e5d354b\") " Jan 12 13:21:04 crc kubenswrapper[4580]: I0112 13:21:04.888994 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d1b2da09-96cd-45b9-b3bd-720a8e5d354b-db-sync-config-data\") pod \"d1b2da09-96cd-45b9-b3bd-720a8e5d354b\" (UID: \"d1b2da09-96cd-45b9-b3bd-720a8e5d354b\") " Jan 12 13:21:04 crc kubenswrapper[4580]: I0112 13:21:04.889035 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1b2da09-96cd-45b9-b3bd-720a8e5d354b-config-data\") pod \"d1b2da09-96cd-45b9-b3bd-720a8e5d354b\" (UID: \"d1b2da09-96cd-45b9-b3bd-720a8e5d354b\") " Jan 12 13:21:04 crc kubenswrapper[4580]: I0112 13:21:04.898028 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/d1b2da09-96cd-45b9-b3bd-720a8e5d354b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d1b2da09-96cd-45b9-b3bd-720a8e5d354b" (UID: "d1b2da09-96cd-45b9-b3bd-720a8e5d354b"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:21:04 crc kubenswrapper[4580]: I0112 13:21:04.909060 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1b2da09-96cd-45b9-b3bd-720a8e5d354b-kube-api-access-986kg" (OuterVolumeSpecName: "kube-api-access-986kg") pod "d1b2da09-96cd-45b9-b3bd-720a8e5d354b" (UID: "d1b2da09-96cd-45b9-b3bd-720a8e5d354b"). InnerVolumeSpecName "kube-api-access-986kg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:21:04 crc kubenswrapper[4580]: I0112 13:21:04.918819 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1b2da09-96cd-45b9-b3bd-720a8e5d354b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d1b2da09-96cd-45b9-b3bd-720a8e5d354b" (UID: "d1b2da09-96cd-45b9-b3bd-720a8e5d354b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:21:04 crc kubenswrapper[4580]: I0112 13:21:04.944245 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1b2da09-96cd-45b9-b3bd-720a8e5d354b-config-data" (OuterVolumeSpecName: "config-data") pod "d1b2da09-96cd-45b9-b3bd-720a8e5d354b" (UID: "d1b2da09-96cd-45b9-b3bd-720a8e5d354b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:21:04 crc kubenswrapper[4580]: I0112 13:21:04.991302 4580 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1b2da09-96cd-45b9-b3bd-720a8e5d354b-config-data\") on node \"crc\" DevicePath \"\"" Jan 12 13:21:04 crc kubenswrapper[4580]: I0112 13:21:04.991335 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-986kg\" (UniqueName: \"kubernetes.io/projected/d1b2da09-96cd-45b9-b3bd-720a8e5d354b-kube-api-access-986kg\") on node \"crc\" DevicePath \"\"" Jan 12 13:21:04 crc kubenswrapper[4580]: I0112 13:21:04.991350 4580 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1b2da09-96cd-45b9-b3bd-720a8e5d354b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 12 13:21:04 crc kubenswrapper[4580]: I0112 13:21:04.991359 4580 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d1b2da09-96cd-45b9-b3bd-720a8e5d354b-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 12 13:21:05 crc kubenswrapper[4580]: I0112 13:21:05.295055 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="311b249b-bf27-48ee-b167-bee55b485326" path="/var/lib/kubelet/pods/311b249b-bf27-48ee-b167-bee55b485326/volumes" Jan 12 13:21:05 crc kubenswrapper[4580]: I0112 13:21:05.494515 4580 generic.go:334] "Generic (PLEG): container finished" podID="964b43a1-1d4b-4068-8ebd-ccc80d774805" containerID="67e4f72daa89619589a126fc6101395a8748060209f655a19c1469df4169bbb8" exitCode=0 Jan 12 13:21:05 crc kubenswrapper[4580]: I0112 13:21:05.494592 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8db84466c-hxh46" event={"ID":"964b43a1-1d4b-4068-8ebd-ccc80d774805","Type":"ContainerDied","Data":"67e4f72daa89619589a126fc6101395a8748060209f655a19c1469df4169bbb8"} Jan 12 13:21:05 crc kubenswrapper[4580]: 
I0112 13:21:05.496221 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-wxmgx" event={"ID":"d1b2da09-96cd-45b9-b3bd-720a8e5d354b","Type":"ContainerDied","Data":"c50f099f848c1fb559b4fe61361d72d95bfb08de1cb7618f91951652c9987197"} Jan 12 13:21:05 crc kubenswrapper[4580]: I0112 13:21:05.496295 4580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c50f099f848c1fb559b4fe61361d72d95bfb08de1cb7618f91951652c9987197" Jan 12 13:21:05 crc kubenswrapper[4580]: I0112 13:21:05.496263 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-wxmgx" Jan 12 13:21:05 crc kubenswrapper[4580]: I0112 13:21:05.561510 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-tbpzb" Jan 12 13:21:05 crc kubenswrapper[4580]: I0112 13:21:05.926668 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8db84466c-hxh46"] Jan 12 13:21:05 crc kubenswrapper[4580]: I0112 13:21:05.952047 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74dfc89d77-2f659"] Jan 12 13:21:05 crc kubenswrapper[4580]: E0112 13:21:05.952314 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1b2da09-96cd-45b9-b3bd-720a8e5d354b" containerName="glance-db-sync" Jan 12 13:21:05 crc kubenswrapper[4580]: I0112 13:21:05.952331 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1b2da09-96cd-45b9-b3bd-720a8e5d354b" containerName="glance-db-sync" Jan 12 13:21:05 crc kubenswrapper[4580]: I0112 13:21:05.952470 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1b2da09-96cd-45b9-b3bd-720a8e5d354b" containerName="glance-db-sync" Jan 12 13:21:05 crc kubenswrapper[4580]: I0112 13:21:05.953160 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74dfc89d77-2f659" Jan 12 13:21:05 crc kubenswrapper[4580]: I0112 13:21:05.973501 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74dfc89d77-2f659"] Jan 12 13:21:06 crc kubenswrapper[4580]: I0112 13:21:06.108237 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/15e17da6-3f67-4b22-8c60-02b342fece99-dns-svc\") pod \"dnsmasq-dns-74dfc89d77-2f659\" (UID: \"15e17da6-3f67-4b22-8c60-02b342fece99\") " pod="openstack/dnsmasq-dns-74dfc89d77-2f659" Jan 12 13:21:06 crc kubenswrapper[4580]: I0112 13:21:06.108299 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/15e17da6-3f67-4b22-8c60-02b342fece99-ovsdbserver-sb\") pod \"dnsmasq-dns-74dfc89d77-2f659\" (UID: \"15e17da6-3f67-4b22-8c60-02b342fece99\") " pod="openstack/dnsmasq-dns-74dfc89d77-2f659" Jan 12 13:21:06 crc kubenswrapper[4580]: I0112 13:21:06.108334 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6ssj\" (UniqueName: \"kubernetes.io/projected/15e17da6-3f67-4b22-8c60-02b342fece99-kube-api-access-w6ssj\") pod \"dnsmasq-dns-74dfc89d77-2f659\" (UID: \"15e17da6-3f67-4b22-8c60-02b342fece99\") " pod="openstack/dnsmasq-dns-74dfc89d77-2f659" Jan 12 13:21:06 crc kubenswrapper[4580]: I0112 13:21:06.108392 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/15e17da6-3f67-4b22-8c60-02b342fece99-dns-swift-storage-0\") pod \"dnsmasq-dns-74dfc89d77-2f659\" (UID: \"15e17da6-3f67-4b22-8c60-02b342fece99\") " pod="openstack/dnsmasq-dns-74dfc89d77-2f659" Jan 12 13:21:06 crc kubenswrapper[4580]: I0112 13:21:06.108417 4580 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/15e17da6-3f67-4b22-8c60-02b342fece99-ovsdbserver-nb\") pod \"dnsmasq-dns-74dfc89d77-2f659\" (UID: \"15e17da6-3f67-4b22-8c60-02b342fece99\") " pod="openstack/dnsmasq-dns-74dfc89d77-2f659" Jan 12 13:21:06 crc kubenswrapper[4580]: I0112 13:21:06.108481 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15e17da6-3f67-4b22-8c60-02b342fece99-config\") pod \"dnsmasq-dns-74dfc89d77-2f659\" (UID: \"15e17da6-3f67-4b22-8c60-02b342fece99\") " pod="openstack/dnsmasq-dns-74dfc89d77-2f659" Jan 12 13:21:06 crc kubenswrapper[4580]: I0112 13:21:06.208968 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/15e17da6-3f67-4b22-8c60-02b342fece99-dns-svc\") pod \"dnsmasq-dns-74dfc89d77-2f659\" (UID: \"15e17da6-3f67-4b22-8c60-02b342fece99\") " pod="openstack/dnsmasq-dns-74dfc89d77-2f659" Jan 12 13:21:06 crc kubenswrapper[4580]: I0112 13:21:06.209012 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/15e17da6-3f67-4b22-8c60-02b342fece99-ovsdbserver-sb\") pod \"dnsmasq-dns-74dfc89d77-2f659\" (UID: \"15e17da6-3f67-4b22-8c60-02b342fece99\") " pod="openstack/dnsmasq-dns-74dfc89d77-2f659" Jan 12 13:21:06 crc kubenswrapper[4580]: I0112 13:21:06.209042 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6ssj\" (UniqueName: \"kubernetes.io/projected/15e17da6-3f67-4b22-8c60-02b342fece99-kube-api-access-w6ssj\") pod \"dnsmasq-dns-74dfc89d77-2f659\" (UID: \"15e17da6-3f67-4b22-8c60-02b342fece99\") " pod="openstack/dnsmasq-dns-74dfc89d77-2f659" Jan 12 13:21:06 crc kubenswrapper[4580]: I0112 13:21:06.209078 4580 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/15e17da6-3f67-4b22-8c60-02b342fece99-dns-swift-storage-0\") pod \"dnsmasq-dns-74dfc89d77-2f659\" (UID: \"15e17da6-3f67-4b22-8c60-02b342fece99\") " pod="openstack/dnsmasq-dns-74dfc89d77-2f659" Jan 12 13:21:06 crc kubenswrapper[4580]: I0112 13:21:06.209123 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/15e17da6-3f67-4b22-8c60-02b342fece99-ovsdbserver-nb\") pod \"dnsmasq-dns-74dfc89d77-2f659\" (UID: \"15e17da6-3f67-4b22-8c60-02b342fece99\") " pod="openstack/dnsmasq-dns-74dfc89d77-2f659" Jan 12 13:21:06 crc kubenswrapper[4580]: I0112 13:21:06.209201 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15e17da6-3f67-4b22-8c60-02b342fece99-config\") pod \"dnsmasq-dns-74dfc89d77-2f659\" (UID: \"15e17da6-3f67-4b22-8c60-02b342fece99\") " pod="openstack/dnsmasq-dns-74dfc89d77-2f659" Jan 12 13:21:06 crc kubenswrapper[4580]: I0112 13:21:06.209929 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/15e17da6-3f67-4b22-8c60-02b342fece99-dns-svc\") pod \"dnsmasq-dns-74dfc89d77-2f659\" (UID: \"15e17da6-3f67-4b22-8c60-02b342fece99\") " pod="openstack/dnsmasq-dns-74dfc89d77-2f659" Jan 12 13:21:06 crc kubenswrapper[4580]: I0112 13:21:06.209938 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/15e17da6-3f67-4b22-8c60-02b342fece99-ovsdbserver-sb\") pod \"dnsmasq-dns-74dfc89d77-2f659\" (UID: \"15e17da6-3f67-4b22-8c60-02b342fece99\") " pod="openstack/dnsmasq-dns-74dfc89d77-2f659" Jan 12 13:21:06 crc kubenswrapper[4580]: I0112 13:21:06.209980 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/15e17da6-3f67-4b22-8c60-02b342fece99-ovsdbserver-nb\") pod \"dnsmasq-dns-74dfc89d77-2f659\" (UID: \"15e17da6-3f67-4b22-8c60-02b342fece99\") " pod="openstack/dnsmasq-dns-74dfc89d77-2f659" Jan 12 13:21:06 crc kubenswrapper[4580]: I0112 13:21:06.210172 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15e17da6-3f67-4b22-8c60-02b342fece99-config\") pod \"dnsmasq-dns-74dfc89d77-2f659\" (UID: \"15e17da6-3f67-4b22-8c60-02b342fece99\") " pod="openstack/dnsmasq-dns-74dfc89d77-2f659" Jan 12 13:21:06 crc kubenswrapper[4580]: I0112 13:21:06.210566 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/15e17da6-3f67-4b22-8c60-02b342fece99-dns-swift-storage-0\") pod \"dnsmasq-dns-74dfc89d77-2f659\" (UID: \"15e17da6-3f67-4b22-8c60-02b342fece99\") " pod="openstack/dnsmasq-dns-74dfc89d77-2f659" Jan 12 13:21:06 crc kubenswrapper[4580]: I0112 13:21:06.226796 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6ssj\" (UniqueName: \"kubernetes.io/projected/15e17da6-3f67-4b22-8c60-02b342fece99-kube-api-access-w6ssj\") pod \"dnsmasq-dns-74dfc89d77-2f659\" (UID: \"15e17da6-3f67-4b22-8c60-02b342fece99\") " pod="openstack/dnsmasq-dns-74dfc89d77-2f659" Jan 12 13:21:06 crc kubenswrapper[4580]: I0112 13:21:06.267284 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74dfc89d77-2f659" Jan 12 13:21:06 crc kubenswrapper[4580]: I0112 13:21:06.504999 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8db84466c-hxh46" event={"ID":"964b43a1-1d4b-4068-8ebd-ccc80d774805","Type":"ContainerStarted","Data":"022100d5e85cd88962699de339c6aae38c4c7d355061b49cb1d0aeed3de8f578"} Jan 12 13:21:06 crc kubenswrapper[4580]: I0112 13:21:06.506172 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8db84466c-hxh46" Jan 12 13:21:06 crc kubenswrapper[4580]: I0112 13:21:06.521937 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8db84466c-hxh46" podStartSLOduration=3.5219224540000003 podStartE2EDuration="3.521922454s" podCreationTimestamp="2026-01-12 13:21:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:21:06.519332212 +0000 UTC m=+865.563550901" watchObservedRunningTime="2026-01-12 13:21:06.521922454 +0000 UTC m=+865.566141144" Jan 12 13:21:06 crc kubenswrapper[4580]: I0112 13:21:06.654308 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74dfc89d77-2f659"] Jan 12 13:21:06 crc kubenswrapper[4580]: W0112 13:21:06.655998 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15e17da6_3f67_4b22_8c60_02b342fece99.slice/crio-147aaa3f385682ee8ec10ecbbf8ada52bfd76cfd71813de666a005b632aee128 WatchSource:0}: Error finding container 147aaa3f385682ee8ec10ecbbf8ada52bfd76cfd71813de666a005b632aee128: Status 404 returned error can't find the container with id 147aaa3f385682ee8ec10ecbbf8ada52bfd76cfd71813de666a005b632aee128 Jan 12 13:21:07 crc kubenswrapper[4580]: I0112 13:21:07.514897 4580 generic.go:334] "Generic (PLEG): container finished" 
podID="15e17da6-3f67-4b22-8c60-02b342fece99" containerID="f4995f0142e35d309436d0daf5bb31ac0a9ef9420cea92ec5a5b2a139d14dc4c" exitCode=0 Jan 12 13:21:07 crc kubenswrapper[4580]: I0112 13:21:07.514942 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dfc89d77-2f659" event={"ID":"15e17da6-3f67-4b22-8c60-02b342fece99","Type":"ContainerDied","Data":"f4995f0142e35d309436d0daf5bb31ac0a9ef9420cea92ec5a5b2a139d14dc4c"} Jan 12 13:21:07 crc kubenswrapper[4580]: I0112 13:21:07.515243 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dfc89d77-2f659" event={"ID":"15e17da6-3f67-4b22-8c60-02b342fece99","Type":"ContainerStarted","Data":"147aaa3f385682ee8ec10ecbbf8ada52bfd76cfd71813de666a005b632aee128"} Jan 12 13:21:07 crc kubenswrapper[4580]: I0112 13:21:07.515363 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8db84466c-hxh46" podUID="964b43a1-1d4b-4068-8ebd-ccc80d774805" containerName="dnsmasq-dns" containerID="cri-o://022100d5e85cd88962699de339c6aae38c4c7d355061b49cb1d0aeed3de8f578" gracePeriod=10 Jan 12 13:21:07 crc kubenswrapper[4580]: I0112 13:21:07.882210 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8db84466c-hxh46" Jan 12 13:21:08 crc kubenswrapper[4580]: I0112 13:21:08.044775 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlmjh\" (UniqueName: \"kubernetes.io/projected/964b43a1-1d4b-4068-8ebd-ccc80d774805-kube-api-access-xlmjh\") pod \"964b43a1-1d4b-4068-8ebd-ccc80d774805\" (UID: \"964b43a1-1d4b-4068-8ebd-ccc80d774805\") " Jan 12 13:21:08 crc kubenswrapper[4580]: I0112 13:21:08.044878 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/964b43a1-1d4b-4068-8ebd-ccc80d774805-ovsdbserver-sb\") pod \"964b43a1-1d4b-4068-8ebd-ccc80d774805\" (UID: \"964b43a1-1d4b-4068-8ebd-ccc80d774805\") " Jan 12 13:21:08 crc kubenswrapper[4580]: I0112 13:21:08.045056 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/964b43a1-1d4b-4068-8ebd-ccc80d774805-dns-svc\") pod \"964b43a1-1d4b-4068-8ebd-ccc80d774805\" (UID: \"964b43a1-1d4b-4068-8ebd-ccc80d774805\") " Jan 12 13:21:08 crc kubenswrapper[4580]: I0112 13:21:08.045136 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/964b43a1-1d4b-4068-8ebd-ccc80d774805-config\") pod \"964b43a1-1d4b-4068-8ebd-ccc80d774805\" (UID: \"964b43a1-1d4b-4068-8ebd-ccc80d774805\") " Jan 12 13:21:08 crc kubenswrapper[4580]: I0112 13:21:08.045178 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/964b43a1-1d4b-4068-8ebd-ccc80d774805-ovsdbserver-nb\") pod \"964b43a1-1d4b-4068-8ebd-ccc80d774805\" (UID: \"964b43a1-1d4b-4068-8ebd-ccc80d774805\") " Jan 12 13:21:08 crc kubenswrapper[4580]: I0112 13:21:08.045201 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/964b43a1-1d4b-4068-8ebd-ccc80d774805-dns-swift-storage-0\") pod \"964b43a1-1d4b-4068-8ebd-ccc80d774805\" (UID: \"964b43a1-1d4b-4068-8ebd-ccc80d774805\") " Jan 12 13:21:08 crc kubenswrapper[4580]: I0112 13:21:08.050922 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/964b43a1-1d4b-4068-8ebd-ccc80d774805-kube-api-access-xlmjh" (OuterVolumeSpecName: "kube-api-access-xlmjh") pod "964b43a1-1d4b-4068-8ebd-ccc80d774805" (UID: "964b43a1-1d4b-4068-8ebd-ccc80d774805"). InnerVolumeSpecName "kube-api-access-xlmjh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:21:08 crc kubenswrapper[4580]: I0112 13:21:08.078850 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/964b43a1-1d4b-4068-8ebd-ccc80d774805-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "964b43a1-1d4b-4068-8ebd-ccc80d774805" (UID: "964b43a1-1d4b-4068-8ebd-ccc80d774805"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:21:08 crc kubenswrapper[4580]: I0112 13:21:08.082730 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/964b43a1-1d4b-4068-8ebd-ccc80d774805-config" (OuterVolumeSpecName: "config") pod "964b43a1-1d4b-4068-8ebd-ccc80d774805" (UID: "964b43a1-1d4b-4068-8ebd-ccc80d774805"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:21:08 crc kubenswrapper[4580]: I0112 13:21:08.085688 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/964b43a1-1d4b-4068-8ebd-ccc80d774805-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "964b43a1-1d4b-4068-8ebd-ccc80d774805" (UID: "964b43a1-1d4b-4068-8ebd-ccc80d774805"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:21:08 crc kubenswrapper[4580]: I0112 13:21:08.098338 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/964b43a1-1d4b-4068-8ebd-ccc80d774805-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "964b43a1-1d4b-4068-8ebd-ccc80d774805" (UID: "964b43a1-1d4b-4068-8ebd-ccc80d774805"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:21:08 crc kubenswrapper[4580]: I0112 13:21:08.099085 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/964b43a1-1d4b-4068-8ebd-ccc80d774805-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "964b43a1-1d4b-4068-8ebd-ccc80d774805" (UID: "964b43a1-1d4b-4068-8ebd-ccc80d774805"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:21:08 crc kubenswrapper[4580]: I0112 13:21:08.147096 4580 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/964b43a1-1d4b-4068-8ebd-ccc80d774805-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 12 13:21:08 crc kubenswrapper[4580]: I0112 13:21:08.147138 4580 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/964b43a1-1d4b-4068-8ebd-ccc80d774805-config\") on node \"crc\" DevicePath \"\"" Jan 12 13:21:08 crc kubenswrapper[4580]: I0112 13:21:08.147151 4580 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/964b43a1-1d4b-4068-8ebd-ccc80d774805-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 12 13:21:08 crc kubenswrapper[4580]: I0112 13:21:08.147162 4580 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/964b43a1-1d4b-4068-8ebd-ccc80d774805-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 12 13:21:08 crc 
kubenswrapper[4580]: I0112 13:21:08.147172 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xlmjh\" (UniqueName: \"kubernetes.io/projected/964b43a1-1d4b-4068-8ebd-ccc80d774805-kube-api-access-xlmjh\") on node \"crc\" DevicePath \"\"" Jan 12 13:21:08 crc kubenswrapper[4580]: I0112 13:21:08.147179 4580 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/964b43a1-1d4b-4068-8ebd-ccc80d774805-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 12 13:21:08 crc kubenswrapper[4580]: I0112 13:21:08.523226 4580 generic.go:334] "Generic (PLEG): container finished" podID="964b43a1-1d4b-4068-8ebd-ccc80d774805" containerID="022100d5e85cd88962699de339c6aae38c4c7d355061b49cb1d0aeed3de8f578" exitCode=0 Jan 12 13:21:08 crc kubenswrapper[4580]: I0112 13:21:08.523270 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8db84466c-hxh46" event={"ID":"964b43a1-1d4b-4068-8ebd-ccc80d774805","Type":"ContainerDied","Data":"022100d5e85cd88962699de339c6aae38c4c7d355061b49cb1d0aeed3de8f578"} Jan 12 13:21:08 crc kubenswrapper[4580]: I0112 13:21:08.524158 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8db84466c-hxh46" event={"ID":"964b43a1-1d4b-4068-8ebd-ccc80d774805","Type":"ContainerDied","Data":"704803f523f7b382d134fe5ab365871ab1f226a8ba00bbcf52385808c0743250"} Jan 12 13:21:08 crc kubenswrapper[4580]: I0112 13:21:08.524185 4580 scope.go:117] "RemoveContainer" containerID="022100d5e85cd88962699de339c6aae38c4c7d355061b49cb1d0aeed3de8f578" Jan 12 13:21:08 crc kubenswrapper[4580]: I0112 13:21:08.523282 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8db84466c-hxh46" Jan 12 13:21:08 crc kubenswrapper[4580]: I0112 13:21:08.526347 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dfc89d77-2f659" event={"ID":"15e17da6-3f67-4b22-8c60-02b342fece99","Type":"ContainerStarted","Data":"5b500add603a491beede5a7374e71500271e26875c3dfe833b2695a1c0962fcf"} Jan 12 13:21:08 crc kubenswrapper[4580]: I0112 13:21:08.526610 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74dfc89d77-2f659" Jan 12 13:21:08 crc kubenswrapper[4580]: I0112 13:21:08.558076 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74dfc89d77-2f659" podStartSLOduration=3.558049935 podStartE2EDuration="3.558049935s" podCreationTimestamp="2026-01-12 13:21:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:21:08.549054966 +0000 UTC m=+867.593273677" watchObservedRunningTime="2026-01-12 13:21:08.558049935 +0000 UTC m=+867.602268785" Jan 12 13:21:08 crc kubenswrapper[4580]: I0112 13:21:08.568203 4580 scope.go:117] "RemoveContainer" containerID="67e4f72daa89619589a126fc6101395a8748060209f655a19c1469df4169bbb8" Jan 12 13:21:08 crc kubenswrapper[4580]: I0112 13:21:08.574844 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8db84466c-hxh46"] Jan 12 13:21:08 crc kubenswrapper[4580]: I0112 13:21:08.581096 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8db84466c-hxh46"] Jan 12 13:21:08 crc kubenswrapper[4580]: I0112 13:21:08.585474 4580 scope.go:117] "RemoveContainer" containerID="022100d5e85cd88962699de339c6aae38c4c7d355061b49cb1d0aeed3de8f578" Jan 12 13:21:08 crc kubenswrapper[4580]: E0112 13:21:08.585951 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"022100d5e85cd88962699de339c6aae38c4c7d355061b49cb1d0aeed3de8f578\": container with ID starting with 022100d5e85cd88962699de339c6aae38c4c7d355061b49cb1d0aeed3de8f578 not found: ID does not exist" containerID="022100d5e85cd88962699de339c6aae38c4c7d355061b49cb1d0aeed3de8f578" Jan 12 13:21:08 crc kubenswrapper[4580]: I0112 13:21:08.585989 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"022100d5e85cd88962699de339c6aae38c4c7d355061b49cb1d0aeed3de8f578"} err="failed to get container status \"022100d5e85cd88962699de339c6aae38c4c7d355061b49cb1d0aeed3de8f578\": rpc error: code = NotFound desc = could not find container \"022100d5e85cd88962699de339c6aae38c4c7d355061b49cb1d0aeed3de8f578\": container with ID starting with 022100d5e85cd88962699de339c6aae38c4c7d355061b49cb1d0aeed3de8f578 not found: ID does not exist" Jan 12 13:21:08 crc kubenswrapper[4580]: I0112 13:21:08.586013 4580 scope.go:117] "RemoveContainer" containerID="67e4f72daa89619589a126fc6101395a8748060209f655a19c1469df4169bbb8" Jan 12 13:21:08 crc kubenswrapper[4580]: E0112 13:21:08.586324 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67e4f72daa89619589a126fc6101395a8748060209f655a19c1469df4169bbb8\": container with ID starting with 67e4f72daa89619589a126fc6101395a8748060209f655a19c1469df4169bbb8 not found: ID does not exist" containerID="67e4f72daa89619589a126fc6101395a8748060209f655a19c1469df4169bbb8" Jan 12 13:21:08 crc kubenswrapper[4580]: I0112 13:21:08.586358 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67e4f72daa89619589a126fc6101395a8748060209f655a19c1469df4169bbb8"} err="failed to get container status \"67e4f72daa89619589a126fc6101395a8748060209f655a19c1469df4169bbb8\": rpc error: code = NotFound desc = could not find container \"67e4f72daa89619589a126fc6101395a8748060209f655a19c1469df4169bbb8\": container 
with ID starting with 67e4f72daa89619589a126fc6101395a8748060209f655a19c1469df4169bbb8 not found: ID does not exist" Jan 12 13:21:09 crc kubenswrapper[4580]: I0112 13:21:09.290310 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="964b43a1-1d4b-4068-8ebd-ccc80d774805" path="/var/lib/kubelet/pods/964b43a1-1d4b-4068-8ebd-ccc80d774805/volumes" Jan 12 13:21:11 crc kubenswrapper[4580]: I0112 13:21:11.361193 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 12 13:21:11 crc kubenswrapper[4580]: I0112 13:21:11.382847 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6k6zv"] Jan 12 13:21:11 crc kubenswrapper[4580]: E0112 13:21:11.383191 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="964b43a1-1d4b-4068-8ebd-ccc80d774805" containerName="init" Jan 12 13:21:11 crc kubenswrapper[4580]: I0112 13:21:11.383209 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="964b43a1-1d4b-4068-8ebd-ccc80d774805" containerName="init" Jan 12 13:21:11 crc kubenswrapper[4580]: E0112 13:21:11.383221 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="964b43a1-1d4b-4068-8ebd-ccc80d774805" containerName="dnsmasq-dns" Jan 12 13:21:11 crc kubenswrapper[4580]: I0112 13:21:11.383230 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="964b43a1-1d4b-4068-8ebd-ccc80d774805" containerName="dnsmasq-dns" Jan 12 13:21:11 crc kubenswrapper[4580]: I0112 13:21:11.383396 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="964b43a1-1d4b-4068-8ebd-ccc80d774805" containerName="dnsmasq-dns" Jan 12 13:21:11 crc kubenswrapper[4580]: I0112 13:21:11.384424 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6k6zv" Jan 12 13:21:11 crc kubenswrapper[4580]: I0112 13:21:11.400071 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6k6zv"] Jan 12 13:21:11 crc kubenswrapper[4580]: I0112 13:21:11.512564 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8dq2\" (UniqueName: \"kubernetes.io/projected/701cbadd-e4f4-4c1d-bd56-e51ed0e75b8a-kube-api-access-j8dq2\") pod \"redhat-marketplace-6k6zv\" (UID: \"701cbadd-e4f4-4c1d-bd56-e51ed0e75b8a\") " pod="openshift-marketplace/redhat-marketplace-6k6zv" Jan 12 13:21:11 crc kubenswrapper[4580]: I0112 13:21:11.512853 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/701cbadd-e4f4-4c1d-bd56-e51ed0e75b8a-catalog-content\") pod \"redhat-marketplace-6k6zv\" (UID: \"701cbadd-e4f4-4c1d-bd56-e51ed0e75b8a\") " pod="openshift-marketplace/redhat-marketplace-6k6zv" Jan 12 13:21:11 crc kubenswrapper[4580]: I0112 13:21:11.512910 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/701cbadd-e4f4-4c1d-bd56-e51ed0e75b8a-utilities\") pod \"redhat-marketplace-6k6zv\" (UID: \"701cbadd-e4f4-4c1d-bd56-e51ed0e75b8a\") " pod="openshift-marketplace/redhat-marketplace-6k6zv" Jan 12 13:21:11 crc kubenswrapper[4580]: I0112 13:21:11.564386 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:21:11 crc kubenswrapper[4580]: I0112 13:21:11.614013 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/701cbadd-e4f4-4c1d-bd56-e51ed0e75b8a-catalog-content\") pod \"redhat-marketplace-6k6zv\" (UID: 
\"701cbadd-e4f4-4c1d-bd56-e51ed0e75b8a\") " pod="openshift-marketplace/redhat-marketplace-6k6zv" Jan 12 13:21:11 crc kubenswrapper[4580]: I0112 13:21:11.614063 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/701cbadd-e4f4-4c1d-bd56-e51ed0e75b8a-utilities\") pod \"redhat-marketplace-6k6zv\" (UID: \"701cbadd-e4f4-4c1d-bd56-e51ed0e75b8a\") " pod="openshift-marketplace/redhat-marketplace-6k6zv" Jan 12 13:21:11 crc kubenswrapper[4580]: I0112 13:21:11.614164 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8dq2\" (UniqueName: \"kubernetes.io/projected/701cbadd-e4f4-4c1d-bd56-e51ed0e75b8a-kube-api-access-j8dq2\") pod \"redhat-marketplace-6k6zv\" (UID: \"701cbadd-e4f4-4c1d-bd56-e51ed0e75b8a\") " pod="openshift-marketplace/redhat-marketplace-6k6zv" Jan 12 13:21:11 crc kubenswrapper[4580]: I0112 13:21:11.614471 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/701cbadd-e4f4-4c1d-bd56-e51ed0e75b8a-catalog-content\") pod \"redhat-marketplace-6k6zv\" (UID: \"701cbadd-e4f4-4c1d-bd56-e51ed0e75b8a\") " pod="openshift-marketplace/redhat-marketplace-6k6zv" Jan 12 13:21:11 crc kubenswrapper[4580]: I0112 13:21:11.614634 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/701cbadd-e4f4-4c1d-bd56-e51ed0e75b8a-utilities\") pod \"redhat-marketplace-6k6zv\" (UID: \"701cbadd-e4f4-4c1d-bd56-e51ed0e75b8a\") " pod="openshift-marketplace/redhat-marketplace-6k6zv" Jan 12 13:21:11 crc kubenswrapper[4580]: I0112 13:21:11.663033 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8dq2\" (UniqueName: \"kubernetes.io/projected/701cbadd-e4f4-4c1d-bd56-e51ed0e75b8a-kube-api-access-j8dq2\") pod \"redhat-marketplace-6k6zv\" (UID: \"701cbadd-e4f4-4c1d-bd56-e51ed0e75b8a\") " 
pod="openshift-marketplace/redhat-marketplace-6k6zv" Jan 12 13:21:11 crc kubenswrapper[4580]: I0112 13:21:11.697775 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6k6zv" Jan 12 13:21:11 crc kubenswrapper[4580]: I0112 13:21:11.775061 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-bsrgf"] Jan 12 13:21:11 crc kubenswrapper[4580]: I0112 13:21:11.775930 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-bsrgf" Jan 12 13:21:11 crc kubenswrapper[4580]: I0112 13:21:11.798173 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-bsrgf"] Jan 12 13:21:11 crc kubenswrapper[4580]: I0112 13:21:11.815735 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-r4xf5"] Jan 12 13:21:11 crc kubenswrapper[4580]: I0112 13:21:11.816949 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnszb\" (UniqueName: \"kubernetes.io/projected/899c8c6d-d46b-4694-97c7-35a2f3e9ff45-kube-api-access-hnszb\") pod \"cinder-db-create-bsrgf\" (UID: \"899c8c6d-d46b-4694-97c7-35a2f3e9ff45\") " pod="openstack/cinder-db-create-bsrgf" Jan 12 13:21:11 crc kubenswrapper[4580]: I0112 13:21:11.817077 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-r4xf5" Jan 12 13:21:11 crc kubenswrapper[4580]: I0112 13:21:11.817175 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/899c8c6d-d46b-4694-97c7-35a2f3e9ff45-operator-scripts\") pod \"cinder-db-create-bsrgf\" (UID: \"899c8c6d-d46b-4694-97c7-35a2f3e9ff45\") " pod="openstack/cinder-db-create-bsrgf" Jan 12 13:21:11 crc kubenswrapper[4580]: I0112 13:21:11.843602 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-c9e5-account-create-update-vj7p5"] Jan 12 13:21:11 crc kubenswrapper[4580]: I0112 13:21:11.844649 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-c9e5-account-create-update-vj7p5" Jan 12 13:21:11 crc kubenswrapper[4580]: I0112 13:21:11.849394 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 12 13:21:11 crc kubenswrapper[4580]: I0112 13:21:11.855997 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-r4xf5"] Jan 12 13:21:11 crc kubenswrapper[4580]: I0112 13:21:11.869359 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-c9e5-account-create-update-vj7p5"] Jan 12 13:21:11 crc kubenswrapper[4580]: I0112 13:21:11.919563 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-b1c4-account-create-update-sxf66"] Jan 12 13:21:11 crc kubenswrapper[4580]: I0112 13:21:11.920665 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-b1c4-account-create-update-sxf66" Jan 12 13:21:11 crc kubenswrapper[4580]: I0112 13:21:11.921038 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23c7d555-fb69-4ebf-a44c-4132a0b4f3ee-operator-scripts\") pod \"cinder-c9e5-account-create-update-vj7p5\" (UID: \"23c7d555-fb69-4ebf-a44c-4132a0b4f3ee\") " pod="openstack/cinder-c9e5-account-create-update-vj7p5" Jan 12 13:21:11 crc kubenswrapper[4580]: I0112 13:21:11.921078 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xprbt\" (UniqueName: \"kubernetes.io/projected/23c7d555-fb69-4ebf-a44c-4132a0b4f3ee-kube-api-access-xprbt\") pod \"cinder-c9e5-account-create-update-vj7p5\" (UID: \"23c7d555-fb69-4ebf-a44c-4132a0b4f3ee\") " pod="openstack/cinder-c9e5-account-create-update-vj7p5" Jan 12 13:21:11 crc kubenswrapper[4580]: I0112 13:21:11.921174 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnszb\" (UniqueName: \"kubernetes.io/projected/899c8c6d-d46b-4694-97c7-35a2f3e9ff45-kube-api-access-hnszb\") pod \"cinder-db-create-bsrgf\" (UID: \"899c8c6d-d46b-4694-97c7-35a2f3e9ff45\") " pod="openstack/cinder-db-create-bsrgf" Jan 12 13:21:11 crc kubenswrapper[4580]: I0112 13:21:11.921210 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khm8m\" (UniqueName: \"kubernetes.io/projected/1c601173-b4e4-482a-a6e7-c5d7ff359a05-kube-api-access-khm8m\") pod \"barbican-db-create-r4xf5\" (UID: \"1c601173-b4e4-482a-a6e7-c5d7ff359a05\") " pod="openstack/barbican-db-create-r4xf5" Jan 12 13:21:11 crc kubenswrapper[4580]: I0112 13:21:11.921247 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/1c601173-b4e4-482a-a6e7-c5d7ff359a05-operator-scripts\") pod \"barbican-db-create-r4xf5\" (UID: \"1c601173-b4e4-482a-a6e7-c5d7ff359a05\") " pod="openstack/barbican-db-create-r4xf5" Jan 12 13:21:11 crc kubenswrapper[4580]: I0112 13:21:11.921288 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/899c8c6d-d46b-4694-97c7-35a2f3e9ff45-operator-scripts\") pod \"cinder-db-create-bsrgf\" (UID: \"899c8c6d-d46b-4694-97c7-35a2f3e9ff45\") " pod="openstack/cinder-db-create-bsrgf" Jan 12 13:21:11 crc kubenswrapper[4580]: I0112 13:21:11.921986 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/899c8c6d-d46b-4694-97c7-35a2f3e9ff45-operator-scripts\") pod \"cinder-db-create-bsrgf\" (UID: \"899c8c6d-d46b-4694-97c7-35a2f3e9ff45\") " pod="openstack/cinder-db-create-bsrgf" Jan 12 13:21:11 crc kubenswrapper[4580]: I0112 13:21:11.925197 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 12 13:21:11 crc kubenswrapper[4580]: I0112 13:21:11.934359 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-b1c4-account-create-update-sxf66"] Jan 12 13:21:11 crc kubenswrapper[4580]: I0112 13:21:11.972549 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnszb\" (UniqueName: \"kubernetes.io/projected/899c8c6d-d46b-4694-97c7-35a2f3e9ff45-kube-api-access-hnszb\") pod \"cinder-db-create-bsrgf\" (UID: \"899c8c6d-d46b-4694-97c7-35a2f3e9ff45\") " pod="openstack/cinder-db-create-bsrgf" Jan 12 13:21:12 crc kubenswrapper[4580]: I0112 13:21:12.023861 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5k5p\" (UniqueName: \"kubernetes.io/projected/7aece380-6756-4ba8-8628-1cded9cd4005-kube-api-access-p5k5p\") pod 
\"barbican-b1c4-account-create-update-sxf66\" (UID: \"7aece380-6756-4ba8-8628-1cded9cd4005\") " pod="openstack/barbican-b1c4-account-create-update-sxf66" Jan 12 13:21:12 crc kubenswrapper[4580]: I0112 13:21:12.023923 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7aece380-6756-4ba8-8628-1cded9cd4005-operator-scripts\") pod \"barbican-b1c4-account-create-update-sxf66\" (UID: \"7aece380-6756-4ba8-8628-1cded9cd4005\") " pod="openstack/barbican-b1c4-account-create-update-sxf66" Jan 12 13:21:12 crc kubenswrapper[4580]: I0112 13:21:12.024011 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23c7d555-fb69-4ebf-a44c-4132a0b4f3ee-operator-scripts\") pod \"cinder-c9e5-account-create-update-vj7p5\" (UID: \"23c7d555-fb69-4ebf-a44c-4132a0b4f3ee\") " pod="openstack/cinder-c9e5-account-create-update-vj7p5" Jan 12 13:21:12 crc kubenswrapper[4580]: I0112 13:21:12.024054 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xprbt\" (UniqueName: \"kubernetes.io/projected/23c7d555-fb69-4ebf-a44c-4132a0b4f3ee-kube-api-access-xprbt\") pod \"cinder-c9e5-account-create-update-vj7p5\" (UID: \"23c7d555-fb69-4ebf-a44c-4132a0b4f3ee\") " pod="openstack/cinder-c9e5-account-create-update-vj7p5" Jan 12 13:21:12 crc kubenswrapper[4580]: I0112 13:21:12.024137 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khm8m\" (UniqueName: \"kubernetes.io/projected/1c601173-b4e4-482a-a6e7-c5d7ff359a05-kube-api-access-khm8m\") pod \"barbican-db-create-r4xf5\" (UID: \"1c601173-b4e4-482a-a6e7-c5d7ff359a05\") " pod="openstack/barbican-db-create-r4xf5" Jan 12 13:21:12 crc kubenswrapper[4580]: I0112 13:21:12.024243 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/1c601173-b4e4-482a-a6e7-c5d7ff359a05-operator-scripts\") pod \"barbican-db-create-r4xf5\" (UID: \"1c601173-b4e4-482a-a6e7-c5d7ff359a05\") " pod="openstack/barbican-db-create-r4xf5" Jan 12 13:21:12 crc kubenswrapper[4580]: I0112 13:21:12.024803 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23c7d555-fb69-4ebf-a44c-4132a0b4f3ee-operator-scripts\") pod \"cinder-c9e5-account-create-update-vj7p5\" (UID: \"23c7d555-fb69-4ebf-a44c-4132a0b4f3ee\") " pod="openstack/cinder-c9e5-account-create-update-vj7p5" Jan 12 13:21:12 crc kubenswrapper[4580]: I0112 13:21:12.027367 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c601173-b4e4-482a-a6e7-c5d7ff359a05-operator-scripts\") pod \"barbican-db-create-r4xf5\" (UID: \"1c601173-b4e4-482a-a6e7-c5d7ff359a05\") " pod="openstack/barbican-db-create-r4xf5" Jan 12 13:21:12 crc kubenswrapper[4580]: I0112 13:21:12.029326 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-pgk6t"] Jan 12 13:21:12 crc kubenswrapper[4580]: I0112 13:21:12.030269 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-pgk6t" Jan 12 13:21:12 crc kubenswrapper[4580]: I0112 13:21:12.049626 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-pgk6t"] Jan 12 13:21:12 crc kubenswrapper[4580]: I0112 13:21:12.060531 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xprbt\" (UniqueName: \"kubernetes.io/projected/23c7d555-fb69-4ebf-a44c-4132a0b4f3ee-kube-api-access-xprbt\") pod \"cinder-c9e5-account-create-update-vj7p5\" (UID: \"23c7d555-fb69-4ebf-a44c-4132a0b4f3ee\") " pod="openstack/cinder-c9e5-account-create-update-vj7p5" Jan 12 13:21:12 crc kubenswrapper[4580]: I0112 13:21:12.065575 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khm8m\" (UniqueName: \"kubernetes.io/projected/1c601173-b4e4-482a-a6e7-c5d7ff359a05-kube-api-access-khm8m\") pod \"barbican-db-create-r4xf5\" (UID: \"1c601173-b4e4-482a-a6e7-c5d7ff359a05\") " pod="openstack/barbican-db-create-r4xf5" Jan 12 13:21:12 crc kubenswrapper[4580]: I0112 13:21:12.095906 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-bsrgf" Jan 12 13:21:12 crc kubenswrapper[4580]: I0112 13:21:12.126905 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d42ba5a-3d41-4d57-9c38-c9a115ee139d-operator-scripts\") pod \"neutron-db-create-pgk6t\" (UID: \"5d42ba5a-3d41-4d57-9c38-c9a115ee139d\") " pod="openstack/neutron-db-create-pgk6t" Jan 12 13:21:12 crc kubenswrapper[4580]: I0112 13:21:12.126990 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g846l\" (UniqueName: \"kubernetes.io/projected/5d42ba5a-3d41-4d57-9c38-c9a115ee139d-kube-api-access-g846l\") pod \"neutron-db-create-pgk6t\" (UID: \"5d42ba5a-3d41-4d57-9c38-c9a115ee139d\") " pod="openstack/neutron-db-create-pgk6t" Jan 12 13:21:12 crc kubenswrapper[4580]: I0112 13:21:12.127295 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5k5p\" (UniqueName: \"kubernetes.io/projected/7aece380-6756-4ba8-8628-1cded9cd4005-kube-api-access-p5k5p\") pod \"barbican-b1c4-account-create-update-sxf66\" (UID: \"7aece380-6756-4ba8-8628-1cded9cd4005\") " pod="openstack/barbican-b1c4-account-create-update-sxf66" Jan 12 13:21:12 crc kubenswrapper[4580]: I0112 13:21:12.127362 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7aece380-6756-4ba8-8628-1cded9cd4005-operator-scripts\") pod \"barbican-b1c4-account-create-update-sxf66\" (UID: \"7aece380-6756-4ba8-8628-1cded9cd4005\") " pod="openstack/barbican-b1c4-account-create-update-sxf66" Jan 12 13:21:12 crc kubenswrapper[4580]: I0112 13:21:12.128374 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7aece380-6756-4ba8-8628-1cded9cd4005-operator-scripts\") pod 
\"barbican-b1c4-account-create-update-sxf66\" (UID: \"7aece380-6756-4ba8-8628-1cded9cd4005\") " pod="openstack/barbican-b1c4-account-create-update-sxf66" Jan 12 13:21:12 crc kubenswrapper[4580]: I0112 13:21:12.135242 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6k6zv"] Jan 12 13:21:12 crc kubenswrapper[4580]: I0112 13:21:12.136471 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-r4xf5" Jan 12 13:21:12 crc kubenswrapper[4580]: I0112 13:21:12.150949 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5k5p\" (UniqueName: \"kubernetes.io/projected/7aece380-6756-4ba8-8628-1cded9cd4005-kube-api-access-p5k5p\") pod \"barbican-b1c4-account-create-update-sxf66\" (UID: \"7aece380-6756-4ba8-8628-1cded9cd4005\") " pod="openstack/barbican-b1c4-account-create-update-sxf66" Jan 12 13:21:12 crc kubenswrapper[4580]: I0112 13:21:12.210171 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-c9e5-account-create-update-vj7p5" Jan 12 13:21:12 crc kubenswrapper[4580]: I0112 13:21:12.229301 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d42ba5a-3d41-4d57-9c38-c9a115ee139d-operator-scripts\") pod \"neutron-db-create-pgk6t\" (UID: \"5d42ba5a-3d41-4d57-9c38-c9a115ee139d\") " pod="openstack/neutron-db-create-pgk6t" Jan 12 13:21:12 crc kubenswrapper[4580]: I0112 13:21:12.229363 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g846l\" (UniqueName: \"kubernetes.io/projected/5d42ba5a-3d41-4d57-9c38-c9a115ee139d-kube-api-access-g846l\") pod \"neutron-db-create-pgk6t\" (UID: \"5d42ba5a-3d41-4d57-9c38-c9a115ee139d\") " pod="openstack/neutron-db-create-pgk6t" Jan 12 13:21:12 crc kubenswrapper[4580]: I0112 13:21:12.229852 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-8712-account-create-update-6c2pj"] Jan 12 13:21:12 crc kubenswrapper[4580]: I0112 13:21:12.230459 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d42ba5a-3d41-4d57-9c38-c9a115ee139d-operator-scripts\") pod \"neutron-db-create-pgk6t\" (UID: \"5d42ba5a-3d41-4d57-9c38-c9a115ee139d\") " pod="openstack/neutron-db-create-pgk6t" Jan 12 13:21:12 crc kubenswrapper[4580]: I0112 13:21:12.231053 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8712-account-create-update-6c2pj" Jan 12 13:21:12 crc kubenswrapper[4580]: I0112 13:21:12.233253 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 12 13:21:12 crc kubenswrapper[4580]: I0112 13:21:12.238612 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-b1c4-account-create-update-sxf66" Jan 12 13:21:12 crc kubenswrapper[4580]: I0112 13:21:12.247945 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8712-account-create-update-6c2pj"] Jan 12 13:21:12 crc kubenswrapper[4580]: I0112 13:21:12.248420 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g846l\" (UniqueName: \"kubernetes.io/projected/5d42ba5a-3d41-4d57-9c38-c9a115ee139d-kube-api-access-g846l\") pod \"neutron-db-create-pgk6t\" (UID: \"5d42ba5a-3d41-4d57-9c38-c9a115ee139d\") " pod="openstack/neutron-db-create-pgk6t" Jan 12 13:21:12 crc kubenswrapper[4580]: I0112 13:21:12.259295 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-x4v84"] Jan 12 13:21:12 crc kubenswrapper[4580]: I0112 13:21:12.260641 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-x4v84" Jan 12 13:21:12 crc kubenswrapper[4580]: I0112 13:21:12.266798 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 12 13:21:12 crc kubenswrapper[4580]: I0112 13:21:12.266989 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-x4v84"] Jan 12 13:21:12 crc kubenswrapper[4580]: I0112 13:21:12.267086 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 12 13:21:12 crc kubenswrapper[4580]: I0112 13:21:12.267160 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 12 13:21:12 crc kubenswrapper[4580]: I0112 13:21:12.267234 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-64jgc" Jan 12 13:21:12 crc kubenswrapper[4580]: I0112 13:21:12.331587 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/0609dc3a-35c3-4be4-8625-aad80295f0ea-config-data\") pod \"keystone-db-sync-x4v84\" (UID: \"0609dc3a-35c3-4be4-8625-aad80295f0ea\") " pod="openstack/keystone-db-sync-x4v84" Jan 12 13:21:12 crc kubenswrapper[4580]: I0112 13:21:12.331954 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0609dc3a-35c3-4be4-8625-aad80295f0ea-combined-ca-bundle\") pod \"keystone-db-sync-x4v84\" (UID: \"0609dc3a-35c3-4be4-8625-aad80295f0ea\") " pod="openstack/keystone-db-sync-x4v84" Jan 12 13:21:12 crc kubenswrapper[4580]: I0112 13:21:12.332029 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zq5z8\" (UniqueName: \"kubernetes.io/projected/0609dc3a-35c3-4be4-8625-aad80295f0ea-kube-api-access-zq5z8\") pod \"keystone-db-sync-x4v84\" (UID: \"0609dc3a-35c3-4be4-8625-aad80295f0ea\") " pod="openstack/keystone-db-sync-x4v84" Jan 12 13:21:12 crc kubenswrapper[4580]: I0112 13:21:12.332052 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xp7s2\" (UniqueName: \"kubernetes.io/projected/7ac9c176-3792-4485-b000-a8cfc4c53f21-kube-api-access-xp7s2\") pod \"neutron-8712-account-create-update-6c2pj\" (UID: \"7ac9c176-3792-4485-b000-a8cfc4c53f21\") " pod="openstack/neutron-8712-account-create-update-6c2pj" Jan 12 13:21:12 crc kubenswrapper[4580]: I0112 13:21:12.332082 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ac9c176-3792-4485-b000-a8cfc4c53f21-operator-scripts\") pod \"neutron-8712-account-create-update-6c2pj\" (UID: \"7ac9c176-3792-4485-b000-a8cfc4c53f21\") " pod="openstack/neutron-8712-account-create-update-6c2pj" Jan 12 13:21:12 crc kubenswrapper[4580]: I0112 13:21:12.350519 4580 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/neutron-db-create-pgk6t" Jan 12 13:21:12 crc kubenswrapper[4580]: I0112 13:21:12.434174 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0609dc3a-35c3-4be4-8625-aad80295f0ea-config-data\") pod \"keystone-db-sync-x4v84\" (UID: \"0609dc3a-35c3-4be4-8625-aad80295f0ea\") " pod="openstack/keystone-db-sync-x4v84" Jan 12 13:21:12 crc kubenswrapper[4580]: I0112 13:21:12.434295 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0609dc3a-35c3-4be4-8625-aad80295f0ea-combined-ca-bundle\") pod \"keystone-db-sync-x4v84\" (UID: \"0609dc3a-35c3-4be4-8625-aad80295f0ea\") " pod="openstack/keystone-db-sync-x4v84" Jan 12 13:21:12 crc kubenswrapper[4580]: I0112 13:21:12.434366 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zq5z8\" (UniqueName: \"kubernetes.io/projected/0609dc3a-35c3-4be4-8625-aad80295f0ea-kube-api-access-zq5z8\") pod \"keystone-db-sync-x4v84\" (UID: \"0609dc3a-35c3-4be4-8625-aad80295f0ea\") " pod="openstack/keystone-db-sync-x4v84" Jan 12 13:21:12 crc kubenswrapper[4580]: I0112 13:21:12.434388 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xp7s2\" (UniqueName: \"kubernetes.io/projected/7ac9c176-3792-4485-b000-a8cfc4c53f21-kube-api-access-xp7s2\") pod \"neutron-8712-account-create-update-6c2pj\" (UID: \"7ac9c176-3792-4485-b000-a8cfc4c53f21\") " pod="openstack/neutron-8712-account-create-update-6c2pj" Jan 12 13:21:12 crc kubenswrapper[4580]: I0112 13:21:12.434835 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ac9c176-3792-4485-b000-a8cfc4c53f21-operator-scripts\") pod \"neutron-8712-account-create-update-6c2pj\" (UID: \"7ac9c176-3792-4485-b000-a8cfc4c53f21\") " 
pod="openstack/neutron-8712-account-create-update-6c2pj" Jan 12 13:21:12 crc kubenswrapper[4580]: I0112 13:21:12.435671 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ac9c176-3792-4485-b000-a8cfc4c53f21-operator-scripts\") pod \"neutron-8712-account-create-update-6c2pj\" (UID: \"7ac9c176-3792-4485-b000-a8cfc4c53f21\") " pod="openstack/neutron-8712-account-create-update-6c2pj" Jan 12 13:21:12 crc kubenswrapper[4580]: I0112 13:21:12.442845 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0609dc3a-35c3-4be4-8625-aad80295f0ea-combined-ca-bundle\") pod \"keystone-db-sync-x4v84\" (UID: \"0609dc3a-35c3-4be4-8625-aad80295f0ea\") " pod="openstack/keystone-db-sync-x4v84" Jan 12 13:21:12 crc kubenswrapper[4580]: I0112 13:21:12.442887 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0609dc3a-35c3-4be4-8625-aad80295f0ea-config-data\") pod \"keystone-db-sync-x4v84\" (UID: \"0609dc3a-35c3-4be4-8625-aad80295f0ea\") " pod="openstack/keystone-db-sync-x4v84" Jan 12 13:21:12 crc kubenswrapper[4580]: I0112 13:21:12.451939 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xp7s2\" (UniqueName: \"kubernetes.io/projected/7ac9c176-3792-4485-b000-a8cfc4c53f21-kube-api-access-xp7s2\") pod \"neutron-8712-account-create-update-6c2pj\" (UID: \"7ac9c176-3792-4485-b000-a8cfc4c53f21\") " pod="openstack/neutron-8712-account-create-update-6c2pj" Jan 12 13:21:12 crc kubenswrapper[4580]: I0112 13:21:12.453711 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zq5z8\" (UniqueName: \"kubernetes.io/projected/0609dc3a-35c3-4be4-8625-aad80295f0ea-kube-api-access-zq5z8\") pod \"keystone-db-sync-x4v84\" (UID: \"0609dc3a-35c3-4be4-8625-aad80295f0ea\") " 
pod="openstack/keystone-db-sync-x4v84" Jan 12 13:21:12 crc kubenswrapper[4580]: I0112 13:21:12.547200 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8712-account-create-update-6c2pj" Jan 12 13:21:12 crc kubenswrapper[4580]: I0112 13:21:12.579681 4580 generic.go:334] "Generic (PLEG): container finished" podID="701cbadd-e4f4-4c1d-bd56-e51ed0e75b8a" containerID="bb12cd1af856d673149198532f0eb7ad2fb9b59423fbb2904629c32224191ca3" exitCode=0 Jan 12 13:21:12 crc kubenswrapper[4580]: I0112 13:21:12.579727 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6k6zv" event={"ID":"701cbadd-e4f4-4c1d-bd56-e51ed0e75b8a","Type":"ContainerDied","Data":"bb12cd1af856d673149198532f0eb7ad2fb9b59423fbb2904629c32224191ca3"} Jan 12 13:21:12 crc kubenswrapper[4580]: I0112 13:21:12.579753 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6k6zv" event={"ID":"701cbadd-e4f4-4c1d-bd56-e51ed0e75b8a","Type":"ContainerStarted","Data":"d2adab61128f3044e831ab44cfc65204c14e9772746ee778be82c41d9ca42e7d"} Jan 12 13:21:12 crc kubenswrapper[4580]: I0112 13:21:12.581481 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-x4v84" Jan 12 13:21:12 crc kubenswrapper[4580]: I0112 13:21:12.598913 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-bsrgf"] Jan 12 13:21:12 crc kubenswrapper[4580]: I0112 13:21:12.665476 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-r4xf5"] Jan 12 13:21:12 crc kubenswrapper[4580]: I0112 13:21:12.711093 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-pgk6t"] Jan 12 13:21:12 crc kubenswrapper[4580]: I0112 13:21:12.747322 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-c9e5-account-create-update-vj7p5"] Jan 12 13:21:12 crc kubenswrapper[4580]: I0112 13:21:12.825772 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-b1c4-account-create-update-sxf66"] Jan 12 13:21:13 crc kubenswrapper[4580]: I0112 13:21:13.089776 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-x4v84"] Jan 12 13:21:13 crc kubenswrapper[4580]: W0112 13:21:13.130999 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0609dc3a_35c3_4be4_8625_aad80295f0ea.slice/crio-56fc84687ed6f83fe4fff0cebcd8b0cf50db99c770b28222a76a8d4217fbe00c WatchSource:0}: Error finding container 56fc84687ed6f83fe4fff0cebcd8b0cf50db99c770b28222a76a8d4217fbe00c: Status 404 returned error can't find the container with id 56fc84687ed6f83fe4fff0cebcd8b0cf50db99c770b28222a76a8d4217fbe00c Jan 12 13:21:13 crc kubenswrapper[4580]: I0112 13:21:13.175039 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8712-account-create-update-6c2pj"] Jan 12 13:21:13 crc kubenswrapper[4580]: W0112 13:21:13.316733 4580 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ac9c176_3792_4485_b000_a8cfc4c53f21.slice/crio-c4dd9359425caff65d5a42409ea1ca37a88ddf90f0eb84ffa3bc224ccabf794c WatchSource:0}: Error finding container c4dd9359425caff65d5a42409ea1ca37a88ddf90f0eb84ffa3bc224ccabf794c: Status 404 returned error can't find the container with id c4dd9359425caff65d5a42409ea1ca37a88ddf90f0eb84ffa3bc224ccabf794c Jan 12 13:21:13 crc kubenswrapper[4580]: I0112 13:21:13.587972 4580 generic.go:334] "Generic (PLEG): container finished" podID="701cbadd-e4f4-4c1d-bd56-e51ed0e75b8a" containerID="986ef39f1f2f75bb0c1f35d644e3aac4488eb44ef2c5efb0d952fa1212b9a13b" exitCode=0 Jan 12 13:21:13 crc kubenswrapper[4580]: I0112 13:21:13.588037 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6k6zv" event={"ID":"701cbadd-e4f4-4c1d-bd56-e51ed0e75b8a","Type":"ContainerDied","Data":"986ef39f1f2f75bb0c1f35d644e3aac4488eb44ef2c5efb0d952fa1212b9a13b"} Jan 12 13:21:13 crc kubenswrapper[4580]: I0112 13:21:13.596036 4580 generic.go:334] "Generic (PLEG): container finished" podID="1c601173-b4e4-482a-a6e7-c5d7ff359a05" containerID="4c6f96f05ab13c382074308c3e73b4c335cb41f3bf56a815ed468b366edd7be7" exitCode=0 Jan 12 13:21:13 crc kubenswrapper[4580]: I0112 13:21:13.596095 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-r4xf5" event={"ID":"1c601173-b4e4-482a-a6e7-c5d7ff359a05","Type":"ContainerDied","Data":"4c6f96f05ab13c382074308c3e73b4c335cb41f3bf56a815ed468b366edd7be7"} Jan 12 13:21:13 crc kubenswrapper[4580]: I0112 13:21:13.596129 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-r4xf5" event={"ID":"1c601173-b4e4-482a-a6e7-c5d7ff359a05","Type":"ContainerStarted","Data":"bbe7b4d36102b439a702fdc3779d5cd83dd04f813bd20e58bd4fd93e0e1d1d75"} Jan 12 13:21:13 crc kubenswrapper[4580]: I0112 13:21:13.597483 4580 generic.go:334] "Generic (PLEG): container finished" 
podID="5d42ba5a-3d41-4d57-9c38-c9a115ee139d" containerID="ebe319b0186fc31e0386d03451272307737e022576009ee882bd299a18729bc1" exitCode=0 Jan 12 13:21:13 crc kubenswrapper[4580]: I0112 13:21:13.597552 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-pgk6t" event={"ID":"5d42ba5a-3d41-4d57-9c38-c9a115ee139d","Type":"ContainerDied","Data":"ebe319b0186fc31e0386d03451272307737e022576009ee882bd299a18729bc1"} Jan 12 13:21:13 crc kubenswrapper[4580]: I0112 13:21:13.597597 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-pgk6t" event={"ID":"5d42ba5a-3d41-4d57-9c38-c9a115ee139d","Type":"ContainerStarted","Data":"eb58029aea70b0bdbac7cf6e4c8a8c57738c1fbe6ae2b98c2daa2d43fd324855"} Jan 12 13:21:13 crc kubenswrapper[4580]: I0112 13:21:13.598709 4580 generic.go:334] "Generic (PLEG): container finished" podID="899c8c6d-d46b-4694-97c7-35a2f3e9ff45" containerID="1e1787d4224e9f482311b2fc6bbea9890c23a5d5d6662897289c0e8cb4806aad" exitCode=0 Jan 12 13:21:13 crc kubenswrapper[4580]: I0112 13:21:13.598767 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-bsrgf" event={"ID":"899c8c6d-d46b-4694-97c7-35a2f3e9ff45","Type":"ContainerDied","Data":"1e1787d4224e9f482311b2fc6bbea9890c23a5d5d6662897289c0e8cb4806aad"} Jan 12 13:21:13 crc kubenswrapper[4580]: I0112 13:21:13.598785 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-bsrgf" event={"ID":"899c8c6d-d46b-4694-97c7-35a2f3e9ff45","Type":"ContainerStarted","Data":"4d225698b64667e09f0c3272f431b2d457ae0a0f88fb8b848c5ed3b8e42986dd"} Jan 12 13:21:13 crc kubenswrapper[4580]: I0112 13:21:13.600052 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-x4v84" event={"ID":"0609dc3a-35c3-4be4-8625-aad80295f0ea","Type":"ContainerStarted","Data":"56fc84687ed6f83fe4fff0cebcd8b0cf50db99c770b28222a76a8d4217fbe00c"} Jan 12 13:21:13 crc kubenswrapper[4580]: I0112 13:21:13.601318 4580 
generic.go:334] "Generic (PLEG): container finished" podID="7aece380-6756-4ba8-8628-1cded9cd4005" containerID="32d7509faeb05df8322a013109b7939eb9eb224e6dc8e66eb443205e7d91533b" exitCode=0 Jan 12 13:21:13 crc kubenswrapper[4580]: I0112 13:21:13.601377 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-b1c4-account-create-update-sxf66" event={"ID":"7aece380-6756-4ba8-8628-1cded9cd4005","Type":"ContainerDied","Data":"32d7509faeb05df8322a013109b7939eb9eb224e6dc8e66eb443205e7d91533b"} Jan 12 13:21:13 crc kubenswrapper[4580]: I0112 13:21:13.601402 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-b1c4-account-create-update-sxf66" event={"ID":"7aece380-6756-4ba8-8628-1cded9cd4005","Type":"ContainerStarted","Data":"927ea15028c0a9d98b8223d4bdbea01e69982932e025323b7d3898f4d4893de5"} Jan 12 13:21:13 crc kubenswrapper[4580]: I0112 13:21:13.602620 4580 generic.go:334] "Generic (PLEG): container finished" podID="23c7d555-fb69-4ebf-a44c-4132a0b4f3ee" containerID="f5c48aef4732f12bdd251a0162274ccf94b53ccb98c9c0456744ea06cd1a1dd9" exitCode=0 Jan 12 13:21:13 crc kubenswrapper[4580]: I0112 13:21:13.602669 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c9e5-account-create-update-vj7p5" event={"ID":"23c7d555-fb69-4ebf-a44c-4132a0b4f3ee","Type":"ContainerDied","Data":"f5c48aef4732f12bdd251a0162274ccf94b53ccb98c9c0456744ea06cd1a1dd9"} Jan 12 13:21:13 crc kubenswrapper[4580]: I0112 13:21:13.602686 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c9e5-account-create-update-vj7p5" event={"ID":"23c7d555-fb69-4ebf-a44c-4132a0b4f3ee","Type":"ContainerStarted","Data":"448e683cff70db2bc46d1b82b6148e0c73b139b372b2c5e37828db2fb23a60c5"} Jan 12 13:21:13 crc kubenswrapper[4580]: I0112 13:21:13.606007 4580 generic.go:334] "Generic (PLEG): container finished" podID="7ac9c176-3792-4485-b000-a8cfc4c53f21" containerID="543d08e2b210315c80a3e52ef0e43b8fc07cc6c5b729abece9dcd7133630a163" 
exitCode=0 Jan 12 13:21:13 crc kubenswrapper[4580]: I0112 13:21:13.606037 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8712-account-create-update-6c2pj" event={"ID":"7ac9c176-3792-4485-b000-a8cfc4c53f21","Type":"ContainerDied","Data":"543d08e2b210315c80a3e52ef0e43b8fc07cc6c5b729abece9dcd7133630a163"} Jan 12 13:21:13 crc kubenswrapper[4580]: I0112 13:21:13.606054 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8712-account-create-update-6c2pj" event={"ID":"7ac9c176-3792-4485-b000-a8cfc4c53f21","Type":"ContainerStarted","Data":"c4dd9359425caff65d5a42409ea1ca37a88ddf90f0eb84ffa3bc224ccabf794c"} Jan 12 13:21:14 crc kubenswrapper[4580]: I0112 13:21:14.615331 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6k6zv" event={"ID":"701cbadd-e4f4-4c1d-bd56-e51ed0e75b8a","Type":"ContainerStarted","Data":"241f08f82663f20651ffa14ad492dc2d8fab7f5c65ab7b29b93aba202c05450d"} Jan 12 13:21:14 crc kubenswrapper[4580]: I0112 13:21:14.642085 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6k6zv" podStartSLOduration=2.154460627 podStartE2EDuration="3.642072101s" podCreationTimestamp="2026-01-12 13:21:11 +0000 UTC" firstStartedPulling="2026-01-12 13:21:12.587517382 +0000 UTC m=+871.631736073" lastFinishedPulling="2026-01-12 13:21:14.075128857 +0000 UTC m=+873.119347547" observedRunningTime="2026-01-12 13:21:14.635367342 +0000 UTC m=+873.679586042" watchObservedRunningTime="2026-01-12 13:21:14.642072101 +0000 UTC m=+873.686290792" Jan 12 13:21:14 crc kubenswrapper[4580]: I0112 13:21:14.968779 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-c9e5-account-create-update-vj7p5" Jan 12 13:21:15 crc kubenswrapper[4580]: I0112 13:21:15.077555 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xprbt\" (UniqueName: \"kubernetes.io/projected/23c7d555-fb69-4ebf-a44c-4132a0b4f3ee-kube-api-access-xprbt\") pod \"23c7d555-fb69-4ebf-a44c-4132a0b4f3ee\" (UID: \"23c7d555-fb69-4ebf-a44c-4132a0b4f3ee\") " Jan 12 13:21:15 crc kubenswrapper[4580]: I0112 13:21:15.077666 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23c7d555-fb69-4ebf-a44c-4132a0b4f3ee-operator-scripts\") pod \"23c7d555-fb69-4ebf-a44c-4132a0b4f3ee\" (UID: \"23c7d555-fb69-4ebf-a44c-4132a0b4f3ee\") " Jan 12 13:21:15 crc kubenswrapper[4580]: I0112 13:21:15.078498 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23c7d555-fb69-4ebf-a44c-4132a0b4f3ee-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "23c7d555-fb69-4ebf-a44c-4132a0b4f3ee" (UID: "23c7d555-fb69-4ebf-a44c-4132a0b4f3ee"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:21:15 crc kubenswrapper[4580]: I0112 13:21:15.082226 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-bsrgf" Jan 12 13:21:15 crc kubenswrapper[4580]: I0112 13:21:15.083730 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23c7d555-fb69-4ebf-a44c-4132a0b4f3ee-kube-api-access-xprbt" (OuterVolumeSpecName: "kube-api-access-xprbt") pod "23c7d555-fb69-4ebf-a44c-4132a0b4f3ee" (UID: "23c7d555-fb69-4ebf-a44c-4132a0b4f3ee"). InnerVolumeSpecName "kube-api-access-xprbt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:21:15 crc kubenswrapper[4580]: I0112 13:21:15.087801 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-b1c4-account-create-update-sxf66" Jan 12 13:21:15 crc kubenswrapper[4580]: I0112 13:21:15.097081 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8712-account-create-update-6c2pj" Jan 12 13:21:15 crc kubenswrapper[4580]: I0112 13:21:15.104843 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-pgk6t" Jan 12 13:21:15 crc kubenswrapper[4580]: I0112 13:21:15.122410 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-r4xf5" Jan 12 13:21:15 crc kubenswrapper[4580]: I0112 13:21:15.178888 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ac9c176-3792-4485-b000-a8cfc4c53f21-operator-scripts\") pod \"7ac9c176-3792-4485-b000-a8cfc4c53f21\" (UID: \"7ac9c176-3792-4485-b000-a8cfc4c53f21\") " Jan 12 13:21:15 crc kubenswrapper[4580]: I0112 13:21:15.179222 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hnszb\" (UniqueName: \"kubernetes.io/projected/899c8c6d-d46b-4694-97c7-35a2f3e9ff45-kube-api-access-hnszb\") pod \"899c8c6d-d46b-4694-97c7-35a2f3e9ff45\" (UID: \"899c8c6d-d46b-4694-97c7-35a2f3e9ff45\") " Jan 12 13:21:15 crc kubenswrapper[4580]: I0112 13:21:15.179263 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xp7s2\" (UniqueName: \"kubernetes.io/projected/7ac9c176-3792-4485-b000-a8cfc4c53f21-kube-api-access-xp7s2\") pod \"7ac9c176-3792-4485-b000-a8cfc4c53f21\" (UID: \"7ac9c176-3792-4485-b000-a8cfc4c53f21\") " Jan 12 13:21:15 crc kubenswrapper[4580]: I0112 13:21:15.179296 4580 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khm8m\" (UniqueName: \"kubernetes.io/projected/1c601173-b4e4-482a-a6e7-c5d7ff359a05-kube-api-access-khm8m\") pod \"1c601173-b4e4-482a-a6e7-c5d7ff359a05\" (UID: \"1c601173-b4e4-482a-a6e7-c5d7ff359a05\") " Jan 12 13:21:15 crc kubenswrapper[4580]: I0112 13:21:15.179322 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g846l\" (UniqueName: \"kubernetes.io/projected/5d42ba5a-3d41-4d57-9c38-c9a115ee139d-kube-api-access-g846l\") pod \"5d42ba5a-3d41-4d57-9c38-c9a115ee139d\" (UID: \"5d42ba5a-3d41-4d57-9c38-c9a115ee139d\") " Jan 12 13:21:15 crc kubenswrapper[4580]: I0112 13:21:15.179367 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d42ba5a-3d41-4d57-9c38-c9a115ee139d-operator-scripts\") pod \"5d42ba5a-3d41-4d57-9c38-c9a115ee139d\" (UID: \"5d42ba5a-3d41-4d57-9c38-c9a115ee139d\") " Jan 12 13:21:15 crc kubenswrapper[4580]: I0112 13:21:15.179384 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ac9c176-3792-4485-b000-a8cfc4c53f21-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7ac9c176-3792-4485-b000-a8cfc4c53f21" (UID: "7ac9c176-3792-4485-b000-a8cfc4c53f21"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:21:15 crc kubenswrapper[4580]: I0112 13:21:15.179464 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c601173-b4e4-482a-a6e7-c5d7ff359a05-operator-scripts\") pod \"1c601173-b4e4-482a-a6e7-c5d7ff359a05\" (UID: \"1c601173-b4e4-482a-a6e7-c5d7ff359a05\") " Jan 12 13:21:15 crc kubenswrapper[4580]: I0112 13:21:15.179483 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7aece380-6756-4ba8-8628-1cded9cd4005-operator-scripts\") pod \"7aece380-6756-4ba8-8628-1cded9cd4005\" (UID: \"7aece380-6756-4ba8-8628-1cded9cd4005\") " Jan 12 13:21:15 crc kubenswrapper[4580]: I0112 13:21:15.179532 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/899c8c6d-d46b-4694-97c7-35a2f3e9ff45-operator-scripts\") pod \"899c8c6d-d46b-4694-97c7-35a2f3e9ff45\" (UID: \"899c8c6d-d46b-4694-97c7-35a2f3e9ff45\") " Jan 12 13:21:15 crc kubenswrapper[4580]: I0112 13:21:15.179563 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5k5p\" (UniqueName: \"kubernetes.io/projected/7aece380-6756-4ba8-8628-1cded9cd4005-kube-api-access-p5k5p\") pod \"7aece380-6756-4ba8-8628-1cded9cd4005\" (UID: \"7aece380-6756-4ba8-8628-1cded9cd4005\") " Jan 12 13:21:15 crc kubenswrapper[4580]: I0112 13:21:15.179849 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xprbt\" (UniqueName: \"kubernetes.io/projected/23c7d555-fb69-4ebf-a44c-4132a0b4f3ee-kube-api-access-xprbt\") on node \"crc\" DevicePath \"\"" Jan 12 13:21:15 crc kubenswrapper[4580]: I0112 13:21:15.179861 4580 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/23c7d555-fb69-4ebf-a44c-4132a0b4f3ee-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 12 13:21:15 crc kubenswrapper[4580]: I0112 13:21:15.179870 4580 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ac9c176-3792-4485-b000-a8cfc4c53f21-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 12 13:21:15 crc kubenswrapper[4580]: I0112 13:21:15.180374 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c601173-b4e4-482a-a6e7-c5d7ff359a05-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1c601173-b4e4-482a-a6e7-c5d7ff359a05" (UID: "1c601173-b4e4-482a-a6e7-c5d7ff359a05"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:21:15 crc kubenswrapper[4580]: I0112 13:21:15.180624 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d42ba5a-3d41-4d57-9c38-c9a115ee139d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5d42ba5a-3d41-4d57-9c38-c9a115ee139d" (UID: "5d42ba5a-3d41-4d57-9c38-c9a115ee139d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:21:15 crc kubenswrapper[4580]: I0112 13:21:15.180849 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7aece380-6756-4ba8-8628-1cded9cd4005-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7aece380-6756-4ba8-8628-1cded9cd4005" (UID: "7aece380-6756-4ba8-8628-1cded9cd4005"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:21:15 crc kubenswrapper[4580]: I0112 13:21:15.181556 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/899c8c6d-d46b-4694-97c7-35a2f3e9ff45-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "899c8c6d-d46b-4694-97c7-35a2f3e9ff45" (UID: "899c8c6d-d46b-4694-97c7-35a2f3e9ff45"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:21:15 crc kubenswrapper[4580]: I0112 13:21:15.182666 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ac9c176-3792-4485-b000-a8cfc4c53f21-kube-api-access-xp7s2" (OuterVolumeSpecName: "kube-api-access-xp7s2") pod "7ac9c176-3792-4485-b000-a8cfc4c53f21" (UID: "7ac9c176-3792-4485-b000-a8cfc4c53f21"). InnerVolumeSpecName "kube-api-access-xp7s2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:21:15 crc kubenswrapper[4580]: I0112 13:21:15.184334 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d42ba5a-3d41-4d57-9c38-c9a115ee139d-kube-api-access-g846l" (OuterVolumeSpecName: "kube-api-access-g846l") pod "5d42ba5a-3d41-4d57-9c38-c9a115ee139d" (UID: "5d42ba5a-3d41-4d57-9c38-c9a115ee139d"). InnerVolumeSpecName "kube-api-access-g846l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:21:15 crc kubenswrapper[4580]: I0112 13:21:15.184417 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/899c8c6d-d46b-4694-97c7-35a2f3e9ff45-kube-api-access-hnszb" (OuterVolumeSpecName: "kube-api-access-hnszb") pod "899c8c6d-d46b-4694-97c7-35a2f3e9ff45" (UID: "899c8c6d-d46b-4694-97c7-35a2f3e9ff45"). InnerVolumeSpecName "kube-api-access-hnszb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:21:15 crc kubenswrapper[4580]: I0112 13:21:15.184647 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c601173-b4e4-482a-a6e7-c5d7ff359a05-kube-api-access-khm8m" (OuterVolumeSpecName: "kube-api-access-khm8m") pod "1c601173-b4e4-482a-a6e7-c5d7ff359a05" (UID: "1c601173-b4e4-482a-a6e7-c5d7ff359a05"). InnerVolumeSpecName "kube-api-access-khm8m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:21:15 crc kubenswrapper[4580]: I0112 13:21:15.187840 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7aece380-6756-4ba8-8628-1cded9cd4005-kube-api-access-p5k5p" (OuterVolumeSpecName: "kube-api-access-p5k5p") pod "7aece380-6756-4ba8-8628-1cded9cd4005" (UID: "7aece380-6756-4ba8-8628-1cded9cd4005"). InnerVolumeSpecName "kube-api-access-p5k5p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:21:15 crc kubenswrapper[4580]: I0112 13:21:15.281246 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hnszb\" (UniqueName: \"kubernetes.io/projected/899c8c6d-d46b-4694-97c7-35a2f3e9ff45-kube-api-access-hnszb\") on node \"crc\" DevicePath \"\"" Jan 12 13:21:15 crc kubenswrapper[4580]: I0112 13:21:15.281272 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xp7s2\" (UniqueName: \"kubernetes.io/projected/7ac9c176-3792-4485-b000-a8cfc4c53f21-kube-api-access-xp7s2\") on node \"crc\" DevicePath \"\"" Jan 12 13:21:15 crc kubenswrapper[4580]: I0112 13:21:15.281284 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khm8m\" (UniqueName: \"kubernetes.io/projected/1c601173-b4e4-482a-a6e7-c5d7ff359a05-kube-api-access-khm8m\") on node \"crc\" DevicePath \"\"" Jan 12 13:21:15 crc kubenswrapper[4580]: I0112 13:21:15.281292 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g846l\" (UniqueName: 
\"kubernetes.io/projected/5d42ba5a-3d41-4d57-9c38-c9a115ee139d-kube-api-access-g846l\") on node \"crc\" DevicePath \"\"" Jan 12 13:21:15 crc kubenswrapper[4580]: I0112 13:21:15.281300 4580 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d42ba5a-3d41-4d57-9c38-c9a115ee139d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 12 13:21:15 crc kubenswrapper[4580]: I0112 13:21:15.281308 4580 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c601173-b4e4-482a-a6e7-c5d7ff359a05-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 12 13:21:15 crc kubenswrapper[4580]: I0112 13:21:15.281314 4580 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7aece380-6756-4ba8-8628-1cded9cd4005-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 12 13:21:15 crc kubenswrapper[4580]: I0112 13:21:15.281322 4580 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/899c8c6d-d46b-4694-97c7-35a2f3e9ff45-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 12 13:21:15 crc kubenswrapper[4580]: I0112 13:21:15.281329 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5k5p\" (UniqueName: \"kubernetes.io/projected/7aece380-6756-4ba8-8628-1cded9cd4005-kube-api-access-p5k5p\") on node \"crc\" DevicePath \"\"" Jan 12 13:21:15 crc kubenswrapper[4580]: I0112 13:21:15.623184 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8712-account-create-update-6c2pj" event={"ID":"7ac9c176-3792-4485-b000-a8cfc4c53f21","Type":"ContainerDied","Data":"c4dd9359425caff65d5a42409ea1ca37a88ddf90f0eb84ffa3bc224ccabf794c"} Jan 12 13:21:15 crc kubenswrapper[4580]: I0112 13:21:15.623227 4580 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="c4dd9359425caff65d5a42409ea1ca37a88ddf90f0eb84ffa3bc224ccabf794c" Jan 12 13:21:15 crc kubenswrapper[4580]: I0112 13:21:15.623293 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8712-account-create-update-6c2pj" Jan 12 13:21:15 crc kubenswrapper[4580]: I0112 13:21:15.626407 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-r4xf5" event={"ID":"1c601173-b4e4-482a-a6e7-c5d7ff359a05","Type":"ContainerDied","Data":"bbe7b4d36102b439a702fdc3779d5cd83dd04f813bd20e58bd4fd93e0e1d1d75"} Jan 12 13:21:15 crc kubenswrapper[4580]: I0112 13:21:15.626451 4580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bbe7b4d36102b439a702fdc3779d5cd83dd04f813bd20e58bd4fd93e0e1d1d75" Jan 12 13:21:15 crc kubenswrapper[4580]: I0112 13:21:15.626417 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-r4xf5" Jan 12 13:21:15 crc kubenswrapper[4580]: I0112 13:21:15.629790 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-bsrgf" event={"ID":"899c8c6d-d46b-4694-97c7-35a2f3e9ff45","Type":"ContainerDied","Data":"4d225698b64667e09f0c3272f431b2d457ae0a0f88fb8b848c5ed3b8e42986dd"} Jan 12 13:21:15 crc kubenswrapper[4580]: I0112 13:21:15.630025 4580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d225698b64667e09f0c3272f431b2d457ae0a0f88fb8b848c5ed3b8e42986dd" Jan 12 13:21:15 crc kubenswrapper[4580]: I0112 13:21:15.630086 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-bsrgf" Jan 12 13:21:15 crc kubenswrapper[4580]: I0112 13:21:15.634211 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-pgk6t" event={"ID":"5d42ba5a-3d41-4d57-9c38-c9a115ee139d","Type":"ContainerDied","Data":"eb58029aea70b0bdbac7cf6e4c8a8c57738c1fbe6ae2b98c2daa2d43fd324855"} Jan 12 13:21:15 crc kubenswrapper[4580]: I0112 13:21:15.634246 4580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb58029aea70b0bdbac7cf6e4c8a8c57738c1fbe6ae2b98c2daa2d43fd324855" Jan 12 13:21:15 crc kubenswrapper[4580]: I0112 13:21:15.634229 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-pgk6t" Jan 12 13:21:15 crc kubenswrapper[4580]: I0112 13:21:15.635884 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-b1c4-account-create-update-sxf66" Jan 12 13:21:15 crc kubenswrapper[4580]: I0112 13:21:15.635905 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-b1c4-account-create-update-sxf66" event={"ID":"7aece380-6756-4ba8-8628-1cded9cd4005","Type":"ContainerDied","Data":"927ea15028c0a9d98b8223d4bdbea01e69982932e025323b7d3898f4d4893de5"} Jan 12 13:21:15 crc kubenswrapper[4580]: I0112 13:21:15.635934 4580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="927ea15028c0a9d98b8223d4bdbea01e69982932e025323b7d3898f4d4893de5" Jan 12 13:21:15 crc kubenswrapper[4580]: I0112 13:21:15.638235 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c9e5-account-create-update-vj7p5" event={"ID":"23c7d555-fb69-4ebf-a44c-4132a0b4f3ee","Type":"ContainerDied","Data":"448e683cff70db2bc46d1b82b6148e0c73b139b372b2c5e37828db2fb23a60c5"} Jan 12 13:21:15 crc kubenswrapper[4580]: I0112 13:21:15.638269 4580 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="448e683cff70db2bc46d1b82b6148e0c73b139b372b2c5e37828db2fb23a60c5" Jan 12 13:21:15 crc kubenswrapper[4580]: I0112 13:21:15.638272 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-c9e5-account-create-update-vj7p5" Jan 12 13:21:16 crc kubenswrapper[4580]: I0112 13:21:16.268647 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-74dfc89d77-2f659" Jan 12 13:21:16 crc kubenswrapper[4580]: I0112 13:21:16.313880 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-ngwp9"] Jan 12 13:21:16 crc kubenswrapper[4580]: I0112 13:21:16.314082 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-67fdf7998c-ngwp9" podUID="c870daa1-9b22-48e8-bb1b-a9f314328301" containerName="dnsmasq-dns" containerID="cri-o://64e5422b6f887a38f9736f4c2c021b8b51cdc3582b0f4a7119fddfb4207f2419" gracePeriod=10 Jan 12 13:21:16 crc kubenswrapper[4580]: I0112 13:21:16.646392 4580 generic.go:334] "Generic (PLEG): container finished" podID="c870daa1-9b22-48e8-bb1b-a9f314328301" containerID="64e5422b6f887a38f9736f4c2c021b8b51cdc3582b0f4a7119fddfb4207f2419" exitCode=0 Jan 12 13:21:16 crc kubenswrapper[4580]: I0112 13:21:16.646440 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fdf7998c-ngwp9" event={"ID":"c870daa1-9b22-48e8-bb1b-a9f314328301","Type":"ContainerDied","Data":"64e5422b6f887a38f9736f4c2c021b8b51cdc3582b0f4a7119fddfb4207f2419"} Jan 12 13:21:17 crc kubenswrapper[4580]: I0112 13:21:17.798207 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67fdf7998c-ngwp9" Jan 12 13:21:17 crc kubenswrapper[4580]: I0112 13:21:17.826454 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c870daa1-9b22-48e8-bb1b-a9f314328301-dns-svc\") pod \"c870daa1-9b22-48e8-bb1b-a9f314328301\" (UID: \"c870daa1-9b22-48e8-bb1b-a9f314328301\") " Jan 12 13:21:17 crc kubenswrapper[4580]: I0112 13:21:17.826531 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c870daa1-9b22-48e8-bb1b-a9f314328301-ovsdbserver-nb\") pod \"c870daa1-9b22-48e8-bb1b-a9f314328301\" (UID: \"c870daa1-9b22-48e8-bb1b-a9f314328301\") " Jan 12 13:21:17 crc kubenswrapper[4580]: I0112 13:21:17.826551 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c870daa1-9b22-48e8-bb1b-a9f314328301-config\") pod \"c870daa1-9b22-48e8-bb1b-a9f314328301\" (UID: \"c870daa1-9b22-48e8-bb1b-a9f314328301\") " Jan 12 13:21:17 crc kubenswrapper[4580]: I0112 13:21:17.826692 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pn9r5\" (UniqueName: \"kubernetes.io/projected/c870daa1-9b22-48e8-bb1b-a9f314328301-kube-api-access-pn9r5\") pod \"c870daa1-9b22-48e8-bb1b-a9f314328301\" (UID: \"c870daa1-9b22-48e8-bb1b-a9f314328301\") " Jan 12 13:21:17 crc kubenswrapper[4580]: I0112 13:21:17.826783 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c870daa1-9b22-48e8-bb1b-a9f314328301-ovsdbserver-sb\") pod \"c870daa1-9b22-48e8-bb1b-a9f314328301\" (UID: \"c870daa1-9b22-48e8-bb1b-a9f314328301\") " Jan 12 13:21:17 crc kubenswrapper[4580]: I0112 13:21:17.833927 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/c870daa1-9b22-48e8-bb1b-a9f314328301-kube-api-access-pn9r5" (OuterVolumeSpecName: "kube-api-access-pn9r5") pod "c870daa1-9b22-48e8-bb1b-a9f314328301" (UID: "c870daa1-9b22-48e8-bb1b-a9f314328301"). InnerVolumeSpecName "kube-api-access-pn9r5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:21:17 crc kubenswrapper[4580]: I0112 13:21:17.857527 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c870daa1-9b22-48e8-bb1b-a9f314328301-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c870daa1-9b22-48e8-bb1b-a9f314328301" (UID: "c870daa1-9b22-48e8-bb1b-a9f314328301"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:21:17 crc kubenswrapper[4580]: I0112 13:21:17.861398 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c870daa1-9b22-48e8-bb1b-a9f314328301-config" (OuterVolumeSpecName: "config") pod "c870daa1-9b22-48e8-bb1b-a9f314328301" (UID: "c870daa1-9b22-48e8-bb1b-a9f314328301"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:21:17 crc kubenswrapper[4580]: I0112 13:21:17.863404 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c870daa1-9b22-48e8-bb1b-a9f314328301-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c870daa1-9b22-48e8-bb1b-a9f314328301" (UID: "c870daa1-9b22-48e8-bb1b-a9f314328301"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:21:17 crc kubenswrapper[4580]: I0112 13:21:17.866177 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c870daa1-9b22-48e8-bb1b-a9f314328301-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c870daa1-9b22-48e8-bb1b-a9f314328301" (UID: "c870daa1-9b22-48e8-bb1b-a9f314328301"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:21:17 crc kubenswrapper[4580]: I0112 13:21:17.929044 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pn9r5\" (UniqueName: \"kubernetes.io/projected/c870daa1-9b22-48e8-bb1b-a9f314328301-kube-api-access-pn9r5\") on node \"crc\" DevicePath \"\"" Jan 12 13:21:17 crc kubenswrapper[4580]: I0112 13:21:17.929073 4580 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c870daa1-9b22-48e8-bb1b-a9f314328301-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 12 13:21:17 crc kubenswrapper[4580]: I0112 13:21:17.929084 4580 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c870daa1-9b22-48e8-bb1b-a9f314328301-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 12 13:21:17 crc kubenswrapper[4580]: I0112 13:21:17.929092 4580 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c870daa1-9b22-48e8-bb1b-a9f314328301-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 12 13:21:17 crc kubenswrapper[4580]: I0112 13:21:17.929099 4580 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c870daa1-9b22-48e8-bb1b-a9f314328301-config\") on node \"crc\" DevicePath \"\"" Jan 12 13:21:18 crc kubenswrapper[4580]: I0112 13:21:18.661942 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-x4v84" event={"ID":"0609dc3a-35c3-4be4-8625-aad80295f0ea","Type":"ContainerStarted","Data":"21e58b652a63beb9a4dd53f0cfc7d09efecf216a7ca5e5f3ac4c4d55b42999e2"} Jan 12 13:21:18 crc kubenswrapper[4580]: I0112 13:21:18.663819 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fdf7998c-ngwp9" 
event={"ID":"c870daa1-9b22-48e8-bb1b-a9f314328301","Type":"ContainerDied","Data":"476d5fffb8ac13c6e87d82fcbd179a0cd175384b8e6bf1640ff1c1673f597b2f"} Jan 12 13:21:18 crc kubenswrapper[4580]: I0112 13:21:18.663865 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67fdf7998c-ngwp9" Jan 12 13:21:18 crc kubenswrapper[4580]: I0112 13:21:18.663874 4580 scope.go:117] "RemoveContainer" containerID="64e5422b6f887a38f9736f4c2c021b8b51cdc3582b0f4a7119fddfb4207f2419" Jan 12 13:21:18 crc kubenswrapper[4580]: I0112 13:21:18.681820 4580 scope.go:117] "RemoveContainer" containerID="ecd68b56e6f94c7312824d59dba7f2bbb7e04ae449ddca4fa70d580369a57cd2" Jan 12 13:21:18 crc kubenswrapper[4580]: I0112 13:21:18.690301 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-x4v84" podStartSLOduration=2.232970972 podStartE2EDuration="6.690284765s" podCreationTimestamp="2026-01-12 13:21:12 +0000 UTC" firstStartedPulling="2026-01-12 13:21:13.13254128 +0000 UTC m=+872.176759970" lastFinishedPulling="2026-01-12 13:21:17.589855074 +0000 UTC m=+876.634073763" observedRunningTime="2026-01-12 13:21:18.67918036 +0000 UTC m=+877.723399051" watchObservedRunningTime="2026-01-12 13:21:18.690284765 +0000 UTC m=+877.734503455" Jan 12 13:21:18 crc kubenswrapper[4580]: I0112 13:21:18.693421 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-ngwp9"] Jan 12 13:21:18 crc kubenswrapper[4580]: I0112 13:21:18.697942 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-ngwp9"] Jan 12 13:21:19 crc kubenswrapper[4580]: I0112 13:21:19.289799 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c870daa1-9b22-48e8-bb1b-a9f314328301" path="/var/lib/kubelet/pods/c870daa1-9b22-48e8-bb1b-a9f314328301/volumes" Jan 12 13:21:19 crc kubenswrapper[4580]: I0112 13:21:19.671504 4580 generic.go:334] "Generic (PLEG): container 
finished" podID="0609dc3a-35c3-4be4-8625-aad80295f0ea" containerID="21e58b652a63beb9a4dd53f0cfc7d09efecf216a7ca5e5f3ac4c4d55b42999e2" exitCode=0 Jan 12 13:21:19 crc kubenswrapper[4580]: I0112 13:21:19.671540 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-x4v84" event={"ID":"0609dc3a-35c3-4be4-8625-aad80295f0ea","Type":"ContainerDied","Data":"21e58b652a63beb9a4dd53f0cfc7d09efecf216a7ca5e5f3ac4c4d55b42999e2"} Jan 12 13:21:20 crc kubenswrapper[4580]: I0112 13:21:20.931517 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-x4v84" Jan 12 13:21:20 crc kubenswrapper[4580]: I0112 13:21:20.992690 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zq5z8\" (UniqueName: \"kubernetes.io/projected/0609dc3a-35c3-4be4-8625-aad80295f0ea-kube-api-access-zq5z8\") pod \"0609dc3a-35c3-4be4-8625-aad80295f0ea\" (UID: \"0609dc3a-35c3-4be4-8625-aad80295f0ea\") " Jan 12 13:21:20 crc kubenswrapper[4580]: I0112 13:21:20.992818 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0609dc3a-35c3-4be4-8625-aad80295f0ea-config-data\") pod \"0609dc3a-35c3-4be4-8625-aad80295f0ea\" (UID: \"0609dc3a-35c3-4be4-8625-aad80295f0ea\") " Jan 12 13:21:20 crc kubenswrapper[4580]: I0112 13:21:20.992890 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0609dc3a-35c3-4be4-8625-aad80295f0ea-combined-ca-bundle\") pod \"0609dc3a-35c3-4be4-8625-aad80295f0ea\" (UID: \"0609dc3a-35c3-4be4-8625-aad80295f0ea\") " Jan 12 13:21:20 crc kubenswrapper[4580]: I0112 13:21:20.998232 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0609dc3a-35c3-4be4-8625-aad80295f0ea-kube-api-access-zq5z8" (OuterVolumeSpecName: "kube-api-access-zq5z8") pod 
"0609dc3a-35c3-4be4-8625-aad80295f0ea" (UID: "0609dc3a-35c3-4be4-8625-aad80295f0ea"). InnerVolumeSpecName "kube-api-access-zq5z8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:21:21 crc kubenswrapper[4580]: I0112 13:21:21.012494 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0609dc3a-35c3-4be4-8625-aad80295f0ea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0609dc3a-35c3-4be4-8625-aad80295f0ea" (UID: "0609dc3a-35c3-4be4-8625-aad80295f0ea"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:21:21 crc kubenswrapper[4580]: I0112 13:21:21.023236 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0609dc3a-35c3-4be4-8625-aad80295f0ea-config-data" (OuterVolumeSpecName: "config-data") pod "0609dc3a-35c3-4be4-8625-aad80295f0ea" (UID: "0609dc3a-35c3-4be4-8625-aad80295f0ea"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:21:21 crc kubenswrapper[4580]: I0112 13:21:21.094725 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zq5z8\" (UniqueName: \"kubernetes.io/projected/0609dc3a-35c3-4be4-8625-aad80295f0ea-kube-api-access-zq5z8\") on node \"crc\" DevicePath \"\"" Jan 12 13:21:21 crc kubenswrapper[4580]: I0112 13:21:21.094750 4580 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0609dc3a-35c3-4be4-8625-aad80295f0ea-config-data\") on node \"crc\" DevicePath \"\"" Jan 12 13:21:21 crc kubenswrapper[4580]: I0112 13:21:21.094759 4580 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0609dc3a-35c3-4be4-8625-aad80295f0ea-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 12 13:21:21 crc kubenswrapper[4580]: I0112 13:21:21.684549 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-x4v84" event={"ID":"0609dc3a-35c3-4be4-8625-aad80295f0ea","Type":"ContainerDied","Data":"56fc84687ed6f83fe4fff0cebcd8b0cf50db99c770b28222a76a8d4217fbe00c"} Jan 12 13:21:21 crc kubenswrapper[4580]: I0112 13:21:21.684787 4580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56fc84687ed6f83fe4fff0cebcd8b0cf50db99c770b28222a76a8d4217fbe00c" Jan 12 13:21:21 crc kubenswrapper[4580]: I0112 13:21:21.684635 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-x4v84" Jan 12 13:21:21 crc kubenswrapper[4580]: I0112 13:21:21.699584 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6k6zv" Jan 12 13:21:21 crc kubenswrapper[4580]: I0112 13:21:21.699675 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6k6zv" Jan 12 13:21:21 crc kubenswrapper[4580]: I0112 13:21:21.735241 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6k6zv" Jan 12 13:21:21 crc kubenswrapper[4580]: I0112 13:21:21.895848 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5fdbfbc95f-q6qcr"] Jan 12 13:21:21 crc kubenswrapper[4580]: E0112 13:21:21.899254 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c601173-b4e4-482a-a6e7-c5d7ff359a05" containerName="mariadb-database-create" Jan 12 13:21:21 crc kubenswrapper[4580]: I0112 13:21:21.899280 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c601173-b4e4-482a-a6e7-c5d7ff359a05" containerName="mariadb-database-create" Jan 12 13:21:21 crc kubenswrapper[4580]: E0112 13:21:21.899292 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ac9c176-3792-4485-b000-a8cfc4c53f21" containerName="mariadb-account-create-update" Jan 12 13:21:21 crc kubenswrapper[4580]: I0112 13:21:21.899298 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ac9c176-3792-4485-b000-a8cfc4c53f21" containerName="mariadb-account-create-update" Jan 12 13:21:21 crc kubenswrapper[4580]: E0112 13:21:21.899309 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c870daa1-9b22-48e8-bb1b-a9f314328301" containerName="init" Jan 12 13:21:21 crc kubenswrapper[4580]: I0112 13:21:21.899315 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="c870daa1-9b22-48e8-bb1b-a9f314328301" 
containerName="init" Jan 12 13:21:21 crc kubenswrapper[4580]: E0112 13:21:21.899329 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0609dc3a-35c3-4be4-8625-aad80295f0ea" containerName="keystone-db-sync" Jan 12 13:21:21 crc kubenswrapper[4580]: I0112 13:21:21.899334 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="0609dc3a-35c3-4be4-8625-aad80295f0ea" containerName="keystone-db-sync" Jan 12 13:21:21 crc kubenswrapper[4580]: E0112 13:21:21.899354 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7aece380-6756-4ba8-8628-1cded9cd4005" containerName="mariadb-account-create-update" Jan 12 13:21:21 crc kubenswrapper[4580]: I0112 13:21:21.899360 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="7aece380-6756-4ba8-8628-1cded9cd4005" containerName="mariadb-account-create-update" Jan 12 13:21:21 crc kubenswrapper[4580]: E0112 13:21:21.899368 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d42ba5a-3d41-4d57-9c38-c9a115ee139d" containerName="mariadb-database-create" Jan 12 13:21:21 crc kubenswrapper[4580]: I0112 13:21:21.899375 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d42ba5a-3d41-4d57-9c38-c9a115ee139d" containerName="mariadb-database-create" Jan 12 13:21:21 crc kubenswrapper[4580]: E0112 13:21:21.899391 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23c7d555-fb69-4ebf-a44c-4132a0b4f3ee" containerName="mariadb-account-create-update" Jan 12 13:21:21 crc kubenswrapper[4580]: I0112 13:21:21.899396 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="23c7d555-fb69-4ebf-a44c-4132a0b4f3ee" containerName="mariadb-account-create-update" Jan 12 13:21:21 crc kubenswrapper[4580]: E0112 13:21:21.899408 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c870daa1-9b22-48e8-bb1b-a9f314328301" containerName="dnsmasq-dns" Jan 12 13:21:21 crc kubenswrapper[4580]: I0112 13:21:21.899414 4580 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c870daa1-9b22-48e8-bb1b-a9f314328301" containerName="dnsmasq-dns" Jan 12 13:21:21 crc kubenswrapper[4580]: E0112 13:21:21.899421 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="899c8c6d-d46b-4694-97c7-35a2f3e9ff45" containerName="mariadb-database-create" Jan 12 13:21:21 crc kubenswrapper[4580]: I0112 13:21:21.899426 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="899c8c6d-d46b-4694-97c7-35a2f3e9ff45" containerName="mariadb-database-create" Jan 12 13:21:21 crc kubenswrapper[4580]: I0112 13:21:21.899591 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ac9c176-3792-4485-b000-a8cfc4c53f21" containerName="mariadb-account-create-update" Jan 12 13:21:21 crc kubenswrapper[4580]: I0112 13:21:21.899601 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="23c7d555-fb69-4ebf-a44c-4132a0b4f3ee" containerName="mariadb-account-create-update" Jan 12 13:21:21 crc kubenswrapper[4580]: I0112 13:21:21.899607 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="899c8c6d-d46b-4694-97c7-35a2f3e9ff45" containerName="mariadb-database-create" Jan 12 13:21:21 crc kubenswrapper[4580]: I0112 13:21:21.899617 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d42ba5a-3d41-4d57-9c38-c9a115ee139d" containerName="mariadb-database-create" Jan 12 13:21:21 crc kubenswrapper[4580]: I0112 13:21:21.899626 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c601173-b4e4-482a-a6e7-c5d7ff359a05" containerName="mariadb-database-create" Jan 12 13:21:21 crc kubenswrapper[4580]: I0112 13:21:21.899634 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="c870daa1-9b22-48e8-bb1b-a9f314328301" containerName="dnsmasq-dns" Jan 12 13:21:21 crc kubenswrapper[4580]: I0112 13:21:21.899652 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="0609dc3a-35c3-4be4-8625-aad80295f0ea" containerName="keystone-db-sync" Jan 12 13:21:21 crc kubenswrapper[4580]: 
I0112 13:21:21.899659 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="7aece380-6756-4ba8-8628-1cded9cd4005" containerName="mariadb-account-create-update" Jan 12 13:21:21 crc kubenswrapper[4580]: I0112 13:21:21.900405 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fdbfbc95f-q6qcr" Jan 12 13:21:21 crc kubenswrapper[4580]: I0112 13:21:21.907929 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fdbfbc95f-q6qcr"] Jan 12 13:21:21 crc kubenswrapper[4580]: I0112 13:21:21.945454 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-hcdf4"] Jan 12 13:21:21 crc kubenswrapper[4580]: I0112 13:21:21.949405 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hcdf4" Jan 12 13:21:21 crc kubenswrapper[4580]: I0112 13:21:21.951396 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 12 13:21:21 crc kubenswrapper[4580]: I0112 13:21:21.951426 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 12 13:21:21 crc kubenswrapper[4580]: I0112 13:21:21.951642 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-64jgc" Jan 12 13:21:21 crc kubenswrapper[4580]: I0112 13:21:21.951757 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 12 13:21:21 crc kubenswrapper[4580]: I0112 13:21:21.954953 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 12 13:21:21 crc kubenswrapper[4580]: I0112 13:21:21.955924 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-hcdf4"] Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.009630 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f480553f-1da5-4142-bb99-429c8cefc6de-dns-swift-storage-0\") pod \"dnsmasq-dns-5fdbfbc95f-q6qcr\" (UID: \"f480553f-1da5-4142-bb99-429c8cefc6de\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-q6qcr" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.009675 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f480553f-1da5-4142-bb99-429c8cefc6de-ovsdbserver-nb\") pod \"dnsmasq-dns-5fdbfbc95f-q6qcr\" (UID: \"f480553f-1da5-4142-bb99-429c8cefc6de\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-q6qcr" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.009716 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bqsx\" (UniqueName: \"kubernetes.io/projected/a838d8bb-d607-4309-a666-8da1387631fc-kube-api-access-9bqsx\") pod \"keystone-bootstrap-hcdf4\" (UID: \"a838d8bb-d607-4309-a666-8da1387631fc\") " pod="openstack/keystone-bootstrap-hcdf4" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.009739 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f480553f-1da5-4142-bb99-429c8cefc6de-ovsdbserver-sb\") pod \"dnsmasq-dns-5fdbfbc95f-q6qcr\" (UID: \"f480553f-1da5-4142-bb99-429c8cefc6de\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-q6qcr" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.009760 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a838d8bb-d607-4309-a666-8da1387631fc-fernet-keys\") pod \"keystone-bootstrap-hcdf4\" (UID: \"a838d8bb-d607-4309-a666-8da1387631fc\") " pod="openstack/keystone-bootstrap-hcdf4" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.009944 4580 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a838d8bb-d607-4309-a666-8da1387631fc-credential-keys\") pod \"keystone-bootstrap-hcdf4\" (UID: \"a838d8bb-d607-4309-a666-8da1387631fc\") " pod="openstack/keystone-bootstrap-hcdf4" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.010004 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a838d8bb-d607-4309-a666-8da1387631fc-combined-ca-bundle\") pod \"keystone-bootstrap-hcdf4\" (UID: \"a838d8bb-d607-4309-a666-8da1387631fc\") " pod="openstack/keystone-bootstrap-hcdf4" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.010160 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a838d8bb-d607-4309-a666-8da1387631fc-scripts\") pod \"keystone-bootstrap-hcdf4\" (UID: \"a838d8bb-d607-4309-a666-8da1387631fc\") " pod="openstack/keystone-bootstrap-hcdf4" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.010241 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a838d8bb-d607-4309-a666-8da1387631fc-config-data\") pod \"keystone-bootstrap-hcdf4\" (UID: \"a838d8bb-d607-4309-a666-8da1387631fc\") " pod="openstack/keystone-bootstrap-hcdf4" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.010266 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f480553f-1da5-4142-bb99-429c8cefc6de-config\") pod \"dnsmasq-dns-5fdbfbc95f-q6qcr\" (UID: \"f480553f-1da5-4142-bb99-429c8cefc6de\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-q6qcr" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.010339 4580 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxsdk\" (UniqueName: \"kubernetes.io/projected/f480553f-1da5-4142-bb99-429c8cefc6de-kube-api-access-fxsdk\") pod \"dnsmasq-dns-5fdbfbc95f-q6qcr\" (UID: \"f480553f-1da5-4142-bb99-429c8cefc6de\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-q6qcr" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.010396 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f480553f-1da5-4142-bb99-429c8cefc6de-dns-svc\") pod \"dnsmasq-dns-5fdbfbc95f-q6qcr\" (UID: \"f480553f-1da5-4142-bb99-429c8cefc6de\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-q6qcr" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.067142 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.068836 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.071582 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.071626 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.073972 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-bb57d5f45-mb7xb"] Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.075324 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-bb57d5f45-mb7xb"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.079068 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.079166 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.080570 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-pd9k8"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.084380 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.100374 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.112091 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a838d8bb-d607-4309-a666-8da1387631fc-scripts\") pod \"keystone-bootstrap-hcdf4\" (UID: \"a838d8bb-d607-4309-a666-8da1387631fc\") " pod="openstack/keystone-bootstrap-hcdf4"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.112152 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/84c1ab4b-8921-4f4a-88dd-adf6e224d62c-run-httpd\") pod \"ceilometer-0\" (UID: \"84c1ab4b-8921-4f4a-88dd-adf6e224d62c\") " pod="openstack/ceilometer-0"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.112159 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-bb57d5f45-mb7xb"]
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.112175 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a838d8bb-d607-4309-a666-8da1387631fc-config-data\") pod \"keystone-bootstrap-hcdf4\" (UID: \"a838d8bb-d607-4309-a666-8da1387631fc\") " pod="openstack/keystone-bootstrap-hcdf4"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.112254 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f480553f-1da5-4142-bb99-429c8cefc6de-config\") pod \"dnsmasq-dns-5fdbfbc95f-q6qcr\" (UID: \"f480553f-1da5-4142-bb99-429c8cefc6de\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-q6qcr"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.112283 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/84c1ab4b-8921-4f4a-88dd-adf6e224d62c-log-httpd\") pod \"ceilometer-0\" (UID: \"84c1ab4b-8921-4f4a-88dd-adf6e224d62c\") " pod="openstack/ceilometer-0"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.112304 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84c1ab4b-8921-4f4a-88dd-adf6e224d62c-config-data\") pod \"ceilometer-0\" (UID: \"84c1ab4b-8921-4f4a-88dd-adf6e224d62c\") " pod="openstack/ceilometer-0"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.112383 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxsdk\" (UniqueName: \"kubernetes.io/projected/f480553f-1da5-4142-bb99-429c8cefc6de-kube-api-access-fxsdk\") pod \"dnsmasq-dns-5fdbfbc95f-q6qcr\" (UID: \"f480553f-1da5-4142-bb99-429c8cefc6de\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-q6qcr"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.112428 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84c1ab4b-8921-4f4a-88dd-adf6e224d62c-scripts\") pod \"ceilometer-0\" (UID: \"84c1ab4b-8921-4f4a-88dd-adf6e224d62c\") " pod="openstack/ceilometer-0"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.112461 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f480553f-1da5-4142-bb99-429c8cefc6de-dns-svc\") pod \"dnsmasq-dns-5fdbfbc95f-q6qcr\" (UID: \"f480553f-1da5-4142-bb99-429c8cefc6de\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-q6qcr"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.112476 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/84c1ab4b-8921-4f4a-88dd-adf6e224d62c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"84c1ab4b-8921-4f4a-88dd-adf6e224d62c\") " pod="openstack/ceilometer-0"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.112511 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f480553f-1da5-4142-bb99-429c8cefc6de-dns-swift-storage-0\") pod \"dnsmasq-dns-5fdbfbc95f-q6qcr\" (UID: \"f480553f-1da5-4142-bb99-429c8cefc6de\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-q6qcr"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.112541 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f480553f-1da5-4142-bb99-429c8cefc6de-ovsdbserver-nb\") pod \"dnsmasq-dns-5fdbfbc95f-q6qcr\" (UID: \"f480553f-1da5-4142-bb99-429c8cefc6de\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-q6qcr"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.112594 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bqsx\" (UniqueName: \"kubernetes.io/projected/a838d8bb-d607-4309-a666-8da1387631fc-kube-api-access-9bqsx\") pod \"keystone-bootstrap-hcdf4\" (UID: \"a838d8bb-d607-4309-a666-8da1387631fc\") " pod="openstack/keystone-bootstrap-hcdf4"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.112620 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f480553f-1da5-4142-bb99-429c8cefc6de-ovsdbserver-sb\") pod \"dnsmasq-dns-5fdbfbc95f-q6qcr\" (UID: \"f480553f-1da5-4142-bb99-429c8cefc6de\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-q6qcr"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.112647 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a838d8bb-d607-4309-a666-8da1387631fc-fernet-keys\") pod \"keystone-bootstrap-hcdf4\" (UID: \"a838d8bb-d607-4309-a666-8da1387631fc\") " pod="openstack/keystone-bootstrap-hcdf4"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.112694 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a838d8bb-d607-4309-a666-8da1387631fc-credential-keys\") pod \"keystone-bootstrap-hcdf4\" (UID: \"a838d8bb-d607-4309-a666-8da1387631fc\") " pod="openstack/keystone-bootstrap-hcdf4"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.112713 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a838d8bb-d607-4309-a666-8da1387631fc-combined-ca-bundle\") pod \"keystone-bootstrap-hcdf4\" (UID: \"a838d8bb-d607-4309-a666-8da1387631fc\") " pod="openstack/keystone-bootstrap-hcdf4"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.112729 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmcxn\" (UniqueName: \"kubernetes.io/projected/84c1ab4b-8921-4f4a-88dd-adf6e224d62c-kube-api-access-jmcxn\") pod \"ceilometer-0\" (UID: \"84c1ab4b-8921-4f4a-88dd-adf6e224d62c\") " pod="openstack/ceilometer-0"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.112766 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84c1ab4b-8921-4f4a-88dd-adf6e224d62c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"84c1ab4b-8921-4f4a-88dd-adf6e224d62c\") " pod="openstack/ceilometer-0"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.113684 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f480553f-1da5-4142-bb99-429c8cefc6de-config\") pod \"dnsmasq-dns-5fdbfbc95f-q6qcr\" (UID: \"f480553f-1da5-4142-bb99-429c8cefc6de\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-q6qcr"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.113965 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f480553f-1da5-4142-bb99-429c8cefc6de-dns-swift-storage-0\") pod \"dnsmasq-dns-5fdbfbc95f-q6qcr\" (UID: \"f480553f-1da5-4142-bb99-429c8cefc6de\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-q6qcr"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.117840 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f480553f-1da5-4142-bb99-429c8cefc6de-ovsdbserver-nb\") pod \"dnsmasq-dns-5fdbfbc95f-q6qcr\" (UID: \"f480553f-1da5-4142-bb99-429c8cefc6de\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-q6qcr"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.118173 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a838d8bb-d607-4309-a666-8da1387631fc-fernet-keys\") pod \"keystone-bootstrap-hcdf4\" (UID: \"a838d8bb-d607-4309-a666-8da1387631fc\") " pod="openstack/keystone-bootstrap-hcdf4"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.120620 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a838d8bb-d607-4309-a666-8da1387631fc-config-data\") pod \"keystone-bootstrap-hcdf4\" (UID: \"a838d8bb-d607-4309-a666-8da1387631fc\") " pod="openstack/keystone-bootstrap-hcdf4"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.122339 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a838d8bb-d607-4309-a666-8da1387631fc-scripts\") pod \"keystone-bootstrap-hcdf4\" (UID: \"a838d8bb-d607-4309-a666-8da1387631fc\") " pod="openstack/keystone-bootstrap-hcdf4"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.127232 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f480553f-1da5-4142-bb99-429c8cefc6de-dns-svc\") pod \"dnsmasq-dns-5fdbfbc95f-q6qcr\" (UID: \"f480553f-1da5-4142-bb99-429c8cefc6de\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-q6qcr"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.131926 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f480553f-1da5-4142-bb99-429c8cefc6de-ovsdbserver-sb\") pod \"dnsmasq-dns-5fdbfbc95f-q6qcr\" (UID: \"f480553f-1da5-4142-bb99-429c8cefc6de\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-q6qcr"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.150311 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a838d8bb-d607-4309-a666-8da1387631fc-combined-ca-bundle\") pod \"keystone-bootstrap-hcdf4\" (UID: \"a838d8bb-d607-4309-a666-8da1387631fc\") " pod="openstack/keystone-bootstrap-hcdf4"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.150639 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a838d8bb-d607-4309-a666-8da1387631fc-credential-keys\") pod \"keystone-bootstrap-hcdf4\" (UID: \"a838d8bb-d607-4309-a666-8da1387631fc\") " pod="openstack/keystone-bootstrap-hcdf4"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.153572 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bqsx\" (UniqueName: \"kubernetes.io/projected/a838d8bb-d607-4309-a666-8da1387631fc-kube-api-access-9bqsx\") pod \"keystone-bootstrap-hcdf4\" (UID: \"a838d8bb-d607-4309-a666-8da1387631fc\") " pod="openstack/keystone-bootstrap-hcdf4"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.158484 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxsdk\" (UniqueName: \"kubernetes.io/projected/f480553f-1da5-4142-bb99-429c8cefc6de-kube-api-access-fxsdk\") pod \"dnsmasq-dns-5fdbfbc95f-q6qcr\" (UID: \"f480553f-1da5-4142-bb99-429c8cefc6de\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-q6qcr"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.214810 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6868cd5fd5-ct7dn"]
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.214999 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fdbfbc95f-q6qcr"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.215908 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/82d9d66d-ff92-4164-96a9-c82a919cce00-horizon-secret-key\") pod \"horizon-bb57d5f45-mb7xb\" (UID: \"82d9d66d-ff92-4164-96a9-c82a919cce00\") " pod="openstack/horizon-bb57d5f45-mb7xb"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.215973 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmcxn\" (UniqueName: \"kubernetes.io/projected/84c1ab4b-8921-4f4a-88dd-adf6e224d62c-kube-api-access-jmcxn\") pod \"ceilometer-0\" (UID: \"84c1ab4b-8921-4f4a-88dd-adf6e224d62c\") " pod="openstack/ceilometer-0"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.215998 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/82d9d66d-ff92-4164-96a9-c82a919cce00-config-data\") pod \"horizon-bb57d5f45-mb7xb\" (UID: \"82d9d66d-ff92-4164-96a9-c82a919cce00\") " pod="openstack/horizon-bb57d5f45-mb7xb"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.216024 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84c1ab4b-8921-4f4a-88dd-adf6e224d62c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"84c1ab4b-8921-4f4a-88dd-adf6e224d62c\") " pod="openstack/ceilometer-0"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.216064 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82d9d66d-ff92-4164-96a9-c82a919cce00-logs\") pod \"horizon-bb57d5f45-mb7xb\" (UID: \"82d9d66d-ff92-4164-96a9-c82a919cce00\") " pod="openstack/horizon-bb57d5f45-mb7xb"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.216088 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/84c1ab4b-8921-4f4a-88dd-adf6e224d62c-run-httpd\") pod \"ceilometer-0\" (UID: \"84c1ab4b-8921-4f4a-88dd-adf6e224d62c\") " pod="openstack/ceilometer-0"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.216156 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/84c1ab4b-8921-4f4a-88dd-adf6e224d62c-log-httpd\") pod \"ceilometer-0\" (UID: \"84c1ab4b-8921-4f4a-88dd-adf6e224d62c\") " pod="openstack/ceilometer-0"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.216186 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84c1ab4b-8921-4f4a-88dd-adf6e224d62c-config-data\") pod \"ceilometer-0\" (UID: \"84c1ab4b-8921-4f4a-88dd-adf6e224d62c\") " pod="openstack/ceilometer-0"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.216252 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84c1ab4b-8921-4f4a-88dd-adf6e224d62c-scripts\") pod \"ceilometer-0\" (UID: \"84c1ab4b-8921-4f4a-88dd-adf6e224d62c\") " pod="openstack/ceilometer-0"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.216274 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/82d9d66d-ff92-4164-96a9-c82a919cce00-scripts\") pod \"horizon-bb57d5f45-mb7xb\" (UID: \"82d9d66d-ff92-4164-96a9-c82a919cce00\") " pod="openstack/horizon-bb57d5f45-mb7xb"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.216295 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/84c1ab4b-8921-4f4a-88dd-adf6e224d62c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"84c1ab4b-8921-4f4a-88dd-adf6e224d62c\") " pod="openstack/ceilometer-0"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.216315 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrxld\" (UniqueName: \"kubernetes.io/projected/82d9d66d-ff92-4164-96a9-c82a919cce00-kube-api-access-qrxld\") pod \"horizon-bb57d5f45-mb7xb\" (UID: \"82d9d66d-ff92-4164-96a9-c82a919cce00\") " pod="openstack/horizon-bb57d5f45-mb7xb"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.221315 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/84c1ab4b-8921-4f4a-88dd-adf6e224d62c-run-httpd\") pod \"ceilometer-0\" (UID: \"84c1ab4b-8921-4f4a-88dd-adf6e224d62c\") " pod="openstack/ceilometer-0"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.226206 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/84c1ab4b-8921-4f4a-88dd-adf6e224d62c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"84c1ab4b-8921-4f4a-88dd-adf6e224d62c\") " pod="openstack/ceilometer-0"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.227653 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/84c1ab4b-8921-4f4a-88dd-adf6e224d62c-log-httpd\") pod \"ceilometer-0\" (UID: \"84c1ab4b-8921-4f4a-88dd-adf6e224d62c\") " pod="openstack/ceilometer-0"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.229542 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84c1ab4b-8921-4f4a-88dd-adf6e224d62c-scripts\") pod \"ceilometer-0\" (UID: \"84c1ab4b-8921-4f4a-88dd-adf6e224d62c\") " pod="openstack/ceilometer-0"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.229562 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84c1ab4b-8921-4f4a-88dd-adf6e224d62c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"84c1ab4b-8921-4f4a-88dd-adf6e224d62c\") " pod="openstack/ceilometer-0"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.243655 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6868cd5fd5-ct7dn"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.247942 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmcxn\" (UniqueName: \"kubernetes.io/projected/84c1ab4b-8921-4f4a-88dd-adf6e224d62c-kube-api-access-jmcxn\") pod \"ceilometer-0\" (UID: \"84c1ab4b-8921-4f4a-88dd-adf6e224d62c\") " pod="openstack/ceilometer-0"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.258172 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6868cd5fd5-ct7dn"]
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.269046 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hcdf4"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.273383 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-mmlfs"]
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.274899 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-mmlfs"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.300699 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.300822 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-tktfx"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.300819 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.303929 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84c1ab4b-8921-4f4a-88dd-adf6e224d62c-config-data\") pod \"ceilometer-0\" (UID: \"84c1ab4b-8921-4f4a-88dd-adf6e224d62c\") " pod="openstack/ceilometer-0"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.319064 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/82d9d66d-ff92-4164-96a9-c82a919cce00-scripts\") pod \"horizon-bb57d5f45-mb7xb\" (UID: \"82d9d66d-ff92-4164-96a9-c82a919cce00\") " pod="openstack/horizon-bb57d5f45-mb7xb"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.319118 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrxld\" (UniqueName: \"kubernetes.io/projected/82d9d66d-ff92-4164-96a9-c82a919cce00-kube-api-access-qrxld\") pod \"horizon-bb57d5f45-mb7xb\" (UID: \"82d9d66d-ff92-4164-96a9-c82a919cce00\") " pod="openstack/horizon-bb57d5f45-mb7xb"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.319162 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/db7a06f9-1a77-4a21-ac05-0c73655fa8d0-horizon-secret-key\") pod \"horizon-6868cd5fd5-ct7dn\" (UID: \"db7a06f9-1a77-4a21-ac05-0c73655fa8d0\") " pod="openstack/horizon-6868cd5fd5-ct7dn"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.319184 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/702612c1-966a-4293-b0dc-05901a325794-db-sync-config-data\") pod \"cinder-db-sync-mmlfs\" (UID: \"702612c1-966a-4293-b0dc-05901a325794\") " pod="openstack/cinder-db-sync-mmlfs"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.319200 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/db7a06f9-1a77-4a21-ac05-0c73655fa8d0-scripts\") pod \"horizon-6868cd5fd5-ct7dn\" (UID: \"db7a06f9-1a77-4a21-ac05-0c73655fa8d0\") " pod="openstack/horizon-6868cd5fd5-ct7dn"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.319230 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/82d9d66d-ff92-4164-96a9-c82a919cce00-horizon-secret-key\") pod \"horizon-bb57d5f45-mb7xb\" (UID: \"82d9d66d-ff92-4164-96a9-c82a919cce00\") " pod="openstack/horizon-bb57d5f45-mb7xb"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.319248 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/702612c1-966a-4293-b0dc-05901a325794-etc-machine-id\") pod \"cinder-db-sync-mmlfs\" (UID: \"702612c1-966a-4293-b0dc-05901a325794\") " pod="openstack/cinder-db-sync-mmlfs"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.319262 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/db7a06f9-1a77-4a21-ac05-0c73655fa8d0-config-data\") pod \"horizon-6868cd5fd5-ct7dn\" (UID: \"db7a06f9-1a77-4a21-ac05-0c73655fa8d0\") " pod="openstack/horizon-6868cd5fd5-ct7dn"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.319281 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zx8j\" (UniqueName: \"kubernetes.io/projected/702612c1-966a-4293-b0dc-05901a325794-kube-api-access-8zx8j\") pod \"cinder-db-sync-mmlfs\" (UID: \"702612c1-966a-4293-b0dc-05901a325794\") " pod="openstack/cinder-db-sync-mmlfs"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.331750 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/702612c1-966a-4293-b0dc-05901a325794-combined-ca-bundle\") pod \"cinder-db-sync-mmlfs\" (UID: \"702612c1-966a-4293-b0dc-05901a325794\") " pod="openstack/cinder-db-sync-mmlfs"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.331837 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/82d9d66d-ff92-4164-96a9-c82a919cce00-config-data\") pod \"horizon-bb57d5f45-mb7xb\" (UID: \"82d9d66d-ff92-4164-96a9-c82a919cce00\") " pod="openstack/horizon-bb57d5f45-mb7xb"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.331872 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db7a06f9-1a77-4a21-ac05-0c73655fa8d0-logs\") pod \"horizon-6868cd5fd5-ct7dn\" (UID: \"db7a06f9-1a77-4a21-ac05-0c73655fa8d0\") " pod="openstack/horizon-6868cd5fd5-ct7dn"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.331897 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6bcr\" (UniqueName: \"kubernetes.io/projected/db7a06f9-1a77-4a21-ac05-0c73655fa8d0-kube-api-access-n6bcr\") pod \"horizon-6868cd5fd5-ct7dn\" (UID: \"db7a06f9-1a77-4a21-ac05-0c73655fa8d0\") " pod="openstack/horizon-6868cd5fd5-ct7dn"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.331937 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/702612c1-966a-4293-b0dc-05901a325794-scripts\") pod \"cinder-db-sync-mmlfs\" (UID: \"702612c1-966a-4293-b0dc-05901a325794\") " pod="openstack/cinder-db-sync-mmlfs"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.331960 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/702612c1-966a-4293-b0dc-05901a325794-config-data\") pod \"cinder-db-sync-mmlfs\" (UID: \"702612c1-966a-4293-b0dc-05901a325794\") " pod="openstack/cinder-db-sync-mmlfs"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.331980 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82d9d66d-ff92-4164-96a9-c82a919cce00-logs\") pod \"horizon-bb57d5f45-mb7xb\" (UID: \"82d9d66d-ff92-4164-96a9-c82a919cce00\") " pod="openstack/horizon-bb57d5f45-mb7xb"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.332412 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82d9d66d-ff92-4164-96a9-c82a919cce00-logs\") pod \"horizon-bb57d5f45-mb7xb\" (UID: \"82d9d66d-ff92-4164-96a9-c82a919cce00\") " pod="openstack/horizon-bb57d5f45-mb7xb"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.324579 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/82d9d66d-ff92-4164-96a9-c82a919cce00-scripts\") pod \"horizon-bb57d5f45-mb7xb\" (UID: \"82d9d66d-ff92-4164-96a9-c82a919cce00\") " pod="openstack/horizon-bb57d5f45-mb7xb"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.333379 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/82d9d66d-ff92-4164-96a9-c82a919cce00-config-data\") pod \"horizon-bb57d5f45-mb7xb\" (UID: \"82d9d66d-ff92-4164-96a9-c82a919cce00\") " pod="openstack/horizon-bb57d5f45-mb7xb"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.338538 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-blrls"]
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.339657 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-blrls"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.343181 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.343549 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-nnq6f"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.345190 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-gdxqz"]
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.348610 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-gdxqz"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.350443 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.351277 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-4tjb6"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.351494 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.351786 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.354082 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.354391 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.354520 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.354631 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.354737 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-6rzjz"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.357656 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-gdxqz"]
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.358264 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrxld\" (UniqueName: \"kubernetes.io/projected/82d9d66d-ff92-4164-96a9-c82a919cce00-kube-api-access-qrxld\") pod \"horizon-bb57d5f45-mb7xb\" (UID: \"82d9d66d-ff92-4164-96a9-c82a919cce00\") " pod="openstack/horizon-bb57d5f45-mb7xb"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.358320 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/82d9d66d-ff92-4164-96a9-c82a919cce00-horizon-secret-key\") pod \"horizon-bb57d5f45-mb7xb\" (UID: \"82d9d66d-ff92-4164-96a9-c82a919cce00\") " pod="openstack/horizon-bb57d5f45-mb7xb"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.364851 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-blrls"]
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.371128 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-mmlfs"]
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.378562 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.386156 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.391190 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-bb57d5f45-mb7xb"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.433089 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db7a06f9-1a77-4a21-ac05-0c73655fa8d0-logs\") pod \"horizon-6868cd5fd5-ct7dn\" (UID: \"db7a06f9-1a77-4a21-ac05-0c73655fa8d0\") " pod="openstack/horizon-6868cd5fd5-ct7dn"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.433155 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6bcr\" (UniqueName: \"kubernetes.io/projected/db7a06f9-1a77-4a21-ac05-0c73655fa8d0-kube-api-access-n6bcr\") pod \"horizon-6868cd5fd5-ct7dn\" (UID: \"db7a06f9-1a77-4a21-ac05-0c73655fa8d0\") " pod="openstack/horizon-6868cd5fd5-ct7dn"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.433209 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/702612c1-966a-4293-b0dc-05901a325794-scripts\") pod \"cinder-db-sync-mmlfs\" (UID: \"702612c1-966a-4293-b0dc-05901a325794\") " pod="openstack/cinder-db-sync-mmlfs"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.433237 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/702612c1-966a-4293-b0dc-05901a325794-config-data\") pod \"cinder-db-sync-mmlfs\" (UID: \"702612c1-966a-4293-b0dc-05901a325794\") " pod="openstack/cinder-db-sync-mmlfs"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.433275 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e9e0fa5-2756-4650-9199-7249ca8a1650-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2e9e0fa5-2756-4650-9199-7249ca8a1650\") " pod="openstack/glance-default-internal-api-0"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.433302 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3cbf5c7d-9220-43a8-9015-1c52d0c3855f-db-sync-config-data\") pod \"barbican-db-sync-blrls\" (UID: \"3cbf5c7d-9220-43a8-9015-1c52d0c3855f\") " pod="openstack/barbican-db-sync-blrls"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.433397 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e9e0fa5-2756-4650-9199-7249ca8a1650-scripts\") pod \"glance-default-internal-api-0\" (UID: \"2e9e0fa5-2756-4650-9199-7249ca8a1650\") " pod="openstack/glance-default-internal-api-0"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.433440 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2e9e0fa5-2756-4650-9199-7249ca8a1650-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2e9e0fa5-2756-4650-9199-7249ca8a1650\") " pod="openstack/glance-default-internal-api-0"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.433513 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cbf5c7d-9220-43a8-9015-1c52d0c3855f-combined-ca-bundle\") pod \"barbican-db-sync-blrls\" (UID: \"3cbf5c7d-9220-43a8-9015-1c52d0c3855f\") " pod="openstack/barbican-db-sync-blrls"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.433527 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzdbk\" (UniqueName: \"kubernetes.io/projected/3cbf5c7d-9220-43a8-9015-1c52d0c3855f-kube-api-access-fzdbk\") pod \"barbican-db-sync-blrls\" (UID: \"3cbf5c7d-9220-43a8-9015-1c52d0c3855f\") " pod="openstack/barbican-db-sync-blrls"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.433557 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e9e0fa5-2756-4650-9199-7249ca8a1650-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2e9e0fa5-2756-4650-9199-7249ca8a1650\") " pod="openstack/glance-default-internal-api-0"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.433573 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/db7a06f9-1a77-4a21-ac05-0c73655fa8d0-horizon-secret-key\") pod \"horizon-6868cd5fd5-ct7dn\" (UID: \"db7a06f9-1a77-4a21-ac05-0c73655fa8d0\") " pod="openstack/horizon-6868cd5fd5-ct7dn"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.433598 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/702612c1-966a-4293-b0dc-05901a325794-db-sync-config-data\") pod \"cinder-db-sync-mmlfs\" (UID: \"702612c1-966a-4293-b0dc-05901a325794\") " pod="openstack/cinder-db-sync-mmlfs"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.433612 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e9e0fa5-2756-4650-9199-7249ca8a1650-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"2e9e0fa5-2756-4650-9199-7249ca8a1650\") " pod="openstack/glance-default-internal-api-0"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.433633 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/db7a06f9-1a77-4a21-ac05-0c73655fa8d0-scripts\") pod \"horizon-6868cd5fd5-ct7dn\" (UID: \"db7a06f9-1a77-4a21-ac05-0c73655fa8d0\") " pod="openstack/horizon-6868cd5fd5-ct7dn"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.433647 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb0ed855-adbc-497b-9bc5-92330edbb8c8-combined-ca-bundle\") pod \"neutron-db-sync-gdxqz\" (UID: \"eb0ed855-adbc-497b-9bc5-92330edbb8c8\") " pod="openstack/neutron-db-sync-gdxqz"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.433683 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"2e9e0fa5-2756-4650-9199-7249ca8a1650\") " pod="openstack/glance-default-internal-api-0"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.433723 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j5n4\" (UniqueName: \"kubernetes.io/projected/eb0ed855-adbc-497b-9bc5-92330edbb8c8-kube-api-access-7j5n4\") pod \"neutron-db-sync-gdxqz\" (UID: \"eb0ed855-adbc-497b-9bc5-92330edbb8c8\") " pod="openstack/neutron-db-sync-gdxqz"
Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.433738 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/702612c1-966a-4293-b0dc-05901a325794-etc-machine-id\") pod \"cinder-db-sync-mmlfs\" (UID: \"702612c1-966a-4293-b0dc-05901a325794\") " pod="openstack/cinder-db-sync-mmlfs" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.433756 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/db7a06f9-1a77-4a21-ac05-0c73655fa8d0-config-data\") pod \"horizon-6868cd5fd5-ct7dn\" (UID: \"db7a06f9-1a77-4a21-ac05-0c73655fa8d0\") " pod="openstack/horizon-6868cd5fd5-ct7dn" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.433770 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zx8j\" (UniqueName: \"kubernetes.io/projected/702612c1-966a-4293-b0dc-05901a325794-kube-api-access-8zx8j\") pod \"cinder-db-sync-mmlfs\" (UID: \"702612c1-966a-4293-b0dc-05901a325794\") " pod="openstack/cinder-db-sync-mmlfs" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.433787 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/702612c1-966a-4293-b0dc-05901a325794-combined-ca-bundle\") pod \"cinder-db-sync-mmlfs\" (UID: \"702612c1-966a-4293-b0dc-05901a325794\") " pod="openstack/cinder-db-sync-mmlfs" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.433833 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e9e0fa5-2756-4650-9199-7249ca8a1650-logs\") pod \"glance-default-internal-api-0\" (UID: \"2e9e0fa5-2756-4650-9199-7249ca8a1650\") " pod="openstack/glance-default-internal-api-0" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.433852 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qcfl\" (UniqueName: 
\"kubernetes.io/projected/2e9e0fa5-2756-4650-9199-7249ca8a1650-kube-api-access-6qcfl\") pod \"glance-default-internal-api-0\" (UID: \"2e9e0fa5-2756-4650-9199-7249ca8a1650\") " pod="openstack/glance-default-internal-api-0" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.433876 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/eb0ed855-adbc-497b-9bc5-92330edbb8c8-config\") pod \"neutron-db-sync-gdxqz\" (UID: \"eb0ed855-adbc-497b-9bc5-92330edbb8c8\") " pod="openstack/neutron-db-sync-gdxqz" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.434259 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db7a06f9-1a77-4a21-ac05-0c73655fa8d0-logs\") pod \"horizon-6868cd5fd5-ct7dn\" (UID: \"db7a06f9-1a77-4a21-ac05-0c73655fa8d0\") " pod="openstack/horizon-6868cd5fd5-ct7dn" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.436710 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/702612c1-966a-4293-b0dc-05901a325794-etc-machine-id\") pod \"cinder-db-sync-mmlfs\" (UID: \"702612c1-966a-4293-b0dc-05901a325794\") " pod="openstack/cinder-db-sync-mmlfs" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.439310 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/db7a06f9-1a77-4a21-ac05-0c73655fa8d0-config-data\") pod \"horizon-6868cd5fd5-ct7dn\" (UID: \"db7a06f9-1a77-4a21-ac05-0c73655fa8d0\") " pod="openstack/horizon-6868cd5fd5-ct7dn" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.445482 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/702612c1-966a-4293-b0dc-05901a325794-scripts\") pod \"cinder-db-sync-mmlfs\" (UID: \"702612c1-966a-4293-b0dc-05901a325794\") " 
pod="openstack/cinder-db-sync-mmlfs" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.450396 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/db7a06f9-1a77-4a21-ac05-0c73655fa8d0-scripts\") pod \"horizon-6868cd5fd5-ct7dn\" (UID: \"db7a06f9-1a77-4a21-ac05-0c73655fa8d0\") " pod="openstack/horizon-6868cd5fd5-ct7dn" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.453406 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/702612c1-966a-4293-b0dc-05901a325794-combined-ca-bundle\") pod \"cinder-db-sync-mmlfs\" (UID: \"702612c1-966a-4293-b0dc-05901a325794\") " pod="openstack/cinder-db-sync-mmlfs" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.455017 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/db7a06f9-1a77-4a21-ac05-0c73655fa8d0-horizon-secret-key\") pod \"horizon-6868cd5fd5-ct7dn\" (UID: \"db7a06f9-1a77-4a21-ac05-0c73655fa8d0\") " pod="openstack/horizon-6868cd5fd5-ct7dn" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.455568 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/702612c1-966a-4293-b0dc-05901a325794-config-data\") pod \"cinder-db-sync-mmlfs\" (UID: \"702612c1-966a-4293-b0dc-05901a325794\") " pod="openstack/cinder-db-sync-mmlfs" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.458615 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6bcr\" (UniqueName: \"kubernetes.io/projected/db7a06f9-1a77-4a21-ac05-0c73655fa8d0-kube-api-access-n6bcr\") pod \"horizon-6868cd5fd5-ct7dn\" (UID: \"db7a06f9-1a77-4a21-ac05-0c73655fa8d0\") " pod="openstack/horizon-6868cd5fd5-ct7dn" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.458928 4580 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/702612c1-966a-4293-b0dc-05901a325794-db-sync-config-data\") pod \"cinder-db-sync-mmlfs\" (UID: \"702612c1-966a-4293-b0dc-05901a325794\") " pod="openstack/cinder-db-sync-mmlfs" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.467559 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-s9pd9"] Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.470716 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zx8j\" (UniqueName: \"kubernetes.io/projected/702612c1-966a-4293-b0dc-05901a325794-kube-api-access-8zx8j\") pod \"cinder-db-sync-mmlfs\" (UID: \"702612c1-966a-4293-b0dc-05901a325794\") " pod="openstack/cinder-db-sync-mmlfs" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.470906 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-s9pd9" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.475907 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.476085 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-vn8vt" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.476262 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.485342 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fdbfbc95f-q6qcr"] Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.495432 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-s9pd9"] Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.511154 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 12 13:21:22 
crc kubenswrapper[4580]: I0112 13:21:22.512478 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.516285 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.516663 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.535893 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e9e0fa5-2756-4650-9199-7249ca8a1650-scripts\") pod \"glance-default-internal-api-0\" (UID: \"2e9e0fa5-2756-4650-9199-7249ca8a1650\") " pod="openstack/glance-default-internal-api-0" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.536422 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2e9e0fa5-2756-4650-9199-7249ca8a1650-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2e9e0fa5-2756-4650-9199-7249ca8a1650\") " pod="openstack/glance-default-internal-api-0" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.536674 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cbf5c7d-9220-43a8-9015-1c52d0c3855f-combined-ca-bundle\") pod \"barbican-db-sync-blrls\" (UID: \"3cbf5c7d-9220-43a8-9015-1c52d0c3855f\") " pod="openstack/barbican-db-sync-blrls" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.537444 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzdbk\" (UniqueName: \"kubernetes.io/projected/3cbf5c7d-9220-43a8-9015-1c52d0c3855f-kube-api-access-fzdbk\") pod \"barbican-db-sync-blrls\" (UID: 
\"3cbf5c7d-9220-43a8-9015-1c52d0c3855f\") " pod="openstack/barbican-db-sync-blrls" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.537080 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2e9e0fa5-2756-4650-9199-7249ca8a1650-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2e9e0fa5-2756-4650-9199-7249ca8a1650\") " pod="openstack/glance-default-internal-api-0" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.537569 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e9e0fa5-2756-4650-9199-7249ca8a1650-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2e9e0fa5-2756-4650-9199-7249ca8a1650\") " pod="openstack/glance-default-internal-api-0" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.537680 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e9e0fa5-2756-4650-9199-7249ca8a1650-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"2e9e0fa5-2756-4650-9199-7249ca8a1650\") " pod="openstack/glance-default-internal-api-0" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.537711 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb0ed855-adbc-497b-9bc5-92330edbb8c8-combined-ca-bundle\") pod \"neutron-db-sync-gdxqz\" (UID: \"eb0ed855-adbc-497b-9bc5-92330edbb8c8\") " pod="openstack/neutron-db-sync-gdxqz" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.537752 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"2e9e0fa5-2756-4650-9199-7249ca8a1650\") " pod="openstack/glance-default-internal-api-0" Jan 
12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.537779 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edd41e34-6733-4a77-b99b-3ab0895b124a-combined-ca-bundle\") pod \"placement-db-sync-s9pd9\" (UID: \"edd41e34-6733-4a77-b99b-3ab0895b124a\") " pod="openstack/placement-db-sync-s9pd9" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.537806 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edd41e34-6733-4a77-b99b-3ab0895b124a-scripts\") pod \"placement-db-sync-s9pd9\" (UID: \"edd41e34-6733-4a77-b99b-3ab0895b124a\") " pod="openstack/placement-db-sync-s9pd9" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.537827 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7j5n4\" (UniqueName: \"kubernetes.io/projected/eb0ed855-adbc-497b-9bc5-92330edbb8c8-kube-api-access-7j5n4\") pod \"neutron-db-sync-gdxqz\" (UID: \"eb0ed855-adbc-497b-9bc5-92330edbb8c8\") " pod="openstack/neutron-db-sync-gdxqz" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.537878 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhbdm\" (UniqueName: \"kubernetes.io/projected/edd41e34-6733-4a77-b99b-3ab0895b124a-kube-api-access-xhbdm\") pod \"placement-db-sync-s9pd9\" (UID: \"edd41e34-6733-4a77-b99b-3ab0895b124a\") " pod="openstack/placement-db-sync-s9pd9" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.537932 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e9e0fa5-2756-4650-9199-7249ca8a1650-logs\") pod \"glance-default-internal-api-0\" (UID: \"2e9e0fa5-2756-4650-9199-7249ca8a1650\") " pod="openstack/glance-default-internal-api-0" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 
13:21:22.537949 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edd41e34-6733-4a77-b99b-3ab0895b124a-config-data\") pod \"placement-db-sync-s9pd9\" (UID: \"edd41e34-6733-4a77-b99b-3ab0895b124a\") " pod="openstack/placement-db-sync-s9pd9" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.537964 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qcfl\" (UniqueName: \"kubernetes.io/projected/2e9e0fa5-2756-4650-9199-7249ca8a1650-kube-api-access-6qcfl\") pod \"glance-default-internal-api-0\" (UID: \"2e9e0fa5-2756-4650-9199-7249ca8a1650\") " pod="openstack/glance-default-internal-api-0" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.537978 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/eb0ed855-adbc-497b-9bc5-92330edbb8c8-config\") pod \"neutron-db-sync-gdxqz\" (UID: \"eb0ed855-adbc-497b-9bc5-92330edbb8c8\") " pod="openstack/neutron-db-sync-gdxqz" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.538078 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e9e0fa5-2756-4650-9199-7249ca8a1650-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2e9e0fa5-2756-4650-9199-7249ca8a1650\") " pod="openstack/glance-default-internal-api-0" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.538123 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3cbf5c7d-9220-43a8-9015-1c52d0c3855f-db-sync-config-data\") pod \"barbican-db-sync-blrls\" (UID: \"3cbf5c7d-9220-43a8-9015-1c52d0c3855f\") " pod="openstack/barbican-db-sync-blrls" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.538144 4580 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edd41e34-6733-4a77-b99b-3ab0895b124a-logs\") pod \"placement-db-sync-s9pd9\" (UID: \"edd41e34-6733-4a77-b99b-3ab0895b124a\") " pod="openstack/placement-db-sync-s9pd9" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.539442 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e9e0fa5-2756-4650-9199-7249ca8a1650-scripts\") pod \"glance-default-internal-api-0\" (UID: \"2e9e0fa5-2756-4650-9199-7249ca8a1650\") " pod="openstack/glance-default-internal-api-0" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.540300 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f6f8cb849-s5ztm"] Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.540518 4580 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"2e9e0fa5-2756-4650-9199-7249ca8a1650\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.544758 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f6f8cb849-s5ztm" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.547653 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e9e0fa5-2756-4650-9199-7249ca8a1650-logs\") pod \"glance-default-internal-api-0\" (UID: \"2e9e0fa5-2756-4650-9199-7249ca8a1650\") " pod="openstack/glance-default-internal-api-0" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.549493 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb0ed855-adbc-497b-9bc5-92330edbb8c8-combined-ca-bundle\") pod \"neutron-db-sync-gdxqz\" (UID: \"eb0ed855-adbc-497b-9bc5-92330edbb8c8\") " pod="openstack/neutron-db-sync-gdxqz" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.554948 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3cbf5c7d-9220-43a8-9015-1c52d0c3855f-db-sync-config-data\") pod \"barbican-db-sync-blrls\" (UID: \"3cbf5c7d-9220-43a8-9015-1c52d0c3855f\") " pod="openstack/barbican-db-sync-blrls" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.555399 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.556712 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/eb0ed855-adbc-497b-9bc5-92330edbb8c8-config\") pod \"neutron-db-sync-gdxqz\" (UID: \"eb0ed855-adbc-497b-9bc5-92330edbb8c8\") " pod="openstack/neutron-db-sync-gdxqz" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.557753 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e9e0fa5-2756-4650-9199-7249ca8a1650-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: 
\"2e9e0fa5-2756-4650-9199-7249ca8a1650\") " pod="openstack/glance-default-internal-api-0" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.557972 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cbf5c7d-9220-43a8-9015-1c52d0c3855f-combined-ca-bundle\") pod \"barbican-db-sync-blrls\" (UID: \"3cbf5c7d-9220-43a8-9015-1c52d0c3855f\") " pod="openstack/barbican-db-sync-blrls" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.559396 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzdbk\" (UniqueName: \"kubernetes.io/projected/3cbf5c7d-9220-43a8-9015-1c52d0c3855f-kube-api-access-fzdbk\") pod \"barbican-db-sync-blrls\" (UID: \"3cbf5c7d-9220-43a8-9015-1c52d0c3855f\") " pod="openstack/barbican-db-sync-blrls" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.559979 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e9e0fa5-2756-4650-9199-7249ca8a1650-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"2e9e0fa5-2756-4650-9199-7249ca8a1650\") " pod="openstack/glance-default-internal-api-0" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.563282 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qcfl\" (UniqueName: \"kubernetes.io/projected/2e9e0fa5-2756-4650-9199-7249ca8a1650-kube-api-access-6qcfl\") pod \"glance-default-internal-api-0\" (UID: \"2e9e0fa5-2756-4650-9199-7249ca8a1650\") " pod="openstack/glance-default-internal-api-0" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.565582 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7j5n4\" (UniqueName: \"kubernetes.io/projected/eb0ed855-adbc-497b-9bc5-92330edbb8c8-kube-api-access-7j5n4\") pod \"neutron-db-sync-gdxqz\" (UID: \"eb0ed855-adbc-497b-9bc5-92330edbb8c8\") " 
pod="openstack/neutron-db-sync-gdxqz" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.575666 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e9e0fa5-2756-4650-9199-7249ca8a1650-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2e9e0fa5-2756-4650-9199-7249ca8a1650\") " pod="openstack/glance-default-internal-api-0" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.578096 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6f8cb849-s5ztm"] Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.593825 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"2e9e0fa5-2756-4650-9199-7249ca8a1650\") " pod="openstack/glance-default-internal-api-0" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.626641 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jtmvw"] Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.628473 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jtmvw" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.637579 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jtmvw"] Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.642204 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edd41e34-6733-4a77-b99b-3ab0895b124a-config-data\") pod \"placement-db-sync-s9pd9\" (UID: \"edd41e34-6733-4a77-b99b-3ab0895b124a\") " pod="openstack/placement-db-sync-s9pd9" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.642285 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edd41e34-6733-4a77-b99b-3ab0895b124a-logs\") pod \"placement-db-sync-s9pd9\" (UID: \"edd41e34-6733-4a77-b99b-3ab0895b124a\") " pod="openstack/placement-db-sync-s9pd9" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.642381 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edd41e34-6733-4a77-b99b-3ab0895b124a-combined-ca-bundle\") pod \"placement-db-sync-s9pd9\" (UID: \"edd41e34-6733-4a77-b99b-3ab0895b124a\") " pod="openstack/placement-db-sync-s9pd9" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.642400 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edd41e34-6733-4a77-b99b-3ab0895b124a-scripts\") pod \"placement-db-sync-s9pd9\" (UID: \"edd41e34-6733-4a77-b99b-3ab0895b124a\") " pod="openstack/placement-db-sync-s9pd9" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.642419 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhbdm\" (UniqueName: \"kubernetes.io/projected/edd41e34-6733-4a77-b99b-3ab0895b124a-kube-api-access-xhbdm\") 
pod \"placement-db-sync-s9pd9\" (UID: \"edd41e34-6733-4a77-b99b-3ab0895b124a\") " pod="openstack/placement-db-sync-s9pd9" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.647505 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edd41e34-6733-4a77-b99b-3ab0895b124a-logs\") pod \"placement-db-sync-s9pd9\" (UID: \"edd41e34-6733-4a77-b99b-3ab0895b124a\") " pod="openstack/placement-db-sync-s9pd9" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.650636 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edd41e34-6733-4a77-b99b-3ab0895b124a-scripts\") pod \"placement-db-sync-s9pd9\" (UID: \"edd41e34-6733-4a77-b99b-3ab0895b124a\") " pod="openstack/placement-db-sync-s9pd9" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.651888 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edd41e34-6733-4a77-b99b-3ab0895b124a-config-data\") pod \"placement-db-sync-s9pd9\" (UID: \"edd41e34-6733-4a77-b99b-3ab0895b124a\") " pod="openstack/placement-db-sync-s9pd9" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.657166 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6868cd5fd5-ct7dn" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.659026 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edd41e34-6733-4a77-b99b-3ab0895b124a-combined-ca-bundle\") pod \"placement-db-sync-s9pd9\" (UID: \"edd41e34-6733-4a77-b99b-3ab0895b124a\") " pod="openstack/placement-db-sync-s9pd9" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.679698 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhbdm\" (UniqueName: \"kubernetes.io/projected/edd41e34-6733-4a77-b99b-3ab0895b124a-kube-api-access-xhbdm\") pod \"placement-db-sync-s9pd9\" (UID: \"edd41e34-6733-4a77-b99b-3ab0895b124a\") " pod="openstack/placement-db-sync-s9pd9" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.698313 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-mmlfs" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.706660 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-blrls" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.726591 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-gdxqz" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.743907 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj2tz\" (UniqueName: \"kubernetes.io/projected/9a79b6ea-48d6-4df6-9a7e-dbfe246edc74-kube-api-access-jj2tz\") pod \"community-operators-jtmvw\" (UID: \"9a79b6ea-48d6-4df6-9a7e-dbfe246edc74\") " pod="openshift-marketplace/community-operators-jtmvw" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.744302 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd9391a2-339e-4eed-84df-164e7eae3e0c-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6f8cb849-s5ztm\" (UID: \"cd9391a2-339e-4eed-84df-164e7eae3e0c\") " pod="openstack/dnsmasq-dns-6f6f8cb849-s5ztm" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.744380 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cd9391a2-339e-4eed-84df-164e7eae3e0c-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6f8cb849-s5ztm\" (UID: \"cd9391a2-339e-4eed-84df-164e7eae3e0c\") " pod="openstack/dnsmasq-dns-6f6f8cb849-s5ztm" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.744422 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p8fd\" (UniqueName: \"kubernetes.io/projected/cd9391a2-339e-4eed-84df-164e7eae3e0c-kube-api-access-7p8fd\") pod \"dnsmasq-dns-6f6f8cb849-s5ztm\" (UID: \"cd9391a2-339e-4eed-84df-164e7eae3e0c\") " pod="openstack/dnsmasq-dns-6f6f8cb849-s5ztm" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.744515 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54193329-eff6-4ec8-874c-722003442682-scripts\") 
pod \"glance-default-external-api-0\" (UID: \"54193329-eff6-4ec8-874c-722003442682\") " pod="openstack/glance-default-external-api-0" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.744552 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd9391a2-339e-4eed-84df-164e7eae3e0c-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6f8cb849-s5ztm\" (UID: \"cd9391a2-339e-4eed-84df-164e7eae3e0c\") " pod="openstack/dnsmasq-dns-6f6f8cb849-s5ztm" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.744586 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54193329-eff6-4ec8-874c-722003442682-config-data\") pod \"glance-default-external-api-0\" (UID: \"54193329-eff6-4ec8-874c-722003442682\") " pod="openstack/glance-default-external-api-0" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.744609 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"54193329-eff6-4ec8-874c-722003442682\") " pod="openstack/glance-default-external-api-0" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.744695 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54193329-eff6-4ec8-874c-722003442682-logs\") pod \"glance-default-external-api-0\" (UID: \"54193329-eff6-4ec8-874c-722003442682\") " pod="openstack/glance-default-external-api-0" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.744726 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/54193329-eff6-4ec8-874c-722003442682-public-tls-certs\") pod 
\"glance-default-external-api-0\" (UID: \"54193329-eff6-4ec8-874c-722003442682\") " pod="openstack/glance-default-external-api-0" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.744837 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a79b6ea-48d6-4df6-9a7e-dbfe246edc74-catalog-content\") pod \"community-operators-jtmvw\" (UID: \"9a79b6ea-48d6-4df6-9a7e-dbfe246edc74\") " pod="openshift-marketplace/community-operators-jtmvw" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.744896 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54193329-eff6-4ec8-874c-722003442682-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"54193329-eff6-4ec8-874c-722003442682\") " pod="openstack/glance-default-external-api-0" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.745408 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/54193329-eff6-4ec8-874c-722003442682-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"54193329-eff6-4ec8-874c-722003442682\") " pod="openstack/glance-default-external-api-0" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.745443 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxf4t\" (UniqueName: \"kubernetes.io/projected/54193329-eff6-4ec8-874c-722003442682-kube-api-access-jxf4t\") pod \"glance-default-external-api-0\" (UID: \"54193329-eff6-4ec8-874c-722003442682\") " pod="openstack/glance-default-external-api-0" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.745493 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/cd9391a2-339e-4eed-84df-164e7eae3e0c-config\") pod \"dnsmasq-dns-6f6f8cb849-s5ztm\" (UID: \"cd9391a2-339e-4eed-84df-164e7eae3e0c\") " pod="openstack/dnsmasq-dns-6f6f8cb849-s5ztm" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.745523 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd9391a2-339e-4eed-84df-164e7eae3e0c-dns-svc\") pod \"dnsmasq-dns-6f6f8cb849-s5ztm\" (UID: \"cd9391a2-339e-4eed-84df-164e7eae3e0c\") " pod="openstack/dnsmasq-dns-6f6f8cb849-s5ztm" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.745564 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a79b6ea-48d6-4df6-9a7e-dbfe246edc74-utilities\") pod \"community-operators-jtmvw\" (UID: \"9a79b6ea-48d6-4df6-9a7e-dbfe246edc74\") " pod="openshift-marketplace/community-operators-jtmvw" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.750698 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6k6zv" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.756261 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.798714 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-s9pd9" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.847499 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd9391a2-339e-4eed-84df-164e7eae3e0c-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6f8cb849-s5ztm\" (UID: \"cd9391a2-339e-4eed-84df-164e7eae3e0c\") " pod="openstack/dnsmasq-dns-6f6f8cb849-s5ztm" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.847540 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cd9391a2-339e-4eed-84df-164e7eae3e0c-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6f8cb849-s5ztm\" (UID: \"cd9391a2-339e-4eed-84df-164e7eae3e0c\") " pod="openstack/dnsmasq-dns-6f6f8cb849-s5ztm" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.847560 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p8fd\" (UniqueName: \"kubernetes.io/projected/cd9391a2-339e-4eed-84df-164e7eae3e0c-kube-api-access-7p8fd\") pod \"dnsmasq-dns-6f6f8cb849-s5ztm\" (UID: \"cd9391a2-339e-4eed-84df-164e7eae3e0c\") " pod="openstack/dnsmasq-dns-6f6f8cb849-s5ztm" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.847596 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54193329-eff6-4ec8-874c-722003442682-scripts\") pod \"glance-default-external-api-0\" (UID: \"54193329-eff6-4ec8-874c-722003442682\") " pod="openstack/glance-default-external-api-0" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.847628 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd9391a2-339e-4eed-84df-164e7eae3e0c-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6f8cb849-s5ztm\" (UID: \"cd9391a2-339e-4eed-84df-164e7eae3e0c\") " 
pod="openstack/dnsmasq-dns-6f6f8cb849-s5ztm" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.847647 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54193329-eff6-4ec8-874c-722003442682-config-data\") pod \"glance-default-external-api-0\" (UID: \"54193329-eff6-4ec8-874c-722003442682\") " pod="openstack/glance-default-external-api-0" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.847663 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"54193329-eff6-4ec8-874c-722003442682\") " pod="openstack/glance-default-external-api-0" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.847684 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54193329-eff6-4ec8-874c-722003442682-logs\") pod \"glance-default-external-api-0\" (UID: \"54193329-eff6-4ec8-874c-722003442682\") " pod="openstack/glance-default-external-api-0" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.847701 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/54193329-eff6-4ec8-874c-722003442682-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"54193329-eff6-4ec8-874c-722003442682\") " pod="openstack/glance-default-external-api-0" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.847740 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a79b6ea-48d6-4df6-9a7e-dbfe246edc74-catalog-content\") pod \"community-operators-jtmvw\" (UID: \"9a79b6ea-48d6-4df6-9a7e-dbfe246edc74\") " pod="openshift-marketplace/community-operators-jtmvw" Jan 12 13:21:22 crc kubenswrapper[4580]: 
I0112 13:21:22.847788 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54193329-eff6-4ec8-874c-722003442682-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"54193329-eff6-4ec8-874c-722003442682\") " pod="openstack/glance-default-external-api-0" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.847815 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/54193329-eff6-4ec8-874c-722003442682-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"54193329-eff6-4ec8-874c-722003442682\") " pod="openstack/glance-default-external-api-0" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.847829 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxf4t\" (UniqueName: \"kubernetes.io/projected/54193329-eff6-4ec8-874c-722003442682-kube-api-access-jxf4t\") pod \"glance-default-external-api-0\" (UID: \"54193329-eff6-4ec8-874c-722003442682\") " pod="openstack/glance-default-external-api-0" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.847851 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd9391a2-339e-4eed-84df-164e7eae3e0c-config\") pod \"dnsmasq-dns-6f6f8cb849-s5ztm\" (UID: \"cd9391a2-339e-4eed-84df-164e7eae3e0c\") " pod="openstack/dnsmasq-dns-6f6f8cb849-s5ztm" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.847878 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd9391a2-339e-4eed-84df-164e7eae3e0c-dns-svc\") pod \"dnsmasq-dns-6f6f8cb849-s5ztm\" (UID: \"cd9391a2-339e-4eed-84df-164e7eae3e0c\") " pod="openstack/dnsmasq-dns-6f6f8cb849-s5ztm" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.847903 4580 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a79b6ea-48d6-4df6-9a7e-dbfe246edc74-utilities\") pod \"community-operators-jtmvw\" (UID: \"9a79b6ea-48d6-4df6-9a7e-dbfe246edc74\") " pod="openshift-marketplace/community-operators-jtmvw" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.847928 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jj2tz\" (UniqueName: \"kubernetes.io/projected/9a79b6ea-48d6-4df6-9a7e-dbfe246edc74-kube-api-access-jj2tz\") pod \"community-operators-jtmvw\" (UID: \"9a79b6ea-48d6-4df6-9a7e-dbfe246edc74\") " pod="openshift-marketplace/community-operators-jtmvw" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.848514 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54193329-eff6-4ec8-874c-722003442682-logs\") pod \"glance-default-external-api-0\" (UID: \"54193329-eff6-4ec8-874c-722003442682\") " pod="openstack/glance-default-external-api-0" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.849803 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd9391a2-339e-4eed-84df-164e7eae3e0c-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6f8cb849-s5ztm\" (UID: \"cd9391a2-339e-4eed-84df-164e7eae3e0c\") " pod="openstack/dnsmasq-dns-6f6f8cb849-s5ztm" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.850734 4580 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"54193329-eff6-4ec8-874c-722003442682\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.862174 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/54193329-eff6-4ec8-874c-722003442682-scripts\") pod \"glance-default-external-api-0\" (UID: \"54193329-eff6-4ec8-874c-722003442682\") " pod="openstack/glance-default-external-api-0" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.864055 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/54193329-eff6-4ec8-874c-722003442682-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"54193329-eff6-4ec8-874c-722003442682\") " pod="openstack/glance-default-external-api-0" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.866909 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jj2tz\" (UniqueName: \"kubernetes.io/projected/9a79b6ea-48d6-4df6-9a7e-dbfe246edc74-kube-api-access-jj2tz\") pod \"community-operators-jtmvw\" (UID: \"9a79b6ea-48d6-4df6-9a7e-dbfe246edc74\") " pod="openshift-marketplace/community-operators-jtmvw" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.868336 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd9391a2-339e-4eed-84df-164e7eae3e0c-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6f8cb849-s5ztm\" (UID: \"cd9391a2-339e-4eed-84df-164e7eae3e0c\") " pod="openstack/dnsmasq-dns-6f6f8cb849-s5ztm" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.869046 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54193329-eff6-4ec8-874c-722003442682-config-data\") pod \"glance-default-external-api-0\" (UID: \"54193329-eff6-4ec8-874c-722003442682\") " pod="openstack/glance-default-external-api-0" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.869233 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/cd9391a2-339e-4eed-84df-164e7eae3e0c-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6f8cb849-s5ztm\" (UID: \"cd9391a2-339e-4eed-84df-164e7eae3e0c\") " pod="openstack/dnsmasq-dns-6f6f8cb849-s5ztm" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.879374 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a79b6ea-48d6-4df6-9a7e-dbfe246edc74-catalog-content\") pod \"community-operators-jtmvw\" (UID: \"9a79b6ea-48d6-4df6-9a7e-dbfe246edc74\") " pod="openshift-marketplace/community-operators-jtmvw" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.885499 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd9391a2-339e-4eed-84df-164e7eae3e0c-dns-svc\") pod \"dnsmasq-dns-6f6f8cb849-s5ztm\" (UID: \"cd9391a2-339e-4eed-84df-164e7eae3e0c\") " pod="openstack/dnsmasq-dns-6f6f8cb849-s5ztm" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.886096 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd9391a2-339e-4eed-84df-164e7eae3e0c-config\") pod \"dnsmasq-dns-6f6f8cb849-s5ztm\" (UID: \"cd9391a2-339e-4eed-84df-164e7eae3e0c\") " pod="openstack/dnsmasq-dns-6f6f8cb849-s5ztm" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.913160 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54193329-eff6-4ec8-874c-722003442682-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"54193329-eff6-4ec8-874c-722003442682\") " pod="openstack/glance-default-external-api-0" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.921122 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/54193329-eff6-4ec8-874c-722003442682-httpd-run\") pod 
\"glance-default-external-api-0\" (UID: \"54193329-eff6-4ec8-874c-722003442682\") " pod="openstack/glance-default-external-api-0" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.926765 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"54193329-eff6-4ec8-874c-722003442682\") " pod="openstack/glance-default-external-api-0" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.931855 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7p8fd\" (UniqueName: \"kubernetes.io/projected/cd9391a2-339e-4eed-84df-164e7eae3e0c-kube-api-access-7p8fd\") pod \"dnsmasq-dns-6f6f8cb849-s5ztm\" (UID: \"cd9391a2-339e-4eed-84df-164e7eae3e0c\") " pod="openstack/dnsmasq-dns-6f6f8cb849-s5ztm" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.937390 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a79b6ea-48d6-4df6-9a7e-dbfe246edc74-utilities\") pod \"community-operators-jtmvw\" (UID: \"9a79b6ea-48d6-4df6-9a7e-dbfe246edc74\") " pod="openshift-marketplace/community-operators-jtmvw" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.949052 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fdbfbc95f-q6qcr"] Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.949640 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxf4t\" (UniqueName: \"kubernetes.io/projected/54193329-eff6-4ec8-874c-722003442682-kube-api-access-jxf4t\") pod \"glance-default-external-api-0\" (UID: \"54193329-eff6-4ec8-874c-722003442682\") " pod="openstack/glance-default-external-api-0" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.971494 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jtmvw" Jan 12 13:21:22 crc kubenswrapper[4580]: I0112 13:21:22.983269 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-hcdf4"] Jan 12 13:21:23 crc kubenswrapper[4580]: I0112 13:21:23.138536 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 12 13:21:23 crc kubenswrapper[4580]: I0112 13:21:23.181182 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-bb57d5f45-mb7xb"] Jan 12 13:21:23 crc kubenswrapper[4580]: I0112 13:21:23.186995 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 12 13:21:23 crc kubenswrapper[4580]: I0112 13:21:23.189949 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6f8cb849-s5ztm" Jan 12 13:21:23 crc kubenswrapper[4580]: I0112 13:21:23.419543 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6868cd5fd5-ct7dn"] Jan 12 13:21:23 crc kubenswrapper[4580]: I0112 13:21:23.437112 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-mmlfs"] Jan 12 13:21:23 crc kubenswrapper[4580]: I0112 13:21:23.568217 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-s9pd9"] Jan 12 13:21:23 crc kubenswrapper[4580]: I0112 13:21:23.580812 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-gdxqz"] Jan 12 13:21:23 crc kubenswrapper[4580]: I0112 13:21:23.582808 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-blrls"] Jan 12 13:21:23 crc kubenswrapper[4580]: I0112 13:21:23.647049 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 12 13:21:23 crc kubenswrapper[4580]: I0112 13:21:23.708081 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-bootstrap-hcdf4" event={"ID":"a838d8bb-d607-4309-a666-8da1387631fc","Type":"ContainerStarted","Data":"a984ef6d9319849bc0b8182a27d40cf5f9d1de49e4b0d1868e660d6412fd6006"} Jan 12 13:21:23 crc kubenswrapper[4580]: I0112 13:21:23.709415 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fdbfbc95f-q6qcr" event={"ID":"f480553f-1da5-4142-bb99-429c8cefc6de","Type":"ContainerStarted","Data":"fea6ad4b8a64eaa4683029c6d5107e353dcdbeeb879280a35adef178e2d51b58"} Jan 12 13:21:23 crc kubenswrapper[4580]: I0112 13:21:23.727249 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jtmvw"] Jan 12 13:21:23 crc kubenswrapper[4580]: I0112 13:21:23.985738 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 12 13:21:24 crc kubenswrapper[4580]: I0112 13:21:24.054083 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6868cd5fd5-ct7dn"] Jan 12 13:21:24 crc kubenswrapper[4580]: I0112 13:21:24.076022 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 12 13:21:24 crc kubenswrapper[4580]: I0112 13:21:24.098223 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-74f4fd9547-lhpct"] Jan 12 13:21:24 crc kubenswrapper[4580]: I0112 13:21:24.107511 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-74f4fd9547-lhpct" Jan 12 13:21:24 crc kubenswrapper[4580]: I0112 13:21:24.117090 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-74f4fd9547-lhpct"] Jan 12 13:21:24 crc kubenswrapper[4580]: I0112 13:21:24.124477 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 12 13:21:24 crc kubenswrapper[4580]: I0112 13:21:24.192923 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fca18973-3724-49c5-b8c4-cf6beb66c288-logs\") pod \"horizon-74f4fd9547-lhpct\" (UID: \"fca18973-3724-49c5-b8c4-cf6beb66c288\") " pod="openstack/horizon-74f4fd9547-lhpct" Jan 12 13:21:24 crc kubenswrapper[4580]: I0112 13:21:24.193045 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkj25\" (UniqueName: \"kubernetes.io/projected/fca18973-3724-49c5-b8c4-cf6beb66c288-kube-api-access-kkj25\") pod \"horizon-74f4fd9547-lhpct\" (UID: \"fca18973-3724-49c5-b8c4-cf6beb66c288\") " pod="openstack/horizon-74f4fd9547-lhpct" Jan 12 13:21:24 crc kubenswrapper[4580]: I0112 13:21:24.193163 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fca18973-3724-49c5-b8c4-cf6beb66c288-config-data\") pod \"horizon-74f4fd9547-lhpct\" (UID: \"fca18973-3724-49c5-b8c4-cf6beb66c288\") " pod="openstack/horizon-74f4fd9547-lhpct" Jan 12 13:21:24 crc kubenswrapper[4580]: I0112 13:21:24.193192 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fca18973-3724-49c5-b8c4-cf6beb66c288-scripts\") pod \"horizon-74f4fd9547-lhpct\" (UID: \"fca18973-3724-49c5-b8c4-cf6beb66c288\") " pod="openstack/horizon-74f4fd9547-lhpct" Jan 12 13:21:24 crc 
kubenswrapper[4580]: I0112 13:21:24.193221 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fca18973-3724-49c5-b8c4-cf6beb66c288-horizon-secret-key\") pod \"horizon-74f4fd9547-lhpct\" (UID: \"fca18973-3724-49c5-b8c4-cf6beb66c288\") " pod="openstack/horizon-74f4fd9547-lhpct" Jan 12 13:21:24 crc kubenswrapper[4580]: I0112 13:21:24.294407 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fca18973-3724-49c5-b8c4-cf6beb66c288-horizon-secret-key\") pod \"horizon-74f4fd9547-lhpct\" (UID: \"fca18973-3724-49c5-b8c4-cf6beb66c288\") " pod="openstack/horizon-74f4fd9547-lhpct" Jan 12 13:21:24 crc kubenswrapper[4580]: I0112 13:21:24.294547 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fca18973-3724-49c5-b8c4-cf6beb66c288-logs\") pod \"horizon-74f4fd9547-lhpct\" (UID: \"fca18973-3724-49c5-b8c4-cf6beb66c288\") " pod="openstack/horizon-74f4fd9547-lhpct" Jan 12 13:21:24 crc kubenswrapper[4580]: I0112 13:21:24.294648 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkj25\" (UniqueName: \"kubernetes.io/projected/fca18973-3724-49c5-b8c4-cf6beb66c288-kube-api-access-kkj25\") pod \"horizon-74f4fd9547-lhpct\" (UID: \"fca18973-3724-49c5-b8c4-cf6beb66c288\") " pod="openstack/horizon-74f4fd9547-lhpct" Jan 12 13:21:24 crc kubenswrapper[4580]: I0112 13:21:24.294710 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fca18973-3724-49c5-b8c4-cf6beb66c288-config-data\") pod \"horizon-74f4fd9547-lhpct\" (UID: \"fca18973-3724-49c5-b8c4-cf6beb66c288\") " pod="openstack/horizon-74f4fd9547-lhpct" Jan 12 13:21:24 crc kubenswrapper[4580]: I0112 13:21:24.294746 4580 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fca18973-3724-49c5-b8c4-cf6beb66c288-scripts\") pod \"horizon-74f4fd9547-lhpct\" (UID: \"fca18973-3724-49c5-b8c4-cf6beb66c288\") " pod="openstack/horizon-74f4fd9547-lhpct" Jan 12 13:21:24 crc kubenswrapper[4580]: I0112 13:21:24.295456 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fca18973-3724-49c5-b8c4-cf6beb66c288-scripts\") pod \"horizon-74f4fd9547-lhpct\" (UID: \"fca18973-3724-49c5-b8c4-cf6beb66c288\") " pod="openstack/horizon-74f4fd9547-lhpct" Jan 12 13:21:24 crc kubenswrapper[4580]: I0112 13:21:24.295969 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fca18973-3724-49c5-b8c4-cf6beb66c288-logs\") pod \"horizon-74f4fd9547-lhpct\" (UID: \"fca18973-3724-49c5-b8c4-cf6beb66c288\") " pod="openstack/horizon-74f4fd9547-lhpct" Jan 12 13:21:24 crc kubenswrapper[4580]: I0112 13:21:24.296356 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fca18973-3724-49c5-b8c4-cf6beb66c288-config-data\") pod \"horizon-74f4fd9547-lhpct\" (UID: \"fca18973-3724-49c5-b8c4-cf6beb66c288\") " pod="openstack/horizon-74f4fd9547-lhpct" Jan 12 13:21:24 crc kubenswrapper[4580]: I0112 13:21:24.299720 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fca18973-3724-49c5-b8c4-cf6beb66c288-horizon-secret-key\") pod \"horizon-74f4fd9547-lhpct\" (UID: \"fca18973-3724-49c5-b8c4-cf6beb66c288\") " pod="openstack/horizon-74f4fd9547-lhpct" Jan 12 13:21:24 crc kubenswrapper[4580]: I0112 13:21:24.312445 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkj25\" (UniqueName: \"kubernetes.io/projected/fca18973-3724-49c5-b8c4-cf6beb66c288-kube-api-access-kkj25\") pod 
\"horizon-74f4fd9547-lhpct\" (UID: \"fca18973-3724-49c5-b8c4-cf6beb66c288\") " pod="openstack/horizon-74f4fd9547-lhpct"
Jan 12 13:21:24 crc kubenswrapper[4580]: I0112 13:21:24.428963 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-74f4fd9547-lhpct"
Jan 12 13:21:25 crc kubenswrapper[4580]: I0112 13:21:25.158613 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6k6zv"]
Jan 12 13:21:25 crc kubenswrapper[4580]: I0112 13:21:25.737894 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6k6zv" podUID="701cbadd-e4f4-4c1d-bd56-e51ed0e75b8a" containerName="registry-server" containerID="cri-o://241f08f82663f20651ffa14ad492dc2d8fab7f5c65ab7b29b93aba202c05450d" gracePeriod=2
Jan 12 13:21:26 crc kubenswrapper[4580]: W0112 13:21:26.287618 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84c1ab4b_8921_4f4a_88dd_adf6e224d62c.slice/crio-ea4fd92280dbf60298bec5450dfed75dd984bde99b64d228e1e725d11a02e6fa WatchSource:0}: Error finding container ea4fd92280dbf60298bec5450dfed75dd984bde99b64d228e1e725d11a02e6fa: Status 404 returned error can't find the container with id ea4fd92280dbf60298bec5450dfed75dd984bde99b64d228e1e725d11a02e6fa
Jan 12 13:21:26 crc kubenswrapper[4580]: W0112 13:21:26.293546 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod702612c1_966a_4293_b0dc_05901a325794.slice/crio-91816bd55f430375591c93aba4ab093551d724a1ef11aad768b19039a8662d4b WatchSource:0}: Error finding container 91816bd55f430375591c93aba4ab093551d724a1ef11aad768b19039a8662d4b: Status 404 returned error can't find the container with id 91816bd55f430375591c93aba4ab093551d724a1ef11aad768b19039a8662d4b
Jan 12 13:21:26 crc kubenswrapper[4580]: W0112 13:21:26.295365 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3cbf5c7d_9220_43a8_9015_1c52d0c3855f.slice/crio-63323b76e030c14c9c93459ca74872a9333ca94c3200068c7c987e21badd1692 WatchSource:0}: Error finding container 63323b76e030c14c9c93459ca74872a9333ca94c3200068c7c987e21badd1692: Status 404 returned error can't find the container with id 63323b76e030c14c9c93459ca74872a9333ca94c3200068c7c987e21badd1692
Jan 12 13:21:26 crc kubenswrapper[4580]: I0112 13:21:26.690271 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6k6zv"
Jan 12 13:21:26 crc kubenswrapper[4580]: I0112 13:21:26.737412 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/701cbadd-e4f4-4c1d-bd56-e51ed0e75b8a-catalog-content\") pod \"701cbadd-e4f4-4c1d-bd56-e51ed0e75b8a\" (UID: \"701cbadd-e4f4-4c1d-bd56-e51ed0e75b8a\") "
Jan 12 13:21:26 crc kubenswrapper[4580]: I0112 13:21:26.739241 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8dq2\" (UniqueName: \"kubernetes.io/projected/701cbadd-e4f4-4c1d-bd56-e51ed0e75b8a-kube-api-access-j8dq2\") pod \"701cbadd-e4f4-4c1d-bd56-e51ed0e75b8a\" (UID: \"701cbadd-e4f4-4c1d-bd56-e51ed0e75b8a\") "
Jan 12 13:21:26 crc kubenswrapper[4580]: I0112 13:21:26.739622 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/701cbadd-e4f4-4c1d-bd56-e51ed0e75b8a-utilities\") pod \"701cbadd-e4f4-4c1d-bd56-e51ed0e75b8a\" (UID: \"701cbadd-e4f4-4c1d-bd56-e51ed0e75b8a\") "
Jan 12 13:21:26 crc kubenswrapper[4580]: I0112 13:21:26.741025 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/701cbadd-e4f4-4c1d-bd56-e51ed0e75b8a-utilities" (OuterVolumeSpecName: "utilities") pod "701cbadd-e4f4-4c1d-bd56-e51ed0e75b8a" (UID: "701cbadd-e4f4-4c1d-bd56-e51ed0e75b8a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 12 13:21:26 crc kubenswrapper[4580]: I0112 13:21:26.745255 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/701cbadd-e4f4-4c1d-bd56-e51ed0e75b8a-kube-api-access-j8dq2" (OuterVolumeSpecName: "kube-api-access-j8dq2") pod "701cbadd-e4f4-4c1d-bd56-e51ed0e75b8a" (UID: "701cbadd-e4f4-4c1d-bd56-e51ed0e75b8a"). InnerVolumeSpecName "kube-api-access-j8dq2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 13:21:26 crc kubenswrapper[4580]: I0112 13:21:26.747887 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2e9e0fa5-2756-4650-9199-7249ca8a1650","Type":"ContainerStarted","Data":"7d9ef3e35a2f06697b8c428fe22d8d54243ff0fc4a556fbcbfa17703089a0e6b"}
Jan 12 13:21:26 crc kubenswrapper[4580]: I0112 13:21:26.749082 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6868cd5fd5-ct7dn" event={"ID":"db7a06f9-1a77-4a21-ac05-0c73655fa8d0","Type":"ContainerStarted","Data":"5775d1544fd0d7d266e9ba8c60d832e0e10fa84af4c896d39519e2bc458ca743"}
Jan 12 13:21:26 crc kubenswrapper[4580]: I0112 13:21:26.753063 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/701cbadd-e4f4-4c1d-bd56-e51ed0e75b8a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "701cbadd-e4f4-4c1d-bd56-e51ed0e75b8a" (UID: "701cbadd-e4f4-4c1d-bd56-e51ed0e75b8a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 12 13:21:26 crc kubenswrapper[4580]: I0112 13:21:26.754558 4580 generic.go:334] "Generic (PLEG): container finished" podID="9a79b6ea-48d6-4df6-9a7e-dbfe246edc74" containerID="90f5149edfeb6f80aefd3387cd72cce809098bb0aab1765107a48736927a6a28" exitCode=0
Jan 12 13:21:26 crc kubenswrapper[4580]: I0112 13:21:26.754631 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jtmvw" event={"ID":"9a79b6ea-48d6-4df6-9a7e-dbfe246edc74","Type":"ContainerDied","Data":"90f5149edfeb6f80aefd3387cd72cce809098bb0aab1765107a48736927a6a28"}
Jan 12 13:21:26 crc kubenswrapper[4580]: I0112 13:21:26.754676 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jtmvw" event={"ID":"9a79b6ea-48d6-4df6-9a7e-dbfe246edc74","Type":"ContainerStarted","Data":"a781dfbe2243eeb3d30a8ca95dcca0f210af90035333785c8d701096ee6ed374"}
Jan 12 13:21:26 crc kubenswrapper[4580]: I0112 13:21:26.758370 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-mmlfs" event={"ID":"702612c1-966a-4293-b0dc-05901a325794","Type":"ContainerStarted","Data":"91816bd55f430375591c93aba4ab093551d724a1ef11aad768b19039a8662d4b"}
Jan 12 13:21:26 crc kubenswrapper[4580]: I0112 13:21:26.759807 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-s9pd9" event={"ID":"edd41e34-6733-4a77-b99b-3ab0895b124a","Type":"ContainerStarted","Data":"68f70eca28d9d0c3dd9e815553b9c89748d836bb3ed8a8c899c339dca792c27f"}
Jan 12 13:21:26 crc kubenswrapper[4580]: I0112 13:21:26.770845 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"84c1ab4b-8921-4f4a-88dd-adf6e224d62c","Type":"ContainerStarted","Data":"ea4fd92280dbf60298bec5450dfed75dd984bde99b64d228e1e725d11a02e6fa"}
Jan 12 13:21:26 crc kubenswrapper[4580]: I0112 13:21:26.774465 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-gdxqz" event={"ID":"eb0ed855-adbc-497b-9bc5-92330edbb8c8","Type":"ContainerStarted","Data":"766a0e0bcaa4a21deed755b346bffab06e1b15b9f8f13b5b1b9af81e66d8e506"}
Jan 12 13:21:26 crc kubenswrapper[4580]: I0112 13:21:26.774492 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-gdxqz" event={"ID":"eb0ed855-adbc-497b-9bc5-92330edbb8c8","Type":"ContainerStarted","Data":"0ae8fb3c4b1ac8d30be940f0b0bc38c1e4e54fe631d42101a4a9ad03c49d1daf"}
Jan 12 13:21:26 crc kubenswrapper[4580]: I0112 13:21:26.788549 4580 generic.go:334] "Generic (PLEG): container finished" podID="701cbadd-e4f4-4c1d-bd56-e51ed0e75b8a" containerID="241f08f82663f20651ffa14ad492dc2d8fab7f5c65ab7b29b93aba202c05450d" exitCode=0
Jan 12 13:21:26 crc kubenswrapper[4580]: I0112 13:21:26.788638 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6k6zv" event={"ID":"701cbadd-e4f4-4c1d-bd56-e51ed0e75b8a","Type":"ContainerDied","Data":"241f08f82663f20651ffa14ad492dc2d8fab7f5c65ab7b29b93aba202c05450d"}
Jan 12 13:21:26 crc kubenswrapper[4580]: I0112 13:21:26.788681 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6k6zv"
Jan 12 13:21:26 crc kubenswrapper[4580]: I0112 13:21:26.788697 4580 scope.go:117] "RemoveContainer" containerID="241f08f82663f20651ffa14ad492dc2d8fab7f5c65ab7b29b93aba202c05450d"
Jan 12 13:21:26 crc kubenswrapper[4580]: I0112 13:21:26.788682 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6k6zv" event={"ID":"701cbadd-e4f4-4c1d-bd56-e51ed0e75b8a","Type":"ContainerDied","Data":"d2adab61128f3044e831ab44cfc65204c14e9772746ee778be82c41d9ca42e7d"}
Jan 12 13:21:26 crc kubenswrapper[4580]: I0112 13:21:26.790023 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-blrls" event={"ID":"3cbf5c7d-9220-43a8-9015-1c52d0c3855f","Type":"ContainerStarted","Data":"63323b76e030c14c9c93459ca74872a9333ca94c3200068c7c987e21badd1692"}
Jan 12 13:21:26 crc kubenswrapper[4580]: I0112 13:21:26.791349 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-gdxqz" podStartSLOduration=4.791329292 podStartE2EDuration="4.791329292s" podCreationTimestamp="2026-01-12 13:21:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:21:26.787109645 +0000 UTC m=+885.831328335" watchObservedRunningTime="2026-01-12 13:21:26.791329292 +0000 UTC m=+885.835547981"
Jan 12 13:21:26 crc kubenswrapper[4580]: I0112 13:21:26.791693 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hcdf4" event={"ID":"a838d8bb-d607-4309-a666-8da1387631fc","Type":"ContainerStarted","Data":"74c1e9a109071b596d005186baf53e456b93891ceb51948650dc0ba9c9fdd577"}
Jan 12 13:21:26 crc kubenswrapper[4580]: I0112 13:21:26.796702 4580 generic.go:334] "Generic (PLEG): container finished" podID="f480553f-1da5-4142-bb99-429c8cefc6de" containerID="1dd525a229924e49bb8b3f3e5549a74c78e5d8369b4e2dab2c437a10dfcd0184" exitCode=0
Jan 12 13:21:26 crc kubenswrapper[4580]: I0112 13:21:26.796761 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fdbfbc95f-q6qcr" event={"ID":"f480553f-1da5-4142-bb99-429c8cefc6de","Type":"ContainerDied","Data":"1dd525a229924e49bb8b3f3e5549a74c78e5d8369b4e2dab2c437a10dfcd0184"}
Jan 12 13:21:26 crc kubenswrapper[4580]: I0112 13:21:26.799441 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-bb57d5f45-mb7xb" event={"ID":"82d9d66d-ff92-4164-96a9-c82a919cce00","Type":"ContainerStarted","Data":"bf259687b6c05e8e9b9feb7e58f94d294196f27dc3be954572d04e9733b83001"}
Jan 12 13:21:26 crc kubenswrapper[4580]: I0112 13:21:26.809622 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-hcdf4" podStartSLOduration=5.809610455 podStartE2EDuration="5.809610455s" podCreationTimestamp="2026-01-12 13:21:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:21:26.805370862 +0000 UTC m=+885.849589551" watchObservedRunningTime="2026-01-12 13:21:26.809610455 +0000 UTC m=+885.853829145"
Jan 12 13:21:26 crc kubenswrapper[4580]: I0112 13:21:26.842698 4580 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/701cbadd-e4f4-4c1d-bd56-e51ed0e75b8a-utilities\") on node \"crc\" DevicePath \"\""
Jan 12 13:21:26 crc kubenswrapper[4580]: I0112 13:21:26.842729 4580 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/701cbadd-e4f4-4c1d-bd56-e51ed0e75b8a-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 12 13:21:26 crc kubenswrapper[4580]: I0112 13:21:26.842742 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8dq2\" (UniqueName: \"kubernetes.io/projected/701cbadd-e4f4-4c1d-bd56-e51ed0e75b8a-kube-api-access-j8dq2\") on node \"crc\" DevicePath \"\""
Jan 12 13:21:26 crc kubenswrapper[4580]: I0112 13:21:26.843501 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-74f4fd9547-lhpct"]
Jan 12 13:21:27 crc kubenswrapper[4580]: I0112 13:21:26.853210 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6k6zv"]
Jan 12 13:21:27 crc kubenswrapper[4580]: I0112 13:21:26.870849 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6k6zv"]
Jan 12 13:21:27 crc kubenswrapper[4580]: I0112 13:21:26.878734 4580 scope.go:117] "RemoveContainer" containerID="986ef39f1f2f75bb0c1f35d644e3aac4488eb44ef2c5efb0d952fa1212b9a13b"
Jan 12 13:21:27 crc kubenswrapper[4580]: I0112 13:21:26.891264 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6f8cb849-s5ztm"]
Jan 12 13:21:27 crc kubenswrapper[4580]: I0112 13:21:26.903781 4580 scope.go:117] "RemoveContainer" containerID="bb12cd1af856d673149198532f0eb7ad2fb9b59423fbb2904629c32224191ca3"
Jan 12 13:21:27 crc kubenswrapper[4580]: W0112 13:21:26.913368 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd9391a2_339e_4eed_84df_164e7eae3e0c.slice/crio-fccaaecb811e33efc74d8df4a58794f8702b400645eaae6fd72ba961e01fe780 WatchSource:0}: Error finding container fccaaecb811e33efc74d8df4a58794f8702b400645eaae6fd72ba961e01fe780: Status 404 returned error can't find the container with id fccaaecb811e33efc74d8df4a58794f8702b400645eaae6fd72ba961e01fe780
Jan 12 13:21:27 crc kubenswrapper[4580]: I0112 13:21:26.918187 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 12 13:21:27 crc kubenswrapper[4580]: I0112 13:21:26.924395 4580 scope.go:117] "RemoveContainer" containerID="241f08f82663f20651ffa14ad492dc2d8fab7f5c65ab7b29b93aba202c05450d"
Jan 12 13:21:27 crc kubenswrapper[4580]: E0112 13:21:26.925388 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"241f08f82663f20651ffa14ad492dc2d8fab7f5c65ab7b29b93aba202c05450d\": container with ID starting with 241f08f82663f20651ffa14ad492dc2d8fab7f5c65ab7b29b93aba202c05450d not found: ID does not exist" containerID="241f08f82663f20651ffa14ad492dc2d8fab7f5c65ab7b29b93aba202c05450d"
Jan 12 13:21:27 crc kubenswrapper[4580]: I0112 13:21:26.925420 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"241f08f82663f20651ffa14ad492dc2d8fab7f5c65ab7b29b93aba202c05450d"} err="failed to get container status \"241f08f82663f20651ffa14ad492dc2d8fab7f5c65ab7b29b93aba202c05450d\": rpc error: code = NotFound desc = could not find container \"241f08f82663f20651ffa14ad492dc2d8fab7f5c65ab7b29b93aba202c05450d\": container with ID starting with 241f08f82663f20651ffa14ad492dc2d8fab7f5c65ab7b29b93aba202c05450d not found: ID does not exist"
Jan 12 13:21:27 crc kubenswrapper[4580]: I0112 13:21:26.925445 4580 scope.go:117] "RemoveContainer" containerID="986ef39f1f2f75bb0c1f35d644e3aac4488eb44ef2c5efb0d952fa1212b9a13b"
Jan 12 13:21:27 crc kubenswrapper[4580]: E0112 13:21:26.928254 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"986ef39f1f2f75bb0c1f35d644e3aac4488eb44ef2c5efb0d952fa1212b9a13b\": container with ID starting with 986ef39f1f2f75bb0c1f35d644e3aac4488eb44ef2c5efb0d952fa1212b9a13b not found: ID does not exist" containerID="986ef39f1f2f75bb0c1f35d644e3aac4488eb44ef2c5efb0d952fa1212b9a13b"
Jan 12 13:21:27 crc kubenswrapper[4580]: I0112 13:21:26.928293 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"986ef39f1f2f75bb0c1f35d644e3aac4488eb44ef2c5efb0d952fa1212b9a13b"} err="failed to get container status \"986ef39f1f2f75bb0c1f35d644e3aac4488eb44ef2c5efb0d952fa1212b9a13b\": rpc error: code = NotFound desc = could not find container \"986ef39f1f2f75bb0c1f35d644e3aac4488eb44ef2c5efb0d952fa1212b9a13b\": container with ID starting with 986ef39f1f2f75bb0c1f35d644e3aac4488eb44ef2c5efb0d952fa1212b9a13b not found: ID does not exist"
Jan 12 13:21:27 crc kubenswrapper[4580]: I0112 13:21:26.928324 4580 scope.go:117] "RemoveContainer" containerID="bb12cd1af856d673149198532f0eb7ad2fb9b59423fbb2904629c32224191ca3"
Jan 12 13:21:27 crc kubenswrapper[4580]: W0112 13:21:26.931501 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod54193329_eff6_4ec8_874c_722003442682.slice/crio-f939dbb57c77b22656fec2efe06cf9dcdded5425dff637814248baa2e70e20a8 WatchSource:0}: Error finding container f939dbb57c77b22656fec2efe06cf9dcdded5425dff637814248baa2e70e20a8: Status 404 returned error can't find the container with id f939dbb57c77b22656fec2efe06cf9dcdded5425dff637814248baa2e70e20a8
Jan 12 13:21:27 crc kubenswrapper[4580]: E0112 13:21:26.932965 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb12cd1af856d673149198532f0eb7ad2fb9b59423fbb2904629c32224191ca3\": container with ID starting with bb12cd1af856d673149198532f0eb7ad2fb9b59423fbb2904629c32224191ca3 not found: ID does not exist" containerID="bb12cd1af856d673149198532f0eb7ad2fb9b59423fbb2904629c32224191ca3"
Jan 12 13:21:27 crc kubenswrapper[4580]: I0112 13:21:26.932995 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb12cd1af856d673149198532f0eb7ad2fb9b59423fbb2904629c32224191ca3"} err="failed to get container status \"bb12cd1af856d673149198532f0eb7ad2fb9b59423fbb2904629c32224191ca3\": rpc error: code = NotFound desc = could not find container \"bb12cd1af856d673149198532f0eb7ad2fb9b59423fbb2904629c32224191ca3\": container with ID starting with bb12cd1af856d673149198532f0eb7ad2fb9b59423fbb2904629c32224191ca3 not found: ID does not exist"
Jan 12 13:21:27 crc kubenswrapper[4580]: I0112 13:21:27.108961 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fdbfbc95f-q6qcr"
Jan 12 13:21:27 crc kubenswrapper[4580]: I0112 13:21:27.147732 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f480553f-1da5-4142-bb99-429c8cefc6de-dns-swift-storage-0\") pod \"f480553f-1da5-4142-bb99-429c8cefc6de\" (UID: \"f480553f-1da5-4142-bb99-429c8cefc6de\") "
Jan 12 13:21:27 crc kubenswrapper[4580]: I0112 13:21:27.148159 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f480553f-1da5-4142-bb99-429c8cefc6de-dns-svc\") pod \"f480553f-1da5-4142-bb99-429c8cefc6de\" (UID: \"f480553f-1da5-4142-bb99-429c8cefc6de\") "
Jan 12 13:21:27 crc kubenswrapper[4580]: I0112 13:21:27.148353 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f480553f-1da5-4142-bb99-429c8cefc6de-config\") pod \"f480553f-1da5-4142-bb99-429c8cefc6de\" (UID: \"f480553f-1da5-4142-bb99-429c8cefc6de\") "
Jan 12 13:21:27 crc kubenswrapper[4580]: I0112 13:21:27.148440 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f480553f-1da5-4142-bb99-429c8cefc6de-ovsdbserver-nb\") pod \"f480553f-1da5-4142-bb99-429c8cefc6de\" (UID: \"f480553f-1da5-4142-bb99-429c8cefc6de\") "
Jan 12 13:21:27 crc kubenswrapper[4580]: I0112 13:21:27.148490 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f480553f-1da5-4142-bb99-429c8cefc6de-ovsdbserver-sb\") pod \"f480553f-1da5-4142-bb99-429c8cefc6de\" (UID: \"f480553f-1da5-4142-bb99-429c8cefc6de\") "
Jan 12 13:21:27 crc kubenswrapper[4580]: I0112 13:21:27.148535 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxsdk\" (UniqueName: \"kubernetes.io/projected/f480553f-1da5-4142-bb99-429c8cefc6de-kube-api-access-fxsdk\") pod \"f480553f-1da5-4142-bb99-429c8cefc6de\" (UID: \"f480553f-1da5-4142-bb99-429c8cefc6de\") "
Jan 12 13:21:27 crc kubenswrapper[4580]: I0112 13:21:27.156535 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f480553f-1da5-4142-bb99-429c8cefc6de-kube-api-access-fxsdk" (OuterVolumeSpecName: "kube-api-access-fxsdk") pod "f480553f-1da5-4142-bb99-429c8cefc6de" (UID: "f480553f-1da5-4142-bb99-429c8cefc6de"). InnerVolumeSpecName "kube-api-access-fxsdk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 13:21:27 crc kubenswrapper[4580]: I0112 13:21:27.171643 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f480553f-1da5-4142-bb99-429c8cefc6de-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f480553f-1da5-4142-bb99-429c8cefc6de" (UID: "f480553f-1da5-4142-bb99-429c8cefc6de"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:21:27 crc kubenswrapper[4580]: I0112 13:21:27.175095 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f480553f-1da5-4142-bb99-429c8cefc6de-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f480553f-1da5-4142-bb99-429c8cefc6de" (UID: "f480553f-1da5-4142-bb99-429c8cefc6de"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:21:27 crc kubenswrapper[4580]: I0112 13:21:27.179484 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f480553f-1da5-4142-bb99-429c8cefc6de-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f480553f-1da5-4142-bb99-429c8cefc6de" (UID: "f480553f-1da5-4142-bb99-429c8cefc6de"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:21:27 crc kubenswrapper[4580]: I0112 13:21:27.183266 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f480553f-1da5-4142-bb99-429c8cefc6de-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f480553f-1da5-4142-bb99-429c8cefc6de" (UID: "f480553f-1da5-4142-bb99-429c8cefc6de"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:21:27 crc kubenswrapper[4580]: I0112 13:21:27.190143 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f480553f-1da5-4142-bb99-429c8cefc6de-config" (OuterVolumeSpecName: "config") pod "f480553f-1da5-4142-bb99-429c8cefc6de" (UID: "f480553f-1da5-4142-bb99-429c8cefc6de"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:21:27 crc kubenswrapper[4580]: I0112 13:21:27.249771 4580 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f480553f-1da5-4142-bb99-429c8cefc6de-config\") on node \"crc\" DevicePath \"\""
Jan 12 13:21:27 crc kubenswrapper[4580]: I0112 13:21:27.249798 4580 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f480553f-1da5-4142-bb99-429c8cefc6de-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 12 13:21:27 crc kubenswrapper[4580]: I0112 13:21:27.249809 4580 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f480553f-1da5-4142-bb99-429c8cefc6de-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 12 13:21:27 crc kubenswrapper[4580]: I0112 13:21:27.249819 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxsdk\" (UniqueName: \"kubernetes.io/projected/f480553f-1da5-4142-bb99-429c8cefc6de-kube-api-access-fxsdk\") on node \"crc\" DevicePath \"\""
Jan 12 13:21:27 crc kubenswrapper[4580]: I0112 13:21:27.249830 4580 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f480553f-1da5-4142-bb99-429c8cefc6de-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 12 13:21:27 crc kubenswrapper[4580]: I0112 13:21:27.249841 4580 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f480553f-1da5-4142-bb99-429c8cefc6de-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 12 13:21:27 crc kubenswrapper[4580]: I0112 13:21:27.294188 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="701cbadd-e4f4-4c1d-bd56-e51ed0e75b8a" path="/var/lib/kubelet/pods/701cbadd-e4f4-4c1d-bd56-e51ed0e75b8a/volumes"
Jan 12 13:21:27 crc kubenswrapper[4580]: I0112 13:21:27.824201 4580 generic.go:334] "Generic (PLEG): container finished" podID="cd9391a2-339e-4eed-84df-164e7eae3e0c" containerID="e334e4170a5a3971d5c5fba63a4bb0a5f8f99e5e8bdebee6a470cc548a571188" exitCode=0
Jan 12 13:21:27 crc kubenswrapper[4580]: I0112 13:21:27.824257 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6f8cb849-s5ztm" event={"ID":"cd9391a2-339e-4eed-84df-164e7eae3e0c","Type":"ContainerDied","Data":"e334e4170a5a3971d5c5fba63a4bb0a5f8f99e5e8bdebee6a470cc548a571188"}
Jan 12 13:21:27 crc kubenswrapper[4580]: I0112 13:21:27.824300 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6f8cb849-s5ztm" event={"ID":"cd9391a2-339e-4eed-84df-164e7eae3e0c","Type":"ContainerStarted","Data":"fccaaecb811e33efc74d8df4a58794f8702b400645eaae6fd72ba961e01fe780"}
Jan 12 13:21:27 crc kubenswrapper[4580]: I0112 13:21:27.836873 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2e9e0fa5-2756-4650-9199-7249ca8a1650","Type":"ContainerStarted","Data":"29dc2b83b7b222817561a20a3768ce53fcd24403761e91f75faa39c4ebff90f2"}
Jan 12 13:21:27 crc kubenswrapper[4580]: I0112 13:21:27.837139 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2e9e0fa5-2756-4650-9199-7249ca8a1650","Type":"ContainerStarted","Data":"6d65a39297ef8f4d7bbac06d651cc1fda74d4443ce4a1dd63a966d70ea8c29b2"}
Jan 12 13:21:27 crc kubenswrapper[4580]: I0112 13:21:27.837232 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="2e9e0fa5-2756-4650-9199-7249ca8a1650" containerName="glance-log" containerID="cri-o://6d65a39297ef8f4d7bbac06d651cc1fda74d4443ce4a1dd63a966d70ea8c29b2" gracePeriod=30
Jan 12 13:21:27 crc kubenswrapper[4580]: I0112 13:21:27.837444 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="2e9e0fa5-2756-4650-9199-7249ca8a1650" containerName="glance-httpd" containerID="cri-o://29dc2b83b7b222817561a20a3768ce53fcd24403761e91f75faa39c4ebff90f2" gracePeriod=30
Jan 12 13:21:27 crc kubenswrapper[4580]: I0112 13:21:27.900830 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74f4fd9547-lhpct" event={"ID":"fca18973-3724-49c5-b8c4-cf6beb66c288","Type":"ContainerStarted","Data":"223135d4ddfb62c157896dd77e0ec342c29da5eeb2ccf991089e63a95fc86227"}
Jan 12 13:21:27 crc kubenswrapper[4580]: I0112 13:21:27.908942 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"54193329-eff6-4ec8-874c-722003442682","Type":"ContainerStarted","Data":"6da1d2132a3b5bf5c9d1f024e44941b5292ddb852509277faaf801f42ab4220f"}
Jan 12 13:21:27 crc kubenswrapper[4580]: I0112 13:21:27.909006 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"54193329-eff6-4ec8-874c-722003442682","Type":"ContainerStarted","Data":"f939dbb57c77b22656fec2efe06cf9dcdded5425dff637814248baa2e70e20a8"}
Jan 12 13:21:27 crc kubenswrapper[4580]: I0112 13:21:27.915626 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.91561379 podStartE2EDuration="5.91561379s" podCreationTimestamp="2026-01-12 13:21:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:21:27.856293479 +0000 UTC m=+886.900512169" watchObservedRunningTime="2026-01-12 13:21:27.91561379 +0000 UTC m=+886.959832480"
Jan 12 13:21:27 crc kubenswrapper[4580]: I0112 13:21:27.946973 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fdbfbc95f-q6qcr"
Jan 12 13:21:27 crc kubenswrapper[4580]: I0112 13:21:27.947425 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fdbfbc95f-q6qcr" event={"ID":"f480553f-1da5-4142-bb99-429c8cefc6de","Type":"ContainerDied","Data":"fea6ad4b8a64eaa4683029c6d5107e353dcdbeeb879280a35adef178e2d51b58"}
Jan 12 13:21:27 crc kubenswrapper[4580]: I0112 13:21:27.947456 4580 scope.go:117] "RemoveContainer" containerID="1dd525a229924e49bb8b3f3e5549a74c78e5d8369b4e2dab2c437a10dfcd0184"
Jan 12 13:21:28 crc kubenswrapper[4580]: I0112 13:21:28.013159 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fdbfbc95f-q6qcr"]
Jan 12 13:21:28 crc kubenswrapper[4580]: I0112 13:21:28.018189 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5fdbfbc95f-q6qcr"]
Jan 12 13:21:28 crc kubenswrapper[4580]: I0112 13:21:28.542299 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 12 13:21:28 crc kubenswrapper[4580]: I0112 13:21:28.679484 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e9e0fa5-2756-4650-9199-7249ca8a1650-combined-ca-bundle\") pod \"2e9e0fa5-2756-4650-9199-7249ca8a1650\" (UID: \"2e9e0fa5-2756-4650-9199-7249ca8a1650\") "
Jan 12 13:21:28 crc kubenswrapper[4580]: I0112 13:21:28.679766 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e9e0fa5-2756-4650-9199-7249ca8a1650-config-data\") pod \"2e9e0fa5-2756-4650-9199-7249ca8a1650\" (UID: \"2e9e0fa5-2756-4650-9199-7249ca8a1650\") "
Jan 12 13:21:28 crc kubenswrapper[4580]: I0112 13:21:28.679803 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e9e0fa5-2756-4650-9199-7249ca8a1650-internal-tls-certs\") pod \"2e9e0fa5-2756-4650-9199-7249ca8a1650\" (UID: \"2e9e0fa5-2756-4650-9199-7249ca8a1650\") "
Jan 12 13:21:28 crc kubenswrapper[4580]: I0112 13:21:28.679876 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2e9e0fa5-2756-4650-9199-7249ca8a1650-httpd-run\") pod \"2e9e0fa5-2756-4650-9199-7249ca8a1650\" (UID: \"2e9e0fa5-2756-4650-9199-7249ca8a1650\") "
Jan 12 13:21:28 crc kubenswrapper[4580]: I0112 13:21:28.679964 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qcfl\" (UniqueName: \"kubernetes.io/projected/2e9e0fa5-2756-4650-9199-7249ca8a1650-kube-api-access-6qcfl\") pod \"2e9e0fa5-2756-4650-9199-7249ca8a1650\" (UID: \"2e9e0fa5-2756-4650-9199-7249ca8a1650\") "
Jan 12 13:21:28 crc kubenswrapper[4580]: I0112 13:21:28.680031 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e9e0fa5-2756-4650-9199-7249ca8a1650-scripts\") pod \"2e9e0fa5-2756-4650-9199-7249ca8a1650\" (UID: \"2e9e0fa5-2756-4650-9199-7249ca8a1650\") "
Jan 12 13:21:28 crc kubenswrapper[4580]: I0112 13:21:28.680073 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e9e0fa5-2756-4650-9199-7249ca8a1650-logs\") pod \"2e9e0fa5-2756-4650-9199-7249ca8a1650\" (UID: \"2e9e0fa5-2756-4650-9199-7249ca8a1650\") "
Jan 12 13:21:28 crc kubenswrapper[4580]: I0112 13:21:28.680254 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"2e9e0fa5-2756-4650-9199-7249ca8a1650\" (UID: \"2e9e0fa5-2756-4650-9199-7249ca8a1650\") "
Jan 12 13:21:28 crc kubenswrapper[4580]: I0112 13:21:28.681692 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e9e0fa5-2756-4650-9199-7249ca8a1650-logs" (OuterVolumeSpecName: "logs") pod "2e9e0fa5-2756-4650-9199-7249ca8a1650" (UID: "2e9e0fa5-2756-4650-9199-7249ca8a1650"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 12 13:21:28 crc kubenswrapper[4580]: I0112 13:21:28.681817 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e9e0fa5-2756-4650-9199-7249ca8a1650-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "2e9e0fa5-2756-4650-9199-7249ca8a1650" (UID: "2e9e0fa5-2756-4650-9199-7249ca8a1650"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 12 13:21:28 crc kubenswrapper[4580]: I0112 13:21:28.691274 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "2e9e0fa5-2756-4650-9199-7249ca8a1650" (UID: "2e9e0fa5-2756-4650-9199-7249ca8a1650"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 12 13:21:28 crc kubenswrapper[4580]: I0112 13:21:28.691308 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e9e0fa5-2756-4650-9199-7249ca8a1650-kube-api-access-6qcfl" (OuterVolumeSpecName: "kube-api-access-6qcfl") pod "2e9e0fa5-2756-4650-9199-7249ca8a1650" (UID: "2e9e0fa5-2756-4650-9199-7249ca8a1650"). InnerVolumeSpecName "kube-api-access-6qcfl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 13:21:28 crc kubenswrapper[4580]: I0112 13:21:28.694233 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e9e0fa5-2756-4650-9199-7249ca8a1650-scripts" (OuterVolumeSpecName: "scripts") pod "2e9e0fa5-2756-4650-9199-7249ca8a1650" (UID: "2e9e0fa5-2756-4650-9199-7249ca8a1650"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:21:28 crc kubenswrapper[4580]: I0112 13:21:28.709071 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e9e0fa5-2756-4650-9199-7249ca8a1650-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2e9e0fa5-2756-4650-9199-7249ca8a1650" (UID: "2e9e0fa5-2756-4650-9199-7249ca8a1650"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:21:28 crc kubenswrapper[4580]: I0112 13:21:28.731174 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e9e0fa5-2756-4650-9199-7249ca8a1650-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "2e9e0fa5-2756-4650-9199-7249ca8a1650" (UID: "2e9e0fa5-2756-4650-9199-7249ca8a1650"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:21:28 crc kubenswrapper[4580]: I0112 13:21:28.731975 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e9e0fa5-2756-4650-9199-7249ca8a1650-config-data" (OuterVolumeSpecName: "config-data") pod "2e9e0fa5-2756-4650-9199-7249ca8a1650" (UID: "2e9e0fa5-2756-4650-9199-7249ca8a1650"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:21:28 crc kubenswrapper[4580]: I0112 13:21:28.784824 4580 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e9e0fa5-2756-4650-9199-7249ca8a1650-config-data\") on node \"crc\" DevicePath \"\""
Jan 12 13:21:28 crc kubenswrapper[4580]: I0112 13:21:28.784860 4580 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e9e0fa5-2756-4650-9199-7249ca8a1650-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 12 13:21:28 crc kubenswrapper[4580]: I0112 13:21:28.784875 4580 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2e9e0fa5-2756-4650-9199-7249ca8a1650-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 12 13:21:28 crc kubenswrapper[4580]: I0112 13:21:28.784884 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6qcfl\" (UniqueName: \"kubernetes.io/projected/2e9e0fa5-2756-4650-9199-7249ca8a1650-kube-api-access-6qcfl\") on node \"crc\" DevicePath \"\""
Jan 12 13:21:28 crc kubenswrapper[4580]: I0112 13:21:28.784904 4580 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e9e0fa5-2756-4650-9199-7249ca8a1650-scripts\") on node \"crc\" DevicePath \"\""
Jan 12 13:21:28 crc kubenswrapper[4580]: I0112 13:21:28.784914 4580 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e9e0fa5-2756-4650-9199-7249ca8a1650-logs\") on node \"crc\" DevicePath \"\""
Jan 12 13:21:28 crc kubenswrapper[4580]: I0112 13:21:28.784950 4580 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" "
Jan 12 13:21:28 crc kubenswrapper[4580]: I0112 13:21:28.784962 4580 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e9e0fa5-2756-4650-9199-7249ca8a1650-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 12 13:21:28 crc kubenswrapper[4580]: I0112 13:21:28.801804 4580 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc"
Jan 12 13:21:28 crc kubenswrapper[4580]: I0112 13:21:28.887662 4580 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\""
Jan 12 13:21:28 crc kubenswrapper[4580]: I0112 13:21:28.962806 4580 generic.go:334] "Generic (PLEG): container finished" podID="2e9e0fa5-2756-4650-9199-7249ca8a1650" containerID="29dc2b83b7b222817561a20a3768ce53fcd24403761e91f75faa39c4ebff90f2" exitCode=143
Jan 12 13:21:28 crc kubenswrapper[4580]: I0112 13:21:28.962841 4580 generic.go:334] "Generic (PLEG): container finished" podID="2e9e0fa5-2756-4650-9199-7249ca8a1650" containerID="6d65a39297ef8f4d7bbac06d651cc1fda74d4443ce4a1dd63a966d70ea8c29b2" exitCode=143
Jan 12 13:21:28 crc kubenswrapper[4580]: I0112 13:21:28.962882 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2e9e0fa5-2756-4650-9199-7249ca8a1650","Type":"ContainerDied","Data":"29dc2b83b7b222817561a20a3768ce53fcd24403761e91f75faa39c4ebff90f2"}
Jan 12 13:21:28 crc kubenswrapper[4580]: I0112 13:21:28.962928
4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2e9e0fa5-2756-4650-9199-7249ca8a1650","Type":"ContainerDied","Data":"6d65a39297ef8f4d7bbac06d651cc1fda74d4443ce4a1dd63a966d70ea8c29b2"} Jan 12 13:21:28 crc kubenswrapper[4580]: I0112 13:21:28.962939 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2e9e0fa5-2756-4650-9199-7249ca8a1650","Type":"ContainerDied","Data":"7d9ef3e35a2f06697b8c428fe22d8d54243ff0fc4a556fbcbfa17703089a0e6b"} Jan 12 13:21:28 crc kubenswrapper[4580]: I0112 13:21:28.962956 4580 scope.go:117] "RemoveContainer" containerID="29dc2b83b7b222817561a20a3768ce53fcd24403761e91f75faa39c4ebff90f2" Jan 12 13:21:28 crc kubenswrapper[4580]: I0112 13:21:28.963062 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 12 13:21:28 crc kubenswrapper[4580]: I0112 13:21:28.969245 4580 generic.go:334] "Generic (PLEG): container finished" podID="9a79b6ea-48d6-4df6-9a7e-dbfe246edc74" containerID="d4c16316be0419bbe254426d82941cfa3523c3d37dfc7cae073ace47bf297952" exitCode=0 Jan 12 13:21:28 crc kubenswrapper[4580]: I0112 13:21:28.969328 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jtmvw" event={"ID":"9a79b6ea-48d6-4df6-9a7e-dbfe246edc74","Type":"ContainerDied","Data":"d4c16316be0419bbe254426d82941cfa3523c3d37dfc7cae073ace47bf297952"} Jan 12 13:21:28 crc kubenswrapper[4580]: I0112 13:21:28.973599 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"54193329-eff6-4ec8-874c-722003442682","Type":"ContainerStarted","Data":"95baeb6719488da43d38d7f7b507cee4381d1564f59052036fa580746fd5cfb9"} Jan 12 13:21:28 crc kubenswrapper[4580]: I0112 13:21:28.973676 4580 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/glance-default-external-api-0" podUID="54193329-eff6-4ec8-874c-722003442682" containerName="glance-log" containerID="cri-o://6da1d2132a3b5bf5c9d1f024e44941b5292ddb852509277faaf801f42ab4220f" gracePeriod=30 Jan 12 13:21:28 crc kubenswrapper[4580]: I0112 13:21:28.973723 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="54193329-eff6-4ec8-874c-722003442682" containerName="glance-httpd" containerID="cri-o://95baeb6719488da43d38d7f7b507cee4381d1564f59052036fa580746fd5cfb9" gracePeriod=30 Jan 12 13:21:28 crc kubenswrapper[4580]: I0112 13:21:28.984257 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6f8cb849-s5ztm" event={"ID":"cd9391a2-339e-4eed-84df-164e7eae3e0c","Type":"ContainerStarted","Data":"84ee4eecce5f3fd953e6b9c0cdd378e9dcf259bd7fa86362302dbf0ac0c46779"} Jan 12 13:21:28 crc kubenswrapper[4580]: I0112 13:21:28.984880 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6f6f8cb849-s5ztm" Jan 12 13:21:28 crc kubenswrapper[4580]: I0112 13:21:28.996620 4580 scope.go:117] "RemoveContainer" containerID="6d65a39297ef8f4d7bbac06d651cc1fda74d4443ce4a1dd63a966d70ea8c29b2" Jan 12 13:21:29 crc kubenswrapper[4580]: I0112 13:21:29.013751 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.013736664 podStartE2EDuration="7.013736664s" podCreationTimestamp="2026-01-12 13:21:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:21:29.009004223 +0000 UTC m=+888.053222913" watchObservedRunningTime="2026-01-12 13:21:29.013736664 +0000 UTC m=+888.057955353" Jan 12 13:21:29 crc kubenswrapper[4580]: I0112 13:21:29.019088 4580 scope.go:117] "RemoveContainer" 
containerID="29dc2b83b7b222817561a20a3768ce53fcd24403761e91f75faa39c4ebff90f2" Jan 12 13:21:29 crc kubenswrapper[4580]: E0112 13:21:29.019490 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29dc2b83b7b222817561a20a3768ce53fcd24403761e91f75faa39c4ebff90f2\": container with ID starting with 29dc2b83b7b222817561a20a3768ce53fcd24403761e91f75faa39c4ebff90f2 not found: ID does not exist" containerID="29dc2b83b7b222817561a20a3768ce53fcd24403761e91f75faa39c4ebff90f2" Jan 12 13:21:29 crc kubenswrapper[4580]: I0112 13:21:29.019521 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29dc2b83b7b222817561a20a3768ce53fcd24403761e91f75faa39c4ebff90f2"} err="failed to get container status \"29dc2b83b7b222817561a20a3768ce53fcd24403761e91f75faa39c4ebff90f2\": rpc error: code = NotFound desc = could not find container \"29dc2b83b7b222817561a20a3768ce53fcd24403761e91f75faa39c4ebff90f2\": container with ID starting with 29dc2b83b7b222817561a20a3768ce53fcd24403761e91f75faa39c4ebff90f2 not found: ID does not exist" Jan 12 13:21:29 crc kubenswrapper[4580]: I0112 13:21:29.019542 4580 scope.go:117] "RemoveContainer" containerID="6d65a39297ef8f4d7bbac06d651cc1fda74d4443ce4a1dd63a966d70ea8c29b2" Jan 12 13:21:29 crc kubenswrapper[4580]: E0112 13:21:29.019836 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d65a39297ef8f4d7bbac06d651cc1fda74d4443ce4a1dd63a966d70ea8c29b2\": container with ID starting with 6d65a39297ef8f4d7bbac06d651cc1fda74d4443ce4a1dd63a966d70ea8c29b2 not found: ID does not exist" containerID="6d65a39297ef8f4d7bbac06d651cc1fda74d4443ce4a1dd63a966d70ea8c29b2" Jan 12 13:21:29 crc kubenswrapper[4580]: I0112 13:21:29.019856 4580 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6d65a39297ef8f4d7bbac06d651cc1fda74d4443ce4a1dd63a966d70ea8c29b2"} err="failed to get container status \"6d65a39297ef8f4d7bbac06d651cc1fda74d4443ce4a1dd63a966d70ea8c29b2\": rpc error: code = NotFound desc = could not find container \"6d65a39297ef8f4d7bbac06d651cc1fda74d4443ce4a1dd63a966d70ea8c29b2\": container with ID starting with 6d65a39297ef8f4d7bbac06d651cc1fda74d4443ce4a1dd63a966d70ea8c29b2 not found: ID does not exist" Jan 12 13:21:29 crc kubenswrapper[4580]: I0112 13:21:29.019868 4580 scope.go:117] "RemoveContainer" containerID="29dc2b83b7b222817561a20a3768ce53fcd24403761e91f75faa39c4ebff90f2" Jan 12 13:21:29 crc kubenswrapper[4580]: I0112 13:21:29.020245 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29dc2b83b7b222817561a20a3768ce53fcd24403761e91f75faa39c4ebff90f2"} err="failed to get container status \"29dc2b83b7b222817561a20a3768ce53fcd24403761e91f75faa39c4ebff90f2\": rpc error: code = NotFound desc = could not find container \"29dc2b83b7b222817561a20a3768ce53fcd24403761e91f75faa39c4ebff90f2\": container with ID starting with 29dc2b83b7b222817561a20a3768ce53fcd24403761e91f75faa39c4ebff90f2 not found: ID does not exist" Jan 12 13:21:29 crc kubenswrapper[4580]: I0112 13:21:29.020264 4580 scope.go:117] "RemoveContainer" containerID="6d65a39297ef8f4d7bbac06d651cc1fda74d4443ce4a1dd63a966d70ea8c29b2" Jan 12 13:21:29 crc kubenswrapper[4580]: I0112 13:21:29.020478 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d65a39297ef8f4d7bbac06d651cc1fda74d4443ce4a1dd63a966d70ea8c29b2"} err="failed to get container status \"6d65a39297ef8f4d7bbac06d651cc1fda74d4443ce4a1dd63a966d70ea8c29b2\": rpc error: code = NotFound desc = could not find container \"6d65a39297ef8f4d7bbac06d651cc1fda74d4443ce4a1dd63a966d70ea8c29b2\": container with ID starting with 6d65a39297ef8f4d7bbac06d651cc1fda74d4443ce4a1dd63a966d70ea8c29b2 not found: ID does not 
exist" Jan 12 13:21:29 crc kubenswrapper[4580]: I0112 13:21:29.034943 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6f6f8cb849-s5ztm" podStartSLOduration=7.034911991 podStartE2EDuration="7.034911991s" podCreationTimestamp="2026-01-12 13:21:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:21:29.033521877 +0000 UTC m=+888.077740566" watchObservedRunningTime="2026-01-12 13:21:29.034911991 +0000 UTC m=+888.079130681" Jan 12 13:21:29 crc kubenswrapper[4580]: I0112 13:21:29.058598 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 12 13:21:29 crc kubenswrapper[4580]: I0112 13:21:29.074043 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 12 13:21:29 crc kubenswrapper[4580]: I0112 13:21:29.080143 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 12 13:21:29 crc kubenswrapper[4580]: E0112 13:21:29.080736 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f480553f-1da5-4142-bb99-429c8cefc6de" containerName="init" Jan 12 13:21:29 crc kubenswrapper[4580]: I0112 13:21:29.080771 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="f480553f-1da5-4142-bb99-429c8cefc6de" containerName="init" Jan 12 13:21:29 crc kubenswrapper[4580]: E0112 13:21:29.080785 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e9e0fa5-2756-4650-9199-7249ca8a1650" containerName="glance-httpd" Jan 12 13:21:29 crc kubenswrapper[4580]: I0112 13:21:29.080792 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e9e0fa5-2756-4650-9199-7249ca8a1650" containerName="glance-httpd" Jan 12 13:21:29 crc kubenswrapper[4580]: E0112 13:21:29.080852 4580 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="701cbadd-e4f4-4c1d-bd56-e51ed0e75b8a" containerName="extract-content" Jan 12 13:21:29 crc kubenswrapper[4580]: I0112 13:21:29.080859 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="701cbadd-e4f4-4c1d-bd56-e51ed0e75b8a" containerName="extract-content" Jan 12 13:21:29 crc kubenswrapper[4580]: E0112 13:21:29.080871 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="701cbadd-e4f4-4c1d-bd56-e51ed0e75b8a" containerName="extract-utilities" Jan 12 13:21:29 crc kubenswrapper[4580]: I0112 13:21:29.080877 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="701cbadd-e4f4-4c1d-bd56-e51ed0e75b8a" containerName="extract-utilities" Jan 12 13:21:29 crc kubenswrapper[4580]: E0112 13:21:29.080911 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e9e0fa5-2756-4650-9199-7249ca8a1650" containerName="glance-log" Jan 12 13:21:29 crc kubenswrapper[4580]: I0112 13:21:29.080916 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e9e0fa5-2756-4650-9199-7249ca8a1650" containerName="glance-log" Jan 12 13:21:29 crc kubenswrapper[4580]: E0112 13:21:29.080925 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="701cbadd-e4f4-4c1d-bd56-e51ed0e75b8a" containerName="registry-server" Jan 12 13:21:29 crc kubenswrapper[4580]: I0112 13:21:29.080930 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="701cbadd-e4f4-4c1d-bd56-e51ed0e75b8a" containerName="registry-server" Jan 12 13:21:29 crc kubenswrapper[4580]: I0112 13:21:29.081097 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="701cbadd-e4f4-4c1d-bd56-e51ed0e75b8a" containerName="registry-server" Jan 12 13:21:29 crc kubenswrapper[4580]: I0112 13:21:29.081125 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="f480553f-1da5-4142-bb99-429c8cefc6de" containerName="init" Jan 12 13:21:29 crc kubenswrapper[4580]: I0112 13:21:29.081132 4580 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="2e9e0fa5-2756-4650-9199-7249ca8a1650" containerName="glance-httpd" Jan 12 13:21:29 crc kubenswrapper[4580]: I0112 13:21:29.081149 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e9e0fa5-2756-4650-9199-7249ca8a1650" containerName="glance-log" Jan 12 13:21:29 crc kubenswrapper[4580]: I0112 13:21:29.082111 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 12 13:21:29 crc kubenswrapper[4580]: I0112 13:21:29.084188 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 12 13:21:29 crc kubenswrapper[4580]: I0112 13:21:29.085077 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 12 13:21:29 crc kubenswrapper[4580]: I0112 13:21:29.087220 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 12 13:21:29 crc kubenswrapper[4580]: I0112 13:21:29.192302 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2a68b6dc-1793-49ee-b68a-ded144ce21d9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2a68b6dc-1793-49ee-b68a-ded144ce21d9\") " pod="openstack/glance-default-internal-api-0" Jan 12 13:21:29 crc kubenswrapper[4580]: I0112 13:21:29.192439 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a68b6dc-1793-49ee-b68a-ded144ce21d9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2a68b6dc-1793-49ee-b68a-ded144ce21d9\") " pod="openstack/glance-default-internal-api-0" Jan 12 13:21:29 crc kubenswrapper[4580]: I0112 13:21:29.192596 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgmsn\" (UniqueName: 
\"kubernetes.io/projected/2a68b6dc-1793-49ee-b68a-ded144ce21d9-kube-api-access-jgmsn\") pod \"glance-default-internal-api-0\" (UID: \"2a68b6dc-1793-49ee-b68a-ded144ce21d9\") " pod="openstack/glance-default-internal-api-0" Jan 12 13:21:29 crc kubenswrapper[4580]: I0112 13:21:29.192646 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a68b6dc-1793-49ee-b68a-ded144ce21d9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2a68b6dc-1793-49ee-b68a-ded144ce21d9\") " pod="openstack/glance-default-internal-api-0" Jan 12 13:21:29 crc kubenswrapper[4580]: I0112 13:21:29.192713 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a68b6dc-1793-49ee-b68a-ded144ce21d9-logs\") pod \"glance-default-internal-api-0\" (UID: \"2a68b6dc-1793-49ee-b68a-ded144ce21d9\") " pod="openstack/glance-default-internal-api-0" Jan 12 13:21:29 crc kubenswrapper[4580]: I0112 13:21:29.192739 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a68b6dc-1793-49ee-b68a-ded144ce21d9-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"2a68b6dc-1793-49ee-b68a-ded144ce21d9\") " pod="openstack/glance-default-internal-api-0" Jan 12 13:21:29 crc kubenswrapper[4580]: I0112 13:21:29.192786 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"2a68b6dc-1793-49ee-b68a-ded144ce21d9\") " pod="openstack/glance-default-internal-api-0" Jan 12 13:21:29 crc kubenswrapper[4580]: I0112 13:21:29.192838 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/2a68b6dc-1793-49ee-b68a-ded144ce21d9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"2a68b6dc-1793-49ee-b68a-ded144ce21d9\") " pod="openstack/glance-default-internal-api-0" Jan 12 13:21:29 crc kubenswrapper[4580]: I0112 13:21:29.295072 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgmsn\" (UniqueName: \"kubernetes.io/projected/2a68b6dc-1793-49ee-b68a-ded144ce21d9-kube-api-access-jgmsn\") pod \"glance-default-internal-api-0\" (UID: \"2a68b6dc-1793-49ee-b68a-ded144ce21d9\") " pod="openstack/glance-default-internal-api-0" Jan 12 13:21:29 crc kubenswrapper[4580]: I0112 13:21:29.295131 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a68b6dc-1793-49ee-b68a-ded144ce21d9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2a68b6dc-1793-49ee-b68a-ded144ce21d9\") " pod="openstack/glance-default-internal-api-0" Jan 12 13:21:29 crc kubenswrapper[4580]: I0112 13:21:29.295161 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a68b6dc-1793-49ee-b68a-ded144ce21d9-logs\") pod \"glance-default-internal-api-0\" (UID: \"2a68b6dc-1793-49ee-b68a-ded144ce21d9\") " pod="openstack/glance-default-internal-api-0" Jan 12 13:21:29 crc kubenswrapper[4580]: I0112 13:21:29.295182 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a68b6dc-1793-49ee-b68a-ded144ce21d9-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"2a68b6dc-1793-49ee-b68a-ded144ce21d9\") " pod="openstack/glance-default-internal-api-0" Jan 12 13:21:29 crc kubenswrapper[4580]: I0112 13:21:29.295206 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"2a68b6dc-1793-49ee-b68a-ded144ce21d9\") " pod="openstack/glance-default-internal-api-0" Jan 12 13:21:29 crc kubenswrapper[4580]: I0112 13:21:29.295242 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a68b6dc-1793-49ee-b68a-ded144ce21d9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"2a68b6dc-1793-49ee-b68a-ded144ce21d9\") " pod="openstack/glance-default-internal-api-0" Jan 12 13:21:29 crc kubenswrapper[4580]: I0112 13:21:29.295291 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2a68b6dc-1793-49ee-b68a-ded144ce21d9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2a68b6dc-1793-49ee-b68a-ded144ce21d9\") " pod="openstack/glance-default-internal-api-0" Jan 12 13:21:29 crc kubenswrapper[4580]: I0112 13:21:29.295323 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a68b6dc-1793-49ee-b68a-ded144ce21d9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2a68b6dc-1793-49ee-b68a-ded144ce21d9\") " pod="openstack/glance-default-internal-api-0" Jan 12 13:21:29 crc kubenswrapper[4580]: I0112 13:21:29.295811 4580 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"2a68b6dc-1793-49ee-b68a-ded144ce21d9\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Jan 12 13:21:29 crc kubenswrapper[4580]: I0112 13:21:29.295902 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a68b6dc-1793-49ee-b68a-ded144ce21d9-logs\") pod 
\"glance-default-internal-api-0\" (UID: \"2a68b6dc-1793-49ee-b68a-ded144ce21d9\") " pod="openstack/glance-default-internal-api-0" Jan 12 13:21:29 crc kubenswrapper[4580]: I0112 13:21:29.296161 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2a68b6dc-1793-49ee-b68a-ded144ce21d9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2a68b6dc-1793-49ee-b68a-ded144ce21d9\") " pod="openstack/glance-default-internal-api-0" Jan 12 13:21:29 crc kubenswrapper[4580]: I0112 13:21:29.299625 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a68b6dc-1793-49ee-b68a-ded144ce21d9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2a68b6dc-1793-49ee-b68a-ded144ce21d9\") " pod="openstack/glance-default-internal-api-0" Jan 12 13:21:29 crc kubenswrapper[4580]: I0112 13:21:29.303855 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e9e0fa5-2756-4650-9199-7249ca8a1650" path="/var/lib/kubelet/pods/2e9e0fa5-2756-4650-9199-7249ca8a1650/volumes" Jan 12 13:21:29 crc kubenswrapper[4580]: I0112 13:21:29.303871 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a68b6dc-1793-49ee-b68a-ded144ce21d9-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"2a68b6dc-1793-49ee-b68a-ded144ce21d9\") " pod="openstack/glance-default-internal-api-0" Jan 12 13:21:29 crc kubenswrapper[4580]: I0112 13:21:29.304645 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f480553f-1da5-4142-bb99-429c8cefc6de" path="/var/lib/kubelet/pods/f480553f-1da5-4142-bb99-429c8cefc6de/volumes" Jan 12 13:21:29 crc kubenswrapper[4580]: I0112 13:21:29.305003 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2a68b6dc-1793-49ee-b68a-ded144ce21d9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2a68b6dc-1793-49ee-b68a-ded144ce21d9\") " pod="openstack/glance-default-internal-api-0" Jan 12 13:21:29 crc kubenswrapper[4580]: I0112 13:21:29.305837 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a68b6dc-1793-49ee-b68a-ded144ce21d9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"2a68b6dc-1793-49ee-b68a-ded144ce21d9\") " pod="openstack/glance-default-internal-api-0" Jan 12 13:21:29 crc kubenswrapper[4580]: I0112 13:21:29.309798 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgmsn\" (UniqueName: \"kubernetes.io/projected/2a68b6dc-1793-49ee-b68a-ded144ce21d9-kube-api-access-jgmsn\") pod \"glance-default-internal-api-0\" (UID: \"2a68b6dc-1793-49ee-b68a-ded144ce21d9\") " pod="openstack/glance-default-internal-api-0" Jan 12 13:21:29 crc kubenswrapper[4580]: I0112 13:21:29.316373 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"2a68b6dc-1793-49ee-b68a-ded144ce21d9\") " pod="openstack/glance-default-internal-api-0" Jan 12 13:21:29 crc kubenswrapper[4580]: I0112 13:21:29.397154 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 12 13:21:29 crc kubenswrapper[4580]: I0112 13:21:29.995388 4580 generic.go:334] "Generic (PLEG): container finished" podID="54193329-eff6-4ec8-874c-722003442682" containerID="95baeb6719488da43d38d7f7b507cee4381d1564f59052036fa580746fd5cfb9" exitCode=0 Jan 12 13:21:29 crc kubenswrapper[4580]: I0112 13:21:29.995426 4580 generic.go:334] "Generic (PLEG): container finished" podID="54193329-eff6-4ec8-874c-722003442682" containerID="6da1d2132a3b5bf5c9d1f024e44941b5292ddb852509277faaf801f42ab4220f" exitCode=143 Jan 12 13:21:29 crc kubenswrapper[4580]: I0112 13:21:29.995566 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"54193329-eff6-4ec8-874c-722003442682","Type":"ContainerDied","Data":"95baeb6719488da43d38d7f7b507cee4381d1564f59052036fa580746fd5cfb9"} Jan 12 13:21:29 crc kubenswrapper[4580]: I0112 13:21:29.995594 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"54193329-eff6-4ec8-874c-722003442682","Type":"ContainerDied","Data":"6da1d2132a3b5bf5c9d1f024e44941b5292ddb852509277faaf801f42ab4220f"} Jan 12 13:21:30 crc kubenswrapper[4580]: I0112 13:21:30.010237 4580 generic.go:334] "Generic (PLEG): container finished" podID="a838d8bb-d607-4309-a666-8da1387631fc" containerID="74c1e9a109071b596d005186baf53e456b93891ceb51948650dc0ba9c9fdd577" exitCode=0 Jan 12 13:21:30 crc kubenswrapper[4580]: I0112 13:21:30.010353 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hcdf4" event={"ID":"a838d8bb-d607-4309-a666-8da1387631fc","Type":"ContainerDied","Data":"74c1e9a109071b596d005186baf53e456b93891ceb51948650dc0ba9c9fdd577"} Jan 12 13:21:31 crc kubenswrapper[4580]: I0112 13:21:31.168458 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-bb57d5f45-mb7xb"] Jan 12 13:21:31 crc kubenswrapper[4580]: I0112 13:21:31.221092 
4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-54b765ff94-66rkz"] Jan 12 13:21:31 crc kubenswrapper[4580]: I0112 13:21:31.222308 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-54b765ff94-66rkz" Jan 12 13:21:31 crc kubenswrapper[4580]: I0112 13:21:31.224370 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 12 13:21:31 crc kubenswrapper[4580]: I0112 13:21:31.232755 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-54b765ff94-66rkz"] Jan 12 13:21:31 crc kubenswrapper[4580]: I0112 13:21:31.295179 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-74f4fd9547-lhpct"] Jan 12 13:21:31 crc kubenswrapper[4580]: I0112 13:21:31.304737 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 12 13:21:31 crc kubenswrapper[4580]: I0112 13:21:31.315861 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-8699b457dd-z2fkt"] Jan 12 13:21:31 crc kubenswrapper[4580]: I0112 13:21:31.317296 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-8699b457dd-z2fkt"
Jan 12 13:21:31 crc kubenswrapper[4580]: I0112 13:21:31.334261 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-8699b457dd-z2fkt"]
Jan 12 13:21:31 crc kubenswrapper[4580]: I0112 13:21:31.342529 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2-horizon-secret-key\") pod \"horizon-54b765ff94-66rkz\" (UID: \"11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2\") " pod="openstack/horizon-54b765ff94-66rkz"
Jan 12 13:21:31 crc kubenswrapper[4580]: I0112 13:21:31.342569 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2-config-data\") pod \"horizon-54b765ff94-66rkz\" (UID: \"11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2\") " pod="openstack/horizon-54b765ff94-66rkz"
Jan 12 13:21:31 crc kubenswrapper[4580]: I0112 13:21:31.342594 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2-scripts\") pod \"horizon-54b765ff94-66rkz\" (UID: \"11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2\") " pod="openstack/horizon-54b765ff94-66rkz"
Jan 12 13:21:31 crc kubenswrapper[4580]: I0112 13:21:31.342628 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2-horizon-tls-certs\") pod \"horizon-54b765ff94-66rkz\" (UID: \"11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2\") " pod="openstack/horizon-54b765ff94-66rkz"
Jan 12 13:21:31 crc kubenswrapper[4580]: I0112 13:21:31.342646 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2-logs\") pod \"horizon-54b765ff94-66rkz\" (UID: \"11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2\") " pod="openstack/horizon-54b765ff94-66rkz"
Jan 12 13:21:31 crc kubenswrapper[4580]: I0112 13:21:31.342679 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2-combined-ca-bundle\") pod \"horizon-54b765ff94-66rkz\" (UID: \"11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2\") " pod="openstack/horizon-54b765ff94-66rkz"
Jan 12 13:21:31 crc kubenswrapper[4580]: I0112 13:21:31.342713 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r95ks\" (UniqueName: \"kubernetes.io/projected/11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2-kube-api-access-r95ks\") pod \"horizon-54b765ff94-66rkz\" (UID: \"11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2\") " pod="openstack/horizon-54b765ff94-66rkz"
Jan 12 13:21:31 crc kubenswrapper[4580]: I0112 13:21:31.368743 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cj96v"]
Jan 12 13:21:31 crc kubenswrapper[4580]: I0112 13:21:31.370217 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cj96v"
Jan 12 13:21:31 crc kubenswrapper[4580]: I0112 13:21:31.444051 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2-horizon-tls-certs\") pod \"horizon-54b765ff94-66rkz\" (UID: \"11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2\") " pod="openstack/horizon-54b765ff94-66rkz"
Jan 12 13:21:31 crc kubenswrapper[4580]: I0112 13:21:31.444123 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2-logs\") pod \"horizon-54b765ff94-66rkz\" (UID: \"11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2\") " pod="openstack/horizon-54b765ff94-66rkz"
Jan 12 13:21:31 crc kubenswrapper[4580]: I0112 13:21:31.444156 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/92d059e4-ff2b-4ecc-ae14-6367d54e720f-horizon-tls-certs\") pod \"horizon-8699b457dd-z2fkt\" (UID: \"92d059e4-ff2b-4ecc-ae14-6367d54e720f\") " pod="openstack/horizon-8699b457dd-z2fkt"
Jan 12 13:21:31 crc kubenswrapper[4580]: I0112 13:21:31.444202 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/92d059e4-ff2b-4ecc-ae14-6367d54e720f-config-data\") pod \"horizon-8699b457dd-z2fkt\" (UID: \"92d059e4-ff2b-4ecc-ae14-6367d54e720f\") " pod="openstack/horizon-8699b457dd-z2fkt"
Jan 12 13:21:31 crc kubenswrapper[4580]: I0112 13:21:31.444232 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2-combined-ca-bundle\") pod \"horizon-54b765ff94-66rkz\" (UID: \"11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2\") " pod="openstack/horizon-54b765ff94-66rkz"
Jan 12 13:21:31 crc kubenswrapper[4580]: I0112 13:21:31.444256 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/92d059e4-ff2b-4ecc-ae14-6367d54e720f-scripts\") pod \"horizon-8699b457dd-z2fkt\" (UID: \"92d059e4-ff2b-4ecc-ae14-6367d54e720f\") " pod="openstack/horizon-8699b457dd-z2fkt"
Jan 12 13:21:31 crc kubenswrapper[4580]: I0112 13:21:31.444307 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r95ks\" (UniqueName: \"kubernetes.io/projected/11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2-kube-api-access-r95ks\") pod \"horizon-54b765ff94-66rkz\" (UID: \"11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2\") " pod="openstack/horizon-54b765ff94-66rkz"
Jan 12 13:21:31 crc kubenswrapper[4580]: I0112 13:21:31.444331 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92d059e4-ff2b-4ecc-ae14-6367d54e720f-combined-ca-bundle\") pod \"horizon-8699b457dd-z2fkt\" (UID: \"92d059e4-ff2b-4ecc-ae14-6367d54e720f\") " pod="openstack/horizon-8699b457dd-z2fkt"
Jan 12 13:21:31 crc kubenswrapper[4580]: I0112 13:21:31.444399 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fdxc\" (UniqueName: \"kubernetes.io/projected/92d059e4-ff2b-4ecc-ae14-6367d54e720f-kube-api-access-8fdxc\") pod \"horizon-8699b457dd-z2fkt\" (UID: \"92d059e4-ff2b-4ecc-ae14-6367d54e720f\") " pod="openstack/horizon-8699b457dd-z2fkt"
Jan 12 13:21:31 crc kubenswrapper[4580]: I0112 13:21:31.444420 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2-horizon-secret-key\") pod \"horizon-54b765ff94-66rkz\" (UID: \"11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2\") " pod="openstack/horizon-54b765ff94-66rkz"
Jan 12 13:21:31 crc kubenswrapper[4580]: I0112 13:21:31.444442 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92d059e4-ff2b-4ecc-ae14-6367d54e720f-logs\") pod \"horizon-8699b457dd-z2fkt\" (UID: \"92d059e4-ff2b-4ecc-ae14-6367d54e720f\") " pod="openstack/horizon-8699b457dd-z2fkt"
Jan 12 13:21:31 crc kubenswrapper[4580]: I0112 13:21:31.444472 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2-config-data\") pod \"horizon-54b765ff94-66rkz\" (UID: \"11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2\") " pod="openstack/horizon-54b765ff94-66rkz"
Jan 12 13:21:31 crc kubenswrapper[4580]: I0112 13:21:31.444499 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2-scripts\") pod \"horizon-54b765ff94-66rkz\" (UID: \"11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2\") " pod="openstack/horizon-54b765ff94-66rkz"
Jan 12 13:21:31 crc kubenswrapper[4580]: I0112 13:21:31.444517 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/92d059e4-ff2b-4ecc-ae14-6367d54e720f-horizon-secret-key\") pod \"horizon-8699b457dd-z2fkt\" (UID: \"92d059e4-ff2b-4ecc-ae14-6367d54e720f\") " pod="openstack/horizon-8699b457dd-z2fkt"
Jan 12 13:21:31 crc kubenswrapper[4580]: I0112 13:21:31.451036 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2-horizon-secret-key\") pod \"horizon-54b765ff94-66rkz\" (UID: \"11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2\") " pod="openstack/horizon-54b765ff94-66rkz"
Jan 12 13:21:31 crc kubenswrapper[4580]: I0112 13:21:31.451578 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2-scripts\") pod \"horizon-54b765ff94-66rkz\" (UID: \"11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2\") " pod="openstack/horizon-54b765ff94-66rkz"
Jan 12 13:21:31 crc kubenswrapper[4580]: I0112 13:21:31.451650 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2-horizon-tls-certs\") pod \"horizon-54b765ff94-66rkz\" (UID: \"11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2\") " pod="openstack/horizon-54b765ff94-66rkz"
Jan 12 13:21:31 crc kubenswrapper[4580]: I0112 13:21:31.451663 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2-logs\") pod \"horizon-54b765ff94-66rkz\" (UID: \"11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2\") " pod="openstack/horizon-54b765ff94-66rkz"
Jan 12 13:21:31 crc kubenswrapper[4580]: I0112 13:21:31.452003 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2-combined-ca-bundle\") pod \"horizon-54b765ff94-66rkz\" (UID: \"11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2\") " pod="openstack/horizon-54b765ff94-66rkz"
Jan 12 13:21:31 crc kubenswrapper[4580]: I0112 13:21:31.459520 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2-config-data\") pod \"horizon-54b765ff94-66rkz\" (UID: \"11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2\") " pod="openstack/horizon-54b765ff94-66rkz"
Jan 12 13:21:31 crc kubenswrapper[4580]: I0112 13:21:31.472319 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r95ks\" (UniqueName: \"kubernetes.io/projected/11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2-kube-api-access-r95ks\") pod \"horizon-54b765ff94-66rkz\" (UID: \"11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2\") " pod="openstack/horizon-54b765ff94-66rkz"
Jan 12 13:21:31 crc kubenswrapper[4580]: I0112 13:21:31.480349 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cj96v"]
Jan 12 13:21:31 crc kubenswrapper[4580]: I0112 13:21:31.542129 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-54b765ff94-66rkz"
Jan 12 13:21:31 crc kubenswrapper[4580]: I0112 13:21:31.546002 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11060a9d-34a1-4ac1-baa8-a478351504f3-catalog-content\") pod \"certified-operators-cj96v\" (UID: \"11060a9d-34a1-4ac1-baa8-a478351504f3\") " pod="openshift-marketplace/certified-operators-cj96v"
Jan 12 13:21:31 crc kubenswrapper[4580]: I0112 13:21:31.546211 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fdxc\" (UniqueName: \"kubernetes.io/projected/92d059e4-ff2b-4ecc-ae14-6367d54e720f-kube-api-access-8fdxc\") pod \"horizon-8699b457dd-z2fkt\" (UID: \"92d059e4-ff2b-4ecc-ae14-6367d54e720f\") " pod="openstack/horizon-8699b457dd-z2fkt"
Jan 12 13:21:31 crc kubenswrapper[4580]: I0112 13:21:31.546289 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92d059e4-ff2b-4ecc-ae14-6367d54e720f-logs\") pod \"horizon-8699b457dd-z2fkt\" (UID: \"92d059e4-ff2b-4ecc-ae14-6367d54e720f\") " pod="openstack/horizon-8699b457dd-z2fkt"
Jan 12 13:21:31 crc kubenswrapper[4580]: I0112 13:21:31.546365 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11060a9d-34a1-4ac1-baa8-a478351504f3-utilities\") pod \"certified-operators-cj96v\" (UID: \"11060a9d-34a1-4ac1-baa8-a478351504f3\") " pod="openshift-marketplace/certified-operators-cj96v"
Jan 12 13:21:31 crc kubenswrapper[4580]: I0112 13:21:31.546450 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/92d059e4-ff2b-4ecc-ae14-6367d54e720f-horizon-secret-key\") pod \"horizon-8699b457dd-z2fkt\" (UID: \"92d059e4-ff2b-4ecc-ae14-6367d54e720f\") " pod="openstack/horizon-8699b457dd-z2fkt"
Jan 12 13:21:31 crc kubenswrapper[4580]: I0112 13:21:31.546565 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/92d059e4-ff2b-4ecc-ae14-6367d54e720f-horizon-tls-certs\") pod \"horizon-8699b457dd-z2fkt\" (UID: \"92d059e4-ff2b-4ecc-ae14-6367d54e720f\") " pod="openstack/horizon-8699b457dd-z2fkt"
Jan 12 13:21:31 crc kubenswrapper[4580]: I0112 13:21:31.546624 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dl5p7\" (UniqueName: \"kubernetes.io/projected/11060a9d-34a1-4ac1-baa8-a478351504f3-kube-api-access-dl5p7\") pod \"certified-operators-cj96v\" (UID: \"11060a9d-34a1-4ac1-baa8-a478351504f3\") " pod="openshift-marketplace/certified-operators-cj96v"
Jan 12 13:21:31 crc kubenswrapper[4580]: I0112 13:21:31.546705 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/92d059e4-ff2b-4ecc-ae14-6367d54e720f-config-data\") pod \"horizon-8699b457dd-z2fkt\" (UID: \"92d059e4-ff2b-4ecc-ae14-6367d54e720f\") " pod="openstack/horizon-8699b457dd-z2fkt"
Jan 12 13:21:31 crc kubenswrapper[4580]: I0112 13:21:31.546788 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/92d059e4-ff2b-4ecc-ae14-6367d54e720f-scripts\") pod \"horizon-8699b457dd-z2fkt\" (UID: \"92d059e4-ff2b-4ecc-ae14-6367d54e720f\") " pod="openstack/horizon-8699b457dd-z2fkt"
Jan 12 13:21:31 crc kubenswrapper[4580]: I0112 13:21:31.546857 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92d059e4-ff2b-4ecc-ae14-6367d54e720f-logs\") pod \"horizon-8699b457dd-z2fkt\" (UID: \"92d059e4-ff2b-4ecc-ae14-6367d54e720f\") " pod="openstack/horizon-8699b457dd-z2fkt"
Jan 12 13:21:31 crc kubenswrapper[4580]: I0112 13:21:31.546957 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92d059e4-ff2b-4ecc-ae14-6367d54e720f-combined-ca-bundle\") pod \"horizon-8699b457dd-z2fkt\" (UID: \"92d059e4-ff2b-4ecc-ae14-6367d54e720f\") " pod="openstack/horizon-8699b457dd-z2fkt"
Jan 12 13:21:31 crc kubenswrapper[4580]: I0112 13:21:31.547870 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/92d059e4-ff2b-4ecc-ae14-6367d54e720f-scripts\") pod \"horizon-8699b457dd-z2fkt\" (UID: \"92d059e4-ff2b-4ecc-ae14-6367d54e720f\") " pod="openstack/horizon-8699b457dd-z2fkt"
Jan 12 13:21:31 crc kubenswrapper[4580]: I0112 13:21:31.550921 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/92d059e4-ff2b-4ecc-ae14-6367d54e720f-horizon-secret-key\") pod \"horizon-8699b457dd-z2fkt\" (UID: \"92d059e4-ff2b-4ecc-ae14-6367d54e720f\") " pod="openstack/horizon-8699b457dd-z2fkt"
Jan 12 13:21:31 crc kubenswrapper[4580]: I0112 13:21:31.551401 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92d059e4-ff2b-4ecc-ae14-6367d54e720f-combined-ca-bundle\") pod \"horizon-8699b457dd-z2fkt\" (UID: \"92d059e4-ff2b-4ecc-ae14-6367d54e720f\") " pod="openstack/horizon-8699b457dd-z2fkt"
Jan 12 13:21:31 crc kubenswrapper[4580]: I0112 13:21:31.552823 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/92d059e4-ff2b-4ecc-ae14-6367d54e720f-config-data\") pod \"horizon-8699b457dd-z2fkt\" (UID: \"92d059e4-ff2b-4ecc-ae14-6367d54e720f\") " pod="openstack/horizon-8699b457dd-z2fkt"
Jan 12 13:21:31 crc kubenswrapper[4580]: I0112 13:21:31.553383 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/92d059e4-ff2b-4ecc-ae14-6367d54e720f-horizon-tls-certs\") pod \"horizon-8699b457dd-z2fkt\" (UID: \"92d059e4-ff2b-4ecc-ae14-6367d54e720f\") " pod="openstack/horizon-8699b457dd-z2fkt"
Jan 12 13:21:31 crc kubenswrapper[4580]: I0112 13:21:31.566968 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fdxc\" (UniqueName: \"kubernetes.io/projected/92d059e4-ff2b-4ecc-ae14-6367d54e720f-kube-api-access-8fdxc\") pod \"horizon-8699b457dd-z2fkt\" (UID: \"92d059e4-ff2b-4ecc-ae14-6367d54e720f\") " pod="openstack/horizon-8699b457dd-z2fkt"
Jan 12 13:21:31 crc kubenswrapper[4580]: I0112 13:21:31.639507 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-8699b457dd-z2fkt"
Jan 12 13:21:31 crc kubenswrapper[4580]: I0112 13:21:31.648071 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11060a9d-34a1-4ac1-baa8-a478351504f3-catalog-content\") pod \"certified-operators-cj96v\" (UID: \"11060a9d-34a1-4ac1-baa8-a478351504f3\") " pod="openshift-marketplace/certified-operators-cj96v"
Jan 12 13:21:31 crc kubenswrapper[4580]: I0112 13:21:31.648136 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11060a9d-34a1-4ac1-baa8-a478351504f3-utilities\") pod \"certified-operators-cj96v\" (UID: \"11060a9d-34a1-4ac1-baa8-a478351504f3\") " pod="openshift-marketplace/certified-operators-cj96v"
Jan 12 13:21:31 crc kubenswrapper[4580]: I0112 13:21:31.648205 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dl5p7\" (UniqueName: \"kubernetes.io/projected/11060a9d-34a1-4ac1-baa8-a478351504f3-kube-api-access-dl5p7\") pod \"certified-operators-cj96v\" (UID: \"11060a9d-34a1-4ac1-baa8-a478351504f3\") " pod="openshift-marketplace/certified-operators-cj96v"
Jan 12 13:21:31 crc kubenswrapper[4580]: I0112 13:21:31.648467 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11060a9d-34a1-4ac1-baa8-a478351504f3-catalog-content\") pod \"certified-operators-cj96v\" (UID: \"11060a9d-34a1-4ac1-baa8-a478351504f3\") " pod="openshift-marketplace/certified-operators-cj96v"
Jan 12 13:21:31 crc kubenswrapper[4580]: I0112 13:21:31.648639 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11060a9d-34a1-4ac1-baa8-a478351504f3-utilities\") pod \"certified-operators-cj96v\" (UID: \"11060a9d-34a1-4ac1-baa8-a478351504f3\") " pod="openshift-marketplace/certified-operators-cj96v"
Jan 12 13:21:31 crc kubenswrapper[4580]: I0112 13:21:31.662360 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dl5p7\" (UniqueName: \"kubernetes.io/projected/11060a9d-34a1-4ac1-baa8-a478351504f3-kube-api-access-dl5p7\") pod \"certified-operators-cj96v\" (UID: \"11060a9d-34a1-4ac1-baa8-a478351504f3\") " pod="openshift-marketplace/certified-operators-cj96v"
Jan 12 13:21:31 crc kubenswrapper[4580]: I0112 13:21:31.686014 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cj96v"
Jan 12 13:21:33 crc kubenswrapper[4580]: I0112 13:21:33.193275 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6f6f8cb849-s5ztm"
Jan 12 13:21:33 crc kubenswrapper[4580]: I0112 13:21:33.256705 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74dfc89d77-2f659"]
Jan 12 13:21:33 crc kubenswrapper[4580]: I0112 13:21:33.256955 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74dfc89d77-2f659" podUID="15e17da6-3f67-4b22-8c60-02b342fece99" containerName="dnsmasq-dns" containerID="cri-o://5b500add603a491beede5a7374e71500271e26875c3dfe833b2695a1c0962fcf" gracePeriod=10
Jan 12 13:21:34 crc kubenswrapper[4580]: I0112 13:21:34.052230 4580 generic.go:334] "Generic (PLEG): container finished" podID="15e17da6-3f67-4b22-8c60-02b342fece99" containerID="5b500add603a491beede5a7374e71500271e26875c3dfe833b2695a1c0962fcf" exitCode=0
Jan 12 13:21:34 crc kubenswrapper[4580]: I0112 13:21:34.052301 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dfc89d77-2f659" event={"ID":"15e17da6-3f67-4b22-8c60-02b342fece99","Type":"ContainerDied","Data":"5b500add603a491beede5a7374e71500271e26875c3dfe833b2695a1c0962fcf"}
Jan 12 13:21:34 crc kubenswrapper[4580]: I0112 13:21:34.053670 4580 generic.go:334] "Generic (PLEG): container finished" podID="eb0ed855-adbc-497b-9bc5-92330edbb8c8" containerID="766a0e0bcaa4a21deed755b346bffab06e1b15b9f8f13b5b1b9af81e66d8e506" exitCode=0
Jan 12 13:21:34 crc kubenswrapper[4580]: I0112 13:21:34.053698 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-gdxqz" event={"ID":"eb0ed855-adbc-497b-9bc5-92330edbb8c8","Type":"ContainerDied","Data":"766a0e0bcaa4a21deed755b346bffab06e1b15b9f8f13b5b1b9af81e66d8e506"}
Jan 12 13:21:36 crc kubenswrapper[4580]: I0112 13:21:36.267955 4580 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74dfc89d77-2f659" podUID="15e17da6-3f67-4b22-8c60-02b342fece99" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.122:5353: connect: connection refused"
Jan 12 13:21:38 crc kubenswrapper[4580]: I0112 13:21:38.895536 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hcdf4"
Jan 12 13:21:39 crc kubenswrapper[4580]: I0112 13:21:39.090849 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bqsx\" (UniqueName: \"kubernetes.io/projected/a838d8bb-d607-4309-a666-8da1387631fc-kube-api-access-9bqsx\") pod \"a838d8bb-d607-4309-a666-8da1387631fc\" (UID: \"a838d8bb-d607-4309-a666-8da1387631fc\") "
Jan 12 13:21:39 crc kubenswrapper[4580]: I0112 13:21:39.091218 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a838d8bb-d607-4309-a666-8da1387631fc-config-data\") pod \"a838d8bb-d607-4309-a666-8da1387631fc\" (UID: \"a838d8bb-d607-4309-a666-8da1387631fc\") "
Jan 12 13:21:39 crc kubenswrapper[4580]: I0112 13:21:39.091330 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a838d8bb-d607-4309-a666-8da1387631fc-combined-ca-bundle\") pod \"a838d8bb-d607-4309-a666-8da1387631fc\" (UID: \"a838d8bb-d607-4309-a666-8da1387631fc\") "
Jan 12 13:21:39 crc kubenswrapper[4580]: I0112 13:21:39.091386 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a838d8bb-d607-4309-a666-8da1387631fc-credential-keys\") pod \"a838d8bb-d607-4309-a666-8da1387631fc\" (UID: \"a838d8bb-d607-4309-a666-8da1387631fc\") "
Jan 12 13:21:39 crc kubenswrapper[4580]: I0112 13:21:39.091439 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a838d8bb-d607-4309-a666-8da1387631fc-fernet-keys\") pod \"a838d8bb-d607-4309-a666-8da1387631fc\" (UID: \"a838d8bb-d607-4309-a666-8da1387631fc\") "
Jan 12 13:21:39 crc kubenswrapper[4580]: I0112 13:21:39.091478 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a838d8bb-d607-4309-a666-8da1387631fc-scripts\") pod \"a838d8bb-d607-4309-a666-8da1387631fc\" (UID: \"a838d8bb-d607-4309-a666-8da1387631fc\") "
Jan 12 13:21:39 crc kubenswrapper[4580]: I0112 13:21:39.098411 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a838d8bb-d607-4309-a666-8da1387631fc-scripts" (OuterVolumeSpecName: "scripts") pod "a838d8bb-d607-4309-a666-8da1387631fc" (UID: "a838d8bb-d607-4309-a666-8da1387631fc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:21:39 crc kubenswrapper[4580]: I0112 13:21:39.099273 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a838d8bb-d607-4309-a666-8da1387631fc-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "a838d8bb-d607-4309-a666-8da1387631fc" (UID: "a838d8bb-d607-4309-a666-8da1387631fc"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:21:39 crc kubenswrapper[4580]: I0112 13:21:39.100398 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a838d8bb-d607-4309-a666-8da1387631fc-kube-api-access-9bqsx" (OuterVolumeSpecName: "kube-api-access-9bqsx") pod "a838d8bb-d607-4309-a666-8da1387631fc" (UID: "a838d8bb-d607-4309-a666-8da1387631fc"). InnerVolumeSpecName "kube-api-access-9bqsx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 13:21:39 crc kubenswrapper[4580]: I0112 13:21:39.110592 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a838d8bb-d607-4309-a666-8da1387631fc-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "a838d8bb-d607-4309-a666-8da1387631fc" (UID: "a838d8bb-d607-4309-a666-8da1387631fc"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:21:39 crc kubenswrapper[4580]: I0112 13:21:39.120286 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hcdf4" event={"ID":"a838d8bb-d607-4309-a666-8da1387631fc","Type":"ContainerDied","Data":"a984ef6d9319849bc0b8182a27d40cf5f9d1de49e4b0d1868e660d6412fd6006"}
Jan 12 13:21:39 crc kubenswrapper[4580]: I0112 13:21:39.120331 4580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a984ef6d9319849bc0b8182a27d40cf5f9d1de49e4b0d1868e660d6412fd6006"
Jan 12 13:21:39 crc kubenswrapper[4580]: I0112 13:21:39.120391 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hcdf4"
Jan 12 13:21:39 crc kubenswrapper[4580]: I0112 13:21:39.121588 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a838d8bb-d607-4309-a666-8da1387631fc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a838d8bb-d607-4309-a666-8da1387631fc" (UID: "a838d8bb-d607-4309-a666-8da1387631fc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:21:39 crc kubenswrapper[4580]: I0112 13:21:39.127387 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a838d8bb-d607-4309-a666-8da1387631fc-config-data" (OuterVolumeSpecName: "config-data") pod "a838d8bb-d607-4309-a666-8da1387631fc" (UID: "a838d8bb-d607-4309-a666-8da1387631fc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:21:39 crc kubenswrapper[4580]: I0112 13:21:39.194922 4580 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a838d8bb-d607-4309-a666-8da1387631fc-fernet-keys\") on node \"crc\" DevicePath \"\""
Jan 12 13:21:39 crc kubenswrapper[4580]: I0112 13:21:39.194970 4580 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a838d8bb-d607-4309-a666-8da1387631fc-scripts\") on node \"crc\" DevicePath \"\""
Jan 12 13:21:39 crc kubenswrapper[4580]: I0112 13:21:39.194982 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bqsx\" (UniqueName: \"kubernetes.io/projected/a838d8bb-d607-4309-a666-8da1387631fc-kube-api-access-9bqsx\") on node \"crc\" DevicePath \"\""
Jan 12 13:21:39 crc kubenswrapper[4580]: I0112 13:21:39.194994 4580 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a838d8bb-d607-4309-a666-8da1387631fc-config-data\") on node \"crc\" DevicePath \"\""
Jan 12 13:21:39 crc kubenswrapper[4580]: I0112 13:21:39.195028 4580 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a838d8bb-d607-4309-a666-8da1387631fc-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 12 13:21:39 crc kubenswrapper[4580]: I0112 13:21:39.195037 4580 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a838d8bb-d607-4309-a666-8da1387631fc-credential-keys\") on node \"crc\" DevicePath \"\""
Jan 12 13:21:39 crc kubenswrapper[4580]: I0112 13:21:39.976349 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-hcdf4"]
Jan 12 13:21:39 crc kubenswrapper[4580]: I0112 13:21:39.984369 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-hcdf4"]
Jan 12 13:21:40 crc kubenswrapper[4580]: I0112 13:21:40.064578 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-8lnk9"]
Jan 12 13:21:40 crc kubenswrapper[4580]: E0112 13:21:40.065126 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a838d8bb-d607-4309-a666-8da1387631fc" containerName="keystone-bootstrap"
Jan 12 13:21:40 crc kubenswrapper[4580]: I0112 13:21:40.065148 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="a838d8bb-d607-4309-a666-8da1387631fc" containerName="keystone-bootstrap"
Jan 12 13:21:40 crc kubenswrapper[4580]: I0112 13:21:40.065377 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="a838d8bb-d607-4309-a666-8da1387631fc" containerName="keystone-bootstrap"
Jan 12 13:21:40 crc kubenswrapper[4580]: I0112 13:21:40.066176 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-8lnk9"
Jan 12 13:21:40 crc kubenswrapper[4580]: I0112 13:21:40.068603 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 12 13:21:40 crc kubenswrapper[4580]: I0112 13:21:40.068739 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-64jgc"
Jan 12 13:21:40 crc kubenswrapper[4580]: I0112 13:21:40.068778 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 12 13:21:40 crc kubenswrapper[4580]: I0112 13:21:40.068915 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 12 13:21:40 crc kubenswrapper[4580]: I0112 13:21:40.070155 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Jan 12 13:21:40 crc kubenswrapper[4580]: I0112 13:21:40.081373 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-8lnk9"]
Jan 12 13:21:40 crc kubenswrapper[4580]: I0112 13:21:40.140635 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7d306f46-ea22-4b07-a18c-5134b125fa49-credential-keys\") pod \"keystone-bootstrap-8lnk9\" (UID: \"7d306f46-ea22-4b07-a18c-5134b125fa49\") " pod="openstack/keystone-bootstrap-8lnk9"
Jan 12 13:21:40 crc kubenswrapper[4580]: I0112 13:21:40.140698 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d306f46-ea22-4b07-a18c-5134b125fa49-scripts\") pod \"keystone-bootstrap-8lnk9\" (UID: \"7d306f46-ea22-4b07-a18c-5134b125fa49\") " pod="openstack/keystone-bootstrap-8lnk9"
Jan 12 13:21:40 crc kubenswrapper[4580]: I0112 13:21:40.140722 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7d306f46-ea22-4b07-a18c-5134b125fa49-fernet-keys\") pod \"keystone-bootstrap-8lnk9\" (UID: \"7d306f46-ea22-4b07-a18c-5134b125fa49\") " pod="openstack/keystone-bootstrap-8lnk9"
Jan 12 13:21:40 crc kubenswrapper[4580]: I0112 13:21:40.140743 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7bdq\" (UniqueName: \"kubernetes.io/projected/7d306f46-ea22-4b07-a18c-5134b125fa49-kube-api-access-v7bdq\") pod \"keystone-bootstrap-8lnk9\" (UID: \"7d306f46-ea22-4b07-a18c-5134b125fa49\") " pod="openstack/keystone-bootstrap-8lnk9"
Jan 12 13:21:40 crc kubenswrapper[4580]: I0112 13:21:40.140765 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d306f46-ea22-4b07-a18c-5134b125fa49-combined-ca-bundle\") pod \"keystone-bootstrap-8lnk9\" (UID: \"7d306f46-ea22-4b07-a18c-5134b125fa49\") " pod="openstack/keystone-bootstrap-8lnk9"
Jan 12 13:21:40 crc kubenswrapper[4580]: I0112 13:21:40.140936 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d306f46-ea22-4b07-a18c-5134b125fa49-config-data\") pod \"keystone-bootstrap-8lnk9\" (UID: \"7d306f46-ea22-4b07-a18c-5134b125fa49\") " pod="openstack/keystone-bootstrap-8lnk9"
Jan 12 13:21:40 crc kubenswrapper[4580]: I0112 13:21:40.241761 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7d306f46-ea22-4b07-a18c-5134b125fa49-credential-keys\") pod \"keystone-bootstrap-8lnk9\" (UID: \"7d306f46-ea22-4b07-a18c-5134b125fa49\") " pod="openstack/keystone-bootstrap-8lnk9"
Jan 12 13:21:40 crc kubenswrapper[4580]: I0112 13:21:40.241826 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d306f46-ea22-4b07-a18c-5134b125fa49-scripts\") pod \"keystone-bootstrap-8lnk9\" (UID: \"7d306f46-ea22-4b07-a18c-5134b125fa49\") " pod="openstack/keystone-bootstrap-8lnk9"
Jan 12 13:21:40 crc kubenswrapper[4580]: I0112 13:21:40.241854 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7d306f46-ea22-4b07-a18c-5134b125fa49-fernet-keys\") pod \"keystone-bootstrap-8lnk9\" (UID: \"7d306f46-ea22-4b07-a18c-5134b125fa49\") " pod="openstack/keystone-bootstrap-8lnk9"
Jan 12 13:21:40 crc kubenswrapper[4580]: I0112 13:21:40.241877 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7bdq\" (UniqueName: \"kubernetes.io/projected/7d306f46-ea22-4b07-a18c-5134b125fa49-kube-api-access-v7bdq\") pod \"keystone-bootstrap-8lnk9\" (UID: \"7d306f46-ea22-4b07-a18c-5134b125fa49\") " pod="openstack/keystone-bootstrap-8lnk9"
Jan 12 13:21:40 crc kubenswrapper[4580]: I0112 13:21:40.241909 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d306f46-ea22-4b07-a18c-5134b125fa49-combined-ca-bundle\") pod \"keystone-bootstrap-8lnk9\" (UID: \"7d306f46-ea22-4b07-a18c-5134b125fa49\") " pod="openstack/keystone-bootstrap-8lnk9"
Jan 12 13:21:40 crc kubenswrapper[4580]: I0112 13:21:40.241962 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d306f46-ea22-4b07-a18c-5134b125fa49-config-data\") pod \"keystone-bootstrap-8lnk9\" (UID: \"7d306f46-ea22-4b07-a18c-5134b125fa49\") " pod="openstack/keystone-bootstrap-8lnk9"
Jan 12 13:21:40 crc kubenswrapper[4580]: I0112 13:21:40.245442 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7d306f46-ea22-4b07-a18c-5134b125fa49-fernet-keys\") pod \"keystone-bootstrap-8lnk9\" (UID:
\"7d306f46-ea22-4b07-a18c-5134b125fa49\") " pod="openstack/keystone-bootstrap-8lnk9" Jan 12 13:21:40 crc kubenswrapper[4580]: I0112 13:21:40.245469 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d306f46-ea22-4b07-a18c-5134b125fa49-combined-ca-bundle\") pod \"keystone-bootstrap-8lnk9\" (UID: \"7d306f46-ea22-4b07-a18c-5134b125fa49\") " pod="openstack/keystone-bootstrap-8lnk9" Jan 12 13:21:40 crc kubenswrapper[4580]: I0112 13:21:40.245895 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d306f46-ea22-4b07-a18c-5134b125fa49-scripts\") pod \"keystone-bootstrap-8lnk9\" (UID: \"7d306f46-ea22-4b07-a18c-5134b125fa49\") " pod="openstack/keystone-bootstrap-8lnk9" Jan 12 13:21:40 crc kubenswrapper[4580]: I0112 13:21:40.246060 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d306f46-ea22-4b07-a18c-5134b125fa49-config-data\") pod \"keystone-bootstrap-8lnk9\" (UID: \"7d306f46-ea22-4b07-a18c-5134b125fa49\") " pod="openstack/keystone-bootstrap-8lnk9" Jan 12 13:21:40 crc kubenswrapper[4580]: I0112 13:21:40.246182 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7d306f46-ea22-4b07-a18c-5134b125fa49-credential-keys\") pod \"keystone-bootstrap-8lnk9\" (UID: \"7d306f46-ea22-4b07-a18c-5134b125fa49\") " pod="openstack/keystone-bootstrap-8lnk9" Jan 12 13:21:40 crc kubenswrapper[4580]: I0112 13:21:40.255231 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7bdq\" (UniqueName: \"kubernetes.io/projected/7d306f46-ea22-4b07-a18c-5134b125fa49-kube-api-access-v7bdq\") pod \"keystone-bootstrap-8lnk9\" (UID: \"7d306f46-ea22-4b07-a18c-5134b125fa49\") " pod="openstack/keystone-bootstrap-8lnk9" Jan 12 13:21:40 crc kubenswrapper[4580]: I0112 
13:21:40.388120 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-8lnk9" Jan 12 13:21:41 crc kubenswrapper[4580]: I0112 13:21:41.295468 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a838d8bb-d607-4309-a666-8da1387631fc" path="/var/lib/kubelet/pods/a838d8bb-d607-4309-a666-8da1387631fc/volumes" Jan 12 13:21:46 crc kubenswrapper[4580]: I0112 13:21:46.269073 4580 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74dfc89d77-2f659" podUID="15e17da6-3f67-4b22-8c60-02b342fece99" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.122:5353: i/o timeout" Jan 12 13:21:46 crc kubenswrapper[4580]: I0112 13:21:46.524943 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-gdxqz" Jan 12 13:21:46 crc kubenswrapper[4580]: I0112 13:21:46.650125 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7j5n4\" (UniqueName: \"kubernetes.io/projected/eb0ed855-adbc-497b-9bc5-92330edbb8c8-kube-api-access-7j5n4\") pod \"eb0ed855-adbc-497b-9bc5-92330edbb8c8\" (UID: \"eb0ed855-adbc-497b-9bc5-92330edbb8c8\") " Jan 12 13:21:46 crc kubenswrapper[4580]: I0112 13:21:46.650346 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb0ed855-adbc-497b-9bc5-92330edbb8c8-combined-ca-bundle\") pod \"eb0ed855-adbc-497b-9bc5-92330edbb8c8\" (UID: \"eb0ed855-adbc-497b-9bc5-92330edbb8c8\") " Jan 12 13:21:46 crc kubenswrapper[4580]: I0112 13:21:46.650400 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/eb0ed855-adbc-497b-9bc5-92330edbb8c8-config\") pod \"eb0ed855-adbc-497b-9bc5-92330edbb8c8\" (UID: \"eb0ed855-adbc-497b-9bc5-92330edbb8c8\") " Jan 12 13:21:46 crc kubenswrapper[4580]: I0112 
13:21:46.655162 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb0ed855-adbc-497b-9bc5-92330edbb8c8-kube-api-access-7j5n4" (OuterVolumeSpecName: "kube-api-access-7j5n4") pod "eb0ed855-adbc-497b-9bc5-92330edbb8c8" (UID: "eb0ed855-adbc-497b-9bc5-92330edbb8c8"). InnerVolumeSpecName "kube-api-access-7j5n4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:21:46 crc kubenswrapper[4580]: I0112 13:21:46.670949 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb0ed855-adbc-497b-9bc5-92330edbb8c8-config" (OuterVolumeSpecName: "config") pod "eb0ed855-adbc-497b-9bc5-92330edbb8c8" (UID: "eb0ed855-adbc-497b-9bc5-92330edbb8c8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:21:46 crc kubenswrapper[4580]: I0112 13:21:46.673678 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb0ed855-adbc-497b-9bc5-92330edbb8c8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eb0ed855-adbc-497b-9bc5-92330edbb8c8" (UID: "eb0ed855-adbc-497b-9bc5-92330edbb8c8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:21:46 crc kubenswrapper[4580]: E0112 13:21:46.748024 4580 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central@sha256:5a548c25fe3d02f7a042cb0a6d28fc8039a34c4a3b3d07aadda4aba3a926e777" Jan 12 13:21:46 crc kubenswrapper[4580]: E0112 13:21:46.748209 4580 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central@sha256:5a548c25fe3d02f7a042cb0a6d28fc8039a34c4a3b3d07aadda4aba3a926e777,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n56fh556h67fhd9hd8h687h66fh694hc7h5cch56h5b9h66fh586h5d9h5d4h55fh66h5d4h598h589h67fh5cdh79h64dhch679h68dh557hfbh5d8hd9q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jmcxn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,Recursiv
eReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(84c1ab4b-8921-4f4a-88dd-adf6e224d62c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 12 13:21:46 crc kubenswrapper[4580]: I0112 13:21:46.752843 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7j5n4\" (UniqueName: \"kubernetes.io/projected/eb0ed855-adbc-497b-9bc5-92330edbb8c8-kube-api-access-7j5n4\") on node \"crc\" DevicePath \"\"" Jan 12 13:21:46 crc kubenswrapper[4580]: I0112 13:21:46.752877 4580 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb0ed855-adbc-497b-9bc5-92330edbb8c8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 12 13:21:46 crc kubenswrapper[4580]: I0112 13:21:46.752887 4580 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/eb0ed855-adbc-497b-9bc5-92330edbb8c8-config\") on node \"crc\" DevicePath \"\"" Jan 12 13:21:47 crc kubenswrapper[4580]: E0112 13:21:47.162944 4580 log.go:32] "PullImage from 
image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:fe32d3ea620f0c7ecfdde9bbf28417fde03bc18c6f60b1408fa8da24d8188f16" Jan 12 13:21:47 crc kubenswrapper[4580]: E0112 13:21:47.163133 4580 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:fe32d3ea620f0c7ecfdde9bbf28417fde03bc18c6f60b1408fa8da24d8188f16,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fzdbk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy
:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-blrls_openstack(3cbf5c7d-9220-43a8-9015-1c52d0c3855f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 12 13:21:47 crc kubenswrapper[4580]: E0112 13:21:47.164206 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-blrls" podUID="3cbf5c7d-9220-43a8-9015-1c52d0c3855f" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.192617 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-gdxqz" event={"ID":"eb0ed855-adbc-497b-9bc5-92330edbb8c8","Type":"ContainerDied","Data":"0ae8fb3c4b1ac8d30be940f0b0bc38c1e4e54fe631d42101a4a9ad03c49d1daf"} Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.192667 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-gdxqz" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.192694 4580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ae8fb3c4b1ac8d30be940f0b0bc38c1e4e54fe631d42101a4a9ad03c49d1daf" Jan 12 13:21:47 crc kubenswrapper[4580]: E0112 13:21:47.195241 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:fe32d3ea620f0c7ecfdde9bbf28417fde03bc18c6f60b1408fa8da24d8188f16\\\"\"" pod="openstack/barbican-db-sync-blrls" podUID="3cbf5c7d-9220-43a8-9015-1c52d0c3855f" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.248265 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74dfc89d77-2f659" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.263744 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.268461 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/15e17da6-3f67-4b22-8c60-02b342fece99-ovsdbserver-sb\") pod \"15e17da6-3f67-4b22-8c60-02b342fece99\" (UID: \"15e17da6-3f67-4b22-8c60-02b342fece99\") " Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.268585 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/15e17da6-3f67-4b22-8c60-02b342fece99-dns-svc\") pod \"15e17da6-3f67-4b22-8c60-02b342fece99\" (UID: \"15e17da6-3f67-4b22-8c60-02b342fece99\") " Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.268665 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/15e17da6-3f67-4b22-8c60-02b342fece99-ovsdbserver-nb\") pod \"15e17da6-3f67-4b22-8c60-02b342fece99\" (UID: \"15e17da6-3f67-4b22-8c60-02b342fece99\") " Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.268738 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6ssj\" (UniqueName: \"kubernetes.io/projected/15e17da6-3f67-4b22-8c60-02b342fece99-kube-api-access-w6ssj\") pod \"15e17da6-3f67-4b22-8c60-02b342fece99\" (UID: \"15e17da6-3f67-4b22-8c60-02b342fece99\") " Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.268789 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/15e17da6-3f67-4b22-8c60-02b342fece99-dns-swift-storage-0\") pod \"15e17da6-3f67-4b22-8c60-02b342fece99\" (UID: 
\"15e17da6-3f67-4b22-8c60-02b342fece99\") " Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.268839 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15e17da6-3f67-4b22-8c60-02b342fece99-config\") pod \"15e17da6-3f67-4b22-8c60-02b342fece99\" (UID: \"15e17da6-3f67-4b22-8c60-02b342fece99\") " Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.278440 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15e17da6-3f67-4b22-8c60-02b342fece99-kube-api-access-w6ssj" (OuterVolumeSpecName: "kube-api-access-w6ssj") pod "15e17da6-3f67-4b22-8c60-02b342fece99" (UID: "15e17da6-3f67-4b22-8c60-02b342fece99"). InnerVolumeSpecName "kube-api-access-w6ssj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.311713 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15e17da6-3f67-4b22-8c60-02b342fece99-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "15e17da6-3f67-4b22-8c60-02b342fece99" (UID: "15e17da6-3f67-4b22-8c60-02b342fece99"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.319738 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15e17da6-3f67-4b22-8c60-02b342fece99-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "15e17da6-3f67-4b22-8c60-02b342fece99" (UID: "15e17da6-3f67-4b22-8c60-02b342fece99"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.336835 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15e17da6-3f67-4b22-8c60-02b342fece99-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "15e17da6-3f67-4b22-8c60-02b342fece99" (UID: "15e17da6-3f67-4b22-8c60-02b342fece99"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.344600 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15e17da6-3f67-4b22-8c60-02b342fece99-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "15e17da6-3f67-4b22-8c60-02b342fece99" (UID: "15e17da6-3f67-4b22-8c60-02b342fece99"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.358404 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15e17da6-3f67-4b22-8c60-02b342fece99-config" (OuterVolumeSpecName: "config") pod "15e17da6-3f67-4b22-8c60-02b342fece99" (UID: "15e17da6-3f67-4b22-8c60-02b342fece99"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.370082 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54193329-eff6-4ec8-874c-722003442682-scripts\") pod \"54193329-eff6-4ec8-874c-722003442682\" (UID: \"54193329-eff6-4ec8-874c-722003442682\") " Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.370138 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54193329-eff6-4ec8-874c-722003442682-combined-ca-bundle\") pod \"54193329-eff6-4ec8-874c-722003442682\" (UID: \"54193329-eff6-4ec8-874c-722003442682\") " Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.370170 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/54193329-eff6-4ec8-874c-722003442682-httpd-run\") pod \"54193329-eff6-4ec8-874c-722003442682\" (UID: \"54193329-eff6-4ec8-874c-722003442682\") " Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.370224 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"54193329-eff6-4ec8-874c-722003442682\" (UID: \"54193329-eff6-4ec8-874c-722003442682\") " Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.370274 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54193329-eff6-4ec8-874c-722003442682-logs\") pod \"54193329-eff6-4ec8-874c-722003442682\" (UID: \"54193329-eff6-4ec8-874c-722003442682\") " Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.370371 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54193329-eff6-4ec8-874c-722003442682-config-data\") 
pod \"54193329-eff6-4ec8-874c-722003442682\" (UID: \"54193329-eff6-4ec8-874c-722003442682\") " Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.370421 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxf4t\" (UniqueName: \"kubernetes.io/projected/54193329-eff6-4ec8-874c-722003442682-kube-api-access-jxf4t\") pod \"54193329-eff6-4ec8-874c-722003442682\" (UID: \"54193329-eff6-4ec8-874c-722003442682\") " Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.370460 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/54193329-eff6-4ec8-874c-722003442682-public-tls-certs\") pod \"54193329-eff6-4ec8-874c-722003442682\" (UID: \"54193329-eff6-4ec8-874c-722003442682\") " Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.370635 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54193329-eff6-4ec8-874c-722003442682-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "54193329-eff6-4ec8-874c-722003442682" (UID: "54193329-eff6-4ec8-874c-722003442682"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.371080 4580 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15e17da6-3f67-4b22-8c60-02b342fece99-config\") on node \"crc\" DevicePath \"\"" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.371116 4580 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/15e17da6-3f67-4b22-8c60-02b342fece99-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.371127 4580 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/15e17da6-3f67-4b22-8c60-02b342fece99-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.371135 4580 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/54193329-eff6-4ec8-874c-722003442682-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.371143 4580 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/15e17da6-3f67-4b22-8c60-02b342fece99-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.371154 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6ssj\" (UniqueName: \"kubernetes.io/projected/15e17da6-3f67-4b22-8c60-02b342fece99-kube-api-access-w6ssj\") on node \"crc\" DevicePath \"\"" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.371164 4580 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/15e17da6-3f67-4b22-8c60-02b342fece99-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.371279 4580 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54193329-eff6-4ec8-874c-722003442682-logs" (OuterVolumeSpecName: "logs") pod "54193329-eff6-4ec8-874c-722003442682" (UID: "54193329-eff6-4ec8-874c-722003442682"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.376195 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "54193329-eff6-4ec8-874c-722003442682" (UID: "54193329-eff6-4ec8-874c-722003442682"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.377612 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54193329-eff6-4ec8-874c-722003442682-kube-api-access-jxf4t" (OuterVolumeSpecName: "kube-api-access-jxf4t") pod "54193329-eff6-4ec8-874c-722003442682" (UID: "54193329-eff6-4ec8-874c-722003442682"). InnerVolumeSpecName "kube-api-access-jxf4t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.378020 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54193329-eff6-4ec8-874c-722003442682-scripts" (OuterVolumeSpecName: "scripts") pod "54193329-eff6-4ec8-874c-722003442682" (UID: "54193329-eff6-4ec8-874c-722003442682"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.390211 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54193329-eff6-4ec8-874c-722003442682-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "54193329-eff6-4ec8-874c-722003442682" (UID: "54193329-eff6-4ec8-874c-722003442682"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.412661 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54193329-eff6-4ec8-874c-722003442682-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "54193329-eff6-4ec8-874c-722003442682" (UID: "54193329-eff6-4ec8-874c-722003442682"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.419256 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54193329-eff6-4ec8-874c-722003442682-config-data" (OuterVolumeSpecName: "config-data") pod "54193329-eff6-4ec8-874c-722003442682" (UID: "54193329-eff6-4ec8-874c-722003442682"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.473898 4580 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54193329-eff6-4ec8-874c-722003442682-config-data\") on node \"crc\" DevicePath \"\"" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.473994 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxf4t\" (UniqueName: \"kubernetes.io/projected/54193329-eff6-4ec8-874c-722003442682-kube-api-access-jxf4t\") on node \"crc\" DevicePath \"\"" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.474008 4580 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/54193329-eff6-4ec8-874c-722003442682-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.474017 4580 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54193329-eff6-4ec8-874c-722003442682-scripts\") on node 
\"crc\" DevicePath \"\"" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.474026 4580 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54193329-eff6-4ec8-874c-722003442682-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.474339 4580 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.474388 4580 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54193329-eff6-4ec8-874c-722003442682-logs\") on node \"crc\" DevicePath \"\"" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.487507 4580 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.575899 4580 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.673276 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-685444497c-q8tzg"] Jan 12 13:21:47 crc kubenswrapper[4580]: E0112 13:21:47.673643 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb0ed855-adbc-497b-9bc5-92330edbb8c8" containerName="neutron-db-sync" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.673663 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb0ed855-adbc-497b-9bc5-92330edbb8c8" containerName="neutron-db-sync" Jan 12 13:21:47 crc kubenswrapper[4580]: E0112 13:21:47.673675 4580 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="15e17da6-3f67-4b22-8c60-02b342fece99" containerName="init" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.673682 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="15e17da6-3f67-4b22-8c60-02b342fece99" containerName="init" Jan 12 13:21:47 crc kubenswrapper[4580]: E0112 13:21:47.673704 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54193329-eff6-4ec8-874c-722003442682" containerName="glance-httpd" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.673710 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="54193329-eff6-4ec8-874c-722003442682" containerName="glance-httpd" Jan 12 13:21:47 crc kubenswrapper[4580]: E0112 13:21:47.673734 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15e17da6-3f67-4b22-8c60-02b342fece99" containerName="dnsmasq-dns" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.673740 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="15e17da6-3f67-4b22-8c60-02b342fece99" containerName="dnsmasq-dns" Jan 12 13:21:47 crc kubenswrapper[4580]: E0112 13:21:47.673748 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54193329-eff6-4ec8-874c-722003442682" containerName="glance-log" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.673756 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="54193329-eff6-4ec8-874c-722003442682" containerName="glance-log" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.673901 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb0ed855-adbc-497b-9bc5-92330edbb8c8" containerName="neutron-db-sync" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.673916 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="54193329-eff6-4ec8-874c-722003442682" containerName="glance-log" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.673929 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="15e17da6-3f67-4b22-8c60-02b342fece99" 
containerName="dnsmasq-dns" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.673948 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="54193329-eff6-4ec8-874c-722003442682" containerName="glance-httpd" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.674820 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-685444497c-q8tzg" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.678086 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ab93f8e-1504-47d9-af38-197cbcc54feb-ovsdbserver-nb\") pod \"dnsmasq-dns-685444497c-q8tzg\" (UID: \"0ab93f8e-1504-47d9-af38-197cbcc54feb\") " pod="openstack/dnsmasq-dns-685444497c-q8tzg" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.678187 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqmkl\" (UniqueName: \"kubernetes.io/projected/0ab93f8e-1504-47d9-af38-197cbcc54feb-kube-api-access-dqmkl\") pod \"dnsmasq-dns-685444497c-q8tzg\" (UID: \"0ab93f8e-1504-47d9-af38-197cbcc54feb\") " pod="openstack/dnsmasq-dns-685444497c-q8tzg" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.678223 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ab93f8e-1504-47d9-af38-197cbcc54feb-ovsdbserver-sb\") pod \"dnsmasq-dns-685444497c-q8tzg\" (UID: \"0ab93f8e-1504-47d9-af38-197cbcc54feb\") " pod="openstack/dnsmasq-dns-685444497c-q8tzg" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.678260 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0ab93f8e-1504-47d9-af38-197cbcc54feb-dns-swift-storage-0\") pod \"dnsmasq-dns-685444497c-q8tzg\" (UID: 
\"0ab93f8e-1504-47d9-af38-197cbcc54feb\") " pod="openstack/dnsmasq-dns-685444497c-q8tzg" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.678283 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ab93f8e-1504-47d9-af38-197cbcc54feb-dns-svc\") pod \"dnsmasq-dns-685444497c-q8tzg\" (UID: \"0ab93f8e-1504-47d9-af38-197cbcc54feb\") " pod="openstack/dnsmasq-dns-685444497c-q8tzg" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.678391 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ab93f8e-1504-47d9-af38-197cbcc54feb-config\") pod \"dnsmasq-dns-685444497c-q8tzg\" (UID: \"0ab93f8e-1504-47d9-af38-197cbcc54feb\") " pod="openstack/dnsmasq-dns-685444497c-q8tzg" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.692119 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-685444497c-q8tzg"] Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.759065 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.780661 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ab93f8e-1504-47d9-af38-197cbcc54feb-ovsdbserver-nb\") pod \"dnsmasq-dns-685444497c-q8tzg\" (UID: \"0ab93f8e-1504-47d9-af38-197cbcc54feb\") " pod="openstack/dnsmasq-dns-685444497c-q8tzg" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.780735 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqmkl\" (UniqueName: \"kubernetes.io/projected/0ab93f8e-1504-47d9-af38-197cbcc54feb-kube-api-access-dqmkl\") pod \"dnsmasq-dns-685444497c-q8tzg\" (UID: \"0ab93f8e-1504-47d9-af38-197cbcc54feb\") " pod="openstack/dnsmasq-dns-685444497c-q8tzg" Jan 
12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.780775 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ab93f8e-1504-47d9-af38-197cbcc54feb-ovsdbserver-sb\") pod \"dnsmasq-dns-685444497c-q8tzg\" (UID: \"0ab93f8e-1504-47d9-af38-197cbcc54feb\") " pod="openstack/dnsmasq-dns-685444497c-q8tzg" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.780813 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0ab93f8e-1504-47d9-af38-197cbcc54feb-dns-swift-storage-0\") pod \"dnsmasq-dns-685444497c-q8tzg\" (UID: \"0ab93f8e-1504-47d9-af38-197cbcc54feb\") " pod="openstack/dnsmasq-dns-685444497c-q8tzg" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.780838 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ab93f8e-1504-47d9-af38-197cbcc54feb-dns-svc\") pod \"dnsmasq-dns-685444497c-q8tzg\" (UID: \"0ab93f8e-1504-47d9-af38-197cbcc54feb\") " pod="openstack/dnsmasq-dns-685444497c-q8tzg" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.780855 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ab93f8e-1504-47d9-af38-197cbcc54feb-config\") pod \"dnsmasq-dns-685444497c-q8tzg\" (UID: \"0ab93f8e-1504-47d9-af38-197cbcc54feb\") " pod="openstack/dnsmasq-dns-685444497c-q8tzg" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.781719 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ab93f8e-1504-47d9-af38-197cbcc54feb-config\") pod \"dnsmasq-dns-685444497c-q8tzg\" (UID: \"0ab93f8e-1504-47d9-af38-197cbcc54feb\") " pod="openstack/dnsmasq-dns-685444497c-q8tzg" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.782855 4580 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ab93f8e-1504-47d9-af38-197cbcc54feb-ovsdbserver-nb\") pod \"dnsmasq-dns-685444497c-q8tzg\" (UID: \"0ab93f8e-1504-47d9-af38-197cbcc54feb\") " pod="openstack/dnsmasq-dns-685444497c-q8tzg" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.783481 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0ab93f8e-1504-47d9-af38-197cbcc54feb-dns-swift-storage-0\") pod \"dnsmasq-dns-685444497c-q8tzg\" (UID: \"0ab93f8e-1504-47d9-af38-197cbcc54feb\") " pod="openstack/dnsmasq-dns-685444497c-q8tzg" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.784009 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ab93f8e-1504-47d9-af38-197cbcc54feb-ovsdbserver-sb\") pod \"dnsmasq-dns-685444497c-q8tzg\" (UID: \"0ab93f8e-1504-47d9-af38-197cbcc54feb\") " pod="openstack/dnsmasq-dns-685444497c-q8tzg" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.787726 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ab93f8e-1504-47d9-af38-197cbcc54feb-dns-svc\") pod \"dnsmasq-dns-685444497c-q8tzg\" (UID: \"0ab93f8e-1504-47d9-af38-197cbcc54feb\") " pod="openstack/dnsmasq-dns-685444497c-q8tzg" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.806961 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqmkl\" (UniqueName: \"kubernetes.io/projected/0ab93f8e-1504-47d9-af38-197cbcc54feb-kube-api-access-dqmkl\") pod \"dnsmasq-dns-685444497c-q8tzg\" (UID: \"0ab93f8e-1504-47d9-af38-197cbcc54feb\") " pod="openstack/dnsmasq-dns-685444497c-q8tzg" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.881664 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-596bbb8b6-5jfvl"] Jan 12 13:21:47 crc 
kubenswrapper[4580]: I0112 13:21:47.882954 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-596bbb8b6-5jfvl" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.885685 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.886160 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5b622df8-141e-468d-8f8d-86622f286566-config\") pod \"neutron-596bbb8b6-5jfvl\" (UID: \"5b622df8-141e-468d-8f8d-86622f286566\") " pod="openstack/neutron-596bbb8b6-5jfvl" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.886211 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5b622df8-141e-468d-8f8d-86622f286566-httpd-config\") pod \"neutron-596bbb8b6-5jfvl\" (UID: \"5b622df8-141e-468d-8f8d-86622f286566\") " pod="openstack/neutron-596bbb8b6-5jfvl" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.886253 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvb7w\" (UniqueName: \"kubernetes.io/projected/5b622df8-141e-468d-8f8d-86622f286566-kube-api-access-qvb7w\") pod \"neutron-596bbb8b6-5jfvl\" (UID: \"5b622df8-141e-468d-8f8d-86622f286566\") " pod="openstack/neutron-596bbb8b6-5jfvl" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.886275 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b622df8-141e-468d-8f8d-86622f286566-ovndb-tls-certs\") pod \"neutron-596bbb8b6-5jfvl\" (UID: \"5b622df8-141e-468d-8f8d-86622f286566\") " pod="openstack/neutron-596bbb8b6-5jfvl" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.886306 4580 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b622df8-141e-468d-8f8d-86622f286566-combined-ca-bundle\") pod \"neutron-596bbb8b6-5jfvl\" (UID: \"5b622df8-141e-468d-8f8d-86622f286566\") " pod="openstack/neutron-596bbb8b6-5jfvl" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.893023 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-4tjb6" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.893280 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.893598 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.898069 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-596bbb8b6-5jfvl"] Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.991261 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvb7w\" (UniqueName: \"kubernetes.io/projected/5b622df8-141e-468d-8f8d-86622f286566-kube-api-access-qvb7w\") pod \"neutron-596bbb8b6-5jfvl\" (UID: \"5b622df8-141e-468d-8f8d-86622f286566\") " pod="openstack/neutron-596bbb8b6-5jfvl" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.991316 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b622df8-141e-468d-8f8d-86622f286566-ovndb-tls-certs\") pod \"neutron-596bbb8b6-5jfvl\" (UID: \"5b622df8-141e-468d-8f8d-86622f286566\") " pod="openstack/neutron-596bbb8b6-5jfvl" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.991410 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b622df8-141e-468d-8f8d-86622f286566-combined-ca-bundle\") 
pod \"neutron-596bbb8b6-5jfvl\" (UID: \"5b622df8-141e-468d-8f8d-86622f286566\") " pod="openstack/neutron-596bbb8b6-5jfvl" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.993282 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5b622df8-141e-468d-8f8d-86622f286566-config\") pod \"neutron-596bbb8b6-5jfvl\" (UID: \"5b622df8-141e-468d-8f8d-86622f286566\") " pod="openstack/neutron-596bbb8b6-5jfvl" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.993333 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5b622df8-141e-468d-8f8d-86622f286566-httpd-config\") pod \"neutron-596bbb8b6-5jfvl\" (UID: \"5b622df8-141e-468d-8f8d-86622f286566\") " pod="openstack/neutron-596bbb8b6-5jfvl" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.997259 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5b622df8-141e-468d-8f8d-86622f286566-httpd-config\") pod \"neutron-596bbb8b6-5jfvl\" (UID: \"5b622df8-141e-468d-8f8d-86622f286566\") " pod="openstack/neutron-596bbb8b6-5jfvl" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.997542 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/5b622df8-141e-468d-8f8d-86622f286566-config\") pod \"neutron-596bbb8b6-5jfvl\" (UID: \"5b622df8-141e-468d-8f8d-86622f286566\") " pod="openstack/neutron-596bbb8b6-5jfvl" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 13:21:47.998122 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b622df8-141e-468d-8f8d-86622f286566-ovndb-tls-certs\") pod \"neutron-596bbb8b6-5jfvl\" (UID: \"5b622df8-141e-468d-8f8d-86622f286566\") " pod="openstack/neutron-596bbb8b6-5jfvl" Jan 12 13:21:47 crc kubenswrapper[4580]: I0112 
13:21:47.998346 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b622df8-141e-468d-8f8d-86622f286566-combined-ca-bundle\") pod \"neutron-596bbb8b6-5jfvl\" (UID: \"5b622df8-141e-468d-8f8d-86622f286566\") " pod="openstack/neutron-596bbb8b6-5jfvl" Jan 12 13:21:48 crc kubenswrapper[4580]: I0112 13:21:48.000052 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-685444497c-q8tzg" Jan 12 13:21:48 crc kubenswrapper[4580]: I0112 13:21:48.009082 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvb7w\" (UniqueName: \"kubernetes.io/projected/5b622df8-141e-468d-8f8d-86622f286566-kube-api-access-qvb7w\") pod \"neutron-596bbb8b6-5jfvl\" (UID: \"5b622df8-141e-468d-8f8d-86622f286566\") " pod="openstack/neutron-596bbb8b6-5jfvl" Jan 12 13:21:48 crc kubenswrapper[4580]: I0112 13:21:48.203340 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dfc89d77-2f659" event={"ID":"15e17da6-3f67-4b22-8c60-02b342fece99","Type":"ContainerDied","Data":"147aaa3f385682ee8ec10ecbbf8ada52bfd76cfd71813de666a005b632aee128"} Jan 12 13:21:48 crc kubenswrapper[4580]: I0112 13:21:48.203381 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74dfc89d77-2f659" Jan 12 13:21:48 crc kubenswrapper[4580]: I0112 13:21:48.203443 4580 scope.go:117] "RemoveContainer" containerID="5b500add603a491beede5a7374e71500271e26875c3dfe833b2695a1c0962fcf" Jan 12 13:21:48 crc kubenswrapper[4580]: I0112 13:21:48.207199 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"54193329-eff6-4ec8-874c-722003442682","Type":"ContainerDied","Data":"f939dbb57c77b22656fec2efe06cf9dcdded5425dff637814248baa2e70e20a8"} Jan 12 13:21:48 crc kubenswrapper[4580]: I0112 13:21:48.207448 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 12 13:21:48 crc kubenswrapper[4580]: I0112 13:21:48.233586 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74dfc89d77-2f659"] Jan 12 13:21:48 crc kubenswrapper[4580]: I0112 13:21:48.237688 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-596bbb8b6-5jfvl" Jan 12 13:21:48 crc kubenswrapper[4580]: I0112 13:21:48.244267 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74dfc89d77-2f659"] Jan 12 13:21:48 crc kubenswrapper[4580]: I0112 13:21:48.252401 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 12 13:21:48 crc kubenswrapper[4580]: I0112 13:21:48.257943 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 12 13:21:48 crc kubenswrapper[4580]: I0112 13:21:48.274980 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 12 13:21:48 crc kubenswrapper[4580]: I0112 13:21:48.277203 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 12 13:21:48 crc kubenswrapper[4580]: I0112 13:21:48.285332 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 12 13:21:48 crc kubenswrapper[4580]: I0112 13:21:48.285898 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 12 13:21:48 crc kubenswrapper[4580]: I0112 13:21:48.286220 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 12 13:21:48 crc kubenswrapper[4580]: I0112 13:21:48.405489 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc934c6e-8cf2-42f0-97bc-22537818cd51-scripts\") pod \"glance-default-external-api-0\" (UID: \"cc934c6e-8cf2-42f0-97bc-22537818cd51\") " pod="openstack/glance-default-external-api-0" Jan 12 13:21:48 crc kubenswrapper[4580]: I0112 13:21:48.405622 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc934c6e-8cf2-42f0-97bc-22537818cd51-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"cc934c6e-8cf2-42f0-97bc-22537818cd51\") " pod="openstack/glance-default-external-api-0" Jan 12 13:21:48 crc kubenswrapper[4580]: I0112 13:21:48.405712 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc934c6e-8cf2-42f0-97bc-22537818cd51-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"cc934c6e-8cf2-42f0-97bc-22537818cd51\") " pod="openstack/glance-default-external-api-0" Jan 12 13:21:48 crc kubenswrapper[4580]: I0112 13:21:48.405784 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8j6z\" 
(UniqueName: \"kubernetes.io/projected/cc934c6e-8cf2-42f0-97bc-22537818cd51-kube-api-access-f8j6z\") pod \"glance-default-external-api-0\" (UID: \"cc934c6e-8cf2-42f0-97bc-22537818cd51\") " pod="openstack/glance-default-external-api-0" Jan 12 13:21:48 crc kubenswrapper[4580]: I0112 13:21:48.405837 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc934c6e-8cf2-42f0-97bc-22537818cd51-config-data\") pod \"glance-default-external-api-0\" (UID: \"cc934c6e-8cf2-42f0-97bc-22537818cd51\") " pod="openstack/glance-default-external-api-0" Jan 12 13:21:48 crc kubenswrapper[4580]: I0112 13:21:48.405956 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"cc934c6e-8cf2-42f0-97bc-22537818cd51\") " pod="openstack/glance-default-external-api-0" Jan 12 13:21:48 crc kubenswrapper[4580]: I0112 13:21:48.405998 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc934c6e-8cf2-42f0-97bc-22537818cd51-logs\") pod \"glance-default-external-api-0\" (UID: \"cc934c6e-8cf2-42f0-97bc-22537818cd51\") " pod="openstack/glance-default-external-api-0" Jan 12 13:21:48 crc kubenswrapper[4580]: I0112 13:21:48.406019 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cc934c6e-8cf2-42f0-97bc-22537818cd51-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"cc934c6e-8cf2-42f0-97bc-22537818cd51\") " pod="openstack/glance-default-external-api-0" Jan 12 13:21:48 crc kubenswrapper[4580]: I0112 13:21:48.507780 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/cc934c6e-8cf2-42f0-97bc-22537818cd51-scripts\") pod \"glance-default-external-api-0\" (UID: \"cc934c6e-8cf2-42f0-97bc-22537818cd51\") " pod="openstack/glance-default-external-api-0" Jan 12 13:21:48 crc kubenswrapper[4580]: I0112 13:21:48.507833 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc934c6e-8cf2-42f0-97bc-22537818cd51-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"cc934c6e-8cf2-42f0-97bc-22537818cd51\") " pod="openstack/glance-default-external-api-0" Jan 12 13:21:48 crc kubenswrapper[4580]: I0112 13:21:48.507871 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc934c6e-8cf2-42f0-97bc-22537818cd51-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"cc934c6e-8cf2-42f0-97bc-22537818cd51\") " pod="openstack/glance-default-external-api-0" Jan 12 13:21:48 crc kubenswrapper[4580]: I0112 13:21:48.507916 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8j6z\" (UniqueName: \"kubernetes.io/projected/cc934c6e-8cf2-42f0-97bc-22537818cd51-kube-api-access-f8j6z\") pod \"glance-default-external-api-0\" (UID: \"cc934c6e-8cf2-42f0-97bc-22537818cd51\") " pod="openstack/glance-default-external-api-0" Jan 12 13:21:48 crc kubenswrapper[4580]: I0112 13:21:48.507955 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc934c6e-8cf2-42f0-97bc-22537818cd51-config-data\") pod \"glance-default-external-api-0\" (UID: \"cc934c6e-8cf2-42f0-97bc-22537818cd51\") " pod="openstack/glance-default-external-api-0" Jan 12 13:21:48 crc kubenswrapper[4580]: I0112 13:21:48.507999 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"cc934c6e-8cf2-42f0-97bc-22537818cd51\") " pod="openstack/glance-default-external-api-0" Jan 12 13:21:48 crc kubenswrapper[4580]: I0112 13:21:48.508021 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc934c6e-8cf2-42f0-97bc-22537818cd51-logs\") pod \"glance-default-external-api-0\" (UID: \"cc934c6e-8cf2-42f0-97bc-22537818cd51\") " pod="openstack/glance-default-external-api-0" Jan 12 13:21:48 crc kubenswrapper[4580]: I0112 13:21:48.508036 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cc934c6e-8cf2-42f0-97bc-22537818cd51-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"cc934c6e-8cf2-42f0-97bc-22537818cd51\") " pod="openstack/glance-default-external-api-0" Jan 12 13:21:48 crc kubenswrapper[4580]: I0112 13:21:48.508677 4580 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"cc934c6e-8cf2-42f0-97bc-22537818cd51\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Jan 12 13:21:48 crc kubenswrapper[4580]: I0112 13:21:48.508784 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cc934c6e-8cf2-42f0-97bc-22537818cd51-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"cc934c6e-8cf2-42f0-97bc-22537818cd51\") " pod="openstack/glance-default-external-api-0" Jan 12 13:21:48 crc kubenswrapper[4580]: I0112 13:21:48.508949 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc934c6e-8cf2-42f0-97bc-22537818cd51-logs\") pod \"glance-default-external-api-0\" (UID: 
\"cc934c6e-8cf2-42f0-97bc-22537818cd51\") " pod="openstack/glance-default-external-api-0" Jan 12 13:21:48 crc kubenswrapper[4580]: I0112 13:21:48.512375 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc934c6e-8cf2-42f0-97bc-22537818cd51-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"cc934c6e-8cf2-42f0-97bc-22537818cd51\") " pod="openstack/glance-default-external-api-0" Jan 12 13:21:48 crc kubenswrapper[4580]: I0112 13:21:48.513603 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc934c6e-8cf2-42f0-97bc-22537818cd51-scripts\") pod \"glance-default-external-api-0\" (UID: \"cc934c6e-8cf2-42f0-97bc-22537818cd51\") " pod="openstack/glance-default-external-api-0" Jan 12 13:21:48 crc kubenswrapper[4580]: I0112 13:21:48.515120 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc934c6e-8cf2-42f0-97bc-22537818cd51-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"cc934c6e-8cf2-42f0-97bc-22537818cd51\") " pod="openstack/glance-default-external-api-0" Jan 12 13:21:48 crc kubenswrapper[4580]: I0112 13:21:48.515672 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc934c6e-8cf2-42f0-97bc-22537818cd51-config-data\") pod \"glance-default-external-api-0\" (UID: \"cc934c6e-8cf2-42f0-97bc-22537818cd51\") " pod="openstack/glance-default-external-api-0" Jan 12 13:21:48 crc kubenswrapper[4580]: I0112 13:21:48.523449 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8j6z\" (UniqueName: \"kubernetes.io/projected/cc934c6e-8cf2-42f0-97bc-22537818cd51-kube-api-access-f8j6z\") pod \"glance-default-external-api-0\" (UID: \"cc934c6e-8cf2-42f0-97bc-22537818cd51\") " pod="openstack/glance-default-external-api-0" 
Jan 12 13:21:48 crc kubenswrapper[4580]: I0112 13:21:48.531812 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"cc934c6e-8cf2-42f0-97bc-22537818cd51\") " pod="openstack/glance-default-external-api-0" Jan 12 13:21:48 crc kubenswrapper[4580]: I0112 13:21:48.605414 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 12 13:21:48 crc kubenswrapper[4580]: W0112 13:21:48.732461 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a68b6dc_1793_49ee_b68a_ded144ce21d9.slice/crio-693ff1c56c1740c2f6e7baa61c4fd806bd378524f7c1b0815c148857568f1f28 WatchSource:0}: Error finding container 693ff1c56c1740c2f6e7baa61c4fd806bd378524f7c1b0815c148857568f1f28: Status 404 returned error can't find the container with id 693ff1c56c1740c2f6e7baa61c4fd806bd378524f7c1b0815c148857568f1f28 Jan 12 13:21:48 crc kubenswrapper[4580]: E0112 13:21:48.743254 4580 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49" Jan 12 13:21:48 crc kubenswrapper[4580]: E0112 13:21:48.743390 4580 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8zx8j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-mmlfs_openstack(702612c1-966a-4293-b0dc-05901a325794): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 12 13:21:48 crc kubenswrapper[4580]: E0112 13:21:48.744556 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-mmlfs" podUID="702612c1-966a-4293-b0dc-05901a325794" Jan 12 13:21:48 crc kubenswrapper[4580]: I0112 13:21:48.751879 4580 scope.go:117] "RemoveContainer" containerID="f4995f0142e35d309436d0daf5bb31ac0a9ef9420cea92ec5a5b2a139d14dc4c" Jan 12 13:21:48 crc kubenswrapper[4580]: I0112 13:21:48.890418 4580 scope.go:117] "RemoveContainer" containerID="95baeb6719488da43d38d7f7b507cee4381d1564f59052036fa580746fd5cfb9" Jan 12 13:21:49 crc kubenswrapper[4580]: I0112 13:21:49.046604 4580 scope.go:117] "RemoveContainer" containerID="6da1d2132a3b5bf5c9d1f024e44941b5292ddb852509277faaf801f42ab4220f" Jan 12 13:21:49 crc kubenswrapper[4580]: I0112 13:21:49.221745 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cj96v"] Jan 12 13:21:49 crc kubenswrapper[4580]: I0112 13:21:49.228074 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2a68b6dc-1793-49ee-b68a-ded144ce21d9","Type":"ContainerStarted","Data":"693ff1c56c1740c2f6e7baa61c4fd806bd378524f7c1b0815c148857568f1f28"} Jan 12 13:21:49 crc kubenswrapper[4580]: I0112 13:21:49.230991 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-8699b457dd-z2fkt"] Jan 12 13:21:49 crc kubenswrapper[4580]: I0112 13:21:49.234836 4580 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jtmvw" event={"ID":"9a79b6ea-48d6-4df6-9a7e-dbfe246edc74","Type":"ContainerStarted","Data":"84b2273d9e2a668302d2b30f68341de15e75b7c74fba0cc4eb8e482c80b94806"} Jan 12 13:21:49 crc kubenswrapper[4580]: W0112 13:21:49.257838 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11060a9d_34a1_4ac1_baa8_a478351504f3.slice/crio-d1e56f116e8efc9b7ddc72285a618f56bcd9f2b98d52ce76fcb37570d6f6f4bf WatchSource:0}: Error finding container d1e56f116e8efc9b7ddc72285a618f56bcd9f2b98d52ce76fcb37570d6f6f4bf: Status 404 returned error can't find the container with id d1e56f116e8efc9b7ddc72285a618f56bcd9f2b98d52ce76fcb37570d6f6f4bf Jan 12 13:21:49 crc kubenswrapper[4580]: I0112 13:21:49.266427 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jtmvw" podStartSLOduration=5.262226138 podStartE2EDuration="27.266407021s" podCreationTimestamp="2026-01-12 13:21:22 +0000 UTC" firstStartedPulling="2026-01-12 13:21:26.777052459 +0000 UTC m=+885.821271149" lastFinishedPulling="2026-01-12 13:21:48.781233342 +0000 UTC m=+907.825452032" observedRunningTime="2026-01-12 13:21:49.256443041 +0000 UTC m=+908.300661731" watchObservedRunningTime="2026-01-12 13:21:49.266407021 +0000 UTC m=+908.310625712" Jan 12 13:21:49 crc kubenswrapper[4580]: E0112 13:21:49.288844 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49\\\"\"" pod="openstack/cinder-db-sync-mmlfs" podUID="702612c1-966a-4293-b0dc-05901a325794" Jan 12 13:21:49 crc kubenswrapper[4580]: I0112 13:21:49.328489 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="15e17da6-3f67-4b22-8c60-02b342fece99" path="/var/lib/kubelet/pods/15e17da6-3f67-4b22-8c60-02b342fece99/volumes" Jan 12 13:21:49 crc kubenswrapper[4580]: I0112 13:21:49.329659 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54193329-eff6-4ec8-874c-722003442682" path="/var/lib/kubelet/pods/54193329-eff6-4ec8-874c-722003442682/volumes" Jan 12 13:21:49 crc kubenswrapper[4580]: I0112 13:21:49.365552 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-8lnk9"] Jan 12 13:21:49 crc kubenswrapper[4580]: I0112 13:21:49.377377 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-54b765ff94-66rkz"] Jan 12 13:21:49 crc kubenswrapper[4580]: I0112 13:21:49.396126 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 12 13:21:49 crc kubenswrapper[4580]: I0112 13:21:49.566698 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-685444497c-q8tzg"] Jan 12 13:21:49 crc kubenswrapper[4580]: W0112 13:21:49.578807 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ab93f8e_1504_47d9_af38_197cbcc54feb.slice/crio-c59dd887a0142c00616d2cc514e1d7393d73aeefaea776d73f2c7a09f5026649 WatchSource:0}: Error finding container c59dd887a0142c00616d2cc514e1d7393d73aeefaea776d73f2c7a09f5026649: Status 404 returned error can't find the container with id c59dd887a0142c00616d2cc514e1d7393d73aeefaea776d73f2c7a09f5026649 Jan 12 13:21:49 crc kubenswrapper[4580]: I0112 13:21:49.649861 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-596bbb8b6-5jfvl"] Jan 12 13:21:50 crc kubenswrapper[4580]: I0112 13:21:50.189612 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5c66d9fb7c-tgbgc"] Jan 12 13:21:50 crc kubenswrapper[4580]: I0112 13:21:50.207904 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5c66d9fb7c-tgbgc" Jan 12 13:21:50 crc kubenswrapper[4580]: I0112 13:21:50.212628 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 12 13:21:50 crc kubenswrapper[4580]: I0112 13:21:50.212917 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 12 13:21:50 crc kubenswrapper[4580]: I0112 13:21:50.232191 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5c66d9fb7c-tgbgc"] Jan 12 13:21:50 crc kubenswrapper[4580]: I0112 13:21:50.256129 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfb8x\" (UniqueName: \"kubernetes.io/projected/d56cc382-ea8e-4cea-829a-80335a2b71c9-kube-api-access-hfb8x\") pod \"neutron-5c66d9fb7c-tgbgc\" (UID: \"d56cc382-ea8e-4cea-829a-80335a2b71c9\") " pod="openstack/neutron-5c66d9fb7c-tgbgc" Jan 12 13:21:50 crc kubenswrapper[4580]: I0112 13:21:50.256173 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d56cc382-ea8e-4cea-829a-80335a2b71c9-config\") pod \"neutron-5c66d9fb7c-tgbgc\" (UID: \"d56cc382-ea8e-4cea-829a-80335a2b71c9\") " pod="openstack/neutron-5c66d9fb7c-tgbgc" Jan 12 13:21:50 crc kubenswrapper[4580]: I0112 13:21:50.256205 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d56cc382-ea8e-4cea-829a-80335a2b71c9-public-tls-certs\") pod \"neutron-5c66d9fb7c-tgbgc\" (UID: \"d56cc382-ea8e-4cea-829a-80335a2b71c9\") " pod="openstack/neutron-5c66d9fb7c-tgbgc" Jan 12 13:21:50 crc kubenswrapper[4580]: I0112 13:21:50.256288 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/d56cc382-ea8e-4cea-829a-80335a2b71c9-httpd-config\") pod \"neutron-5c66d9fb7c-tgbgc\" (UID: \"d56cc382-ea8e-4cea-829a-80335a2b71c9\") " pod="openstack/neutron-5c66d9fb7c-tgbgc" Jan 12 13:21:50 crc kubenswrapper[4580]: I0112 13:21:50.256321 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d56cc382-ea8e-4cea-829a-80335a2b71c9-internal-tls-certs\") pod \"neutron-5c66d9fb7c-tgbgc\" (UID: \"d56cc382-ea8e-4cea-829a-80335a2b71c9\") " pod="openstack/neutron-5c66d9fb7c-tgbgc" Jan 12 13:21:50 crc kubenswrapper[4580]: I0112 13:21:50.256415 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d56cc382-ea8e-4cea-829a-80335a2b71c9-combined-ca-bundle\") pod \"neutron-5c66d9fb7c-tgbgc\" (UID: \"d56cc382-ea8e-4cea-829a-80335a2b71c9\") " pod="openstack/neutron-5c66d9fb7c-tgbgc" Jan 12 13:21:50 crc kubenswrapper[4580]: I0112 13:21:50.257671 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d56cc382-ea8e-4cea-829a-80335a2b71c9-ovndb-tls-certs\") pod \"neutron-5c66d9fb7c-tgbgc\" (UID: \"d56cc382-ea8e-4cea-829a-80335a2b71c9\") " pod="openstack/neutron-5c66d9fb7c-tgbgc" Jan 12 13:21:50 crc kubenswrapper[4580]: I0112 13:21:50.314009 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-596bbb8b6-5jfvl" event={"ID":"5b622df8-141e-468d-8f8d-86622f286566","Type":"ContainerStarted","Data":"9446eeab8602add5285541e110e9ad9ad5950a9a181dfdf72a56e834a1cb735a"} Jan 12 13:21:50 crc kubenswrapper[4580]: I0112 13:21:50.316398 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-bb57d5f45-mb7xb" 
event={"ID":"82d9d66d-ff92-4164-96a9-c82a919cce00","Type":"ContainerStarted","Data":"4bf51608ffbb6382f8a1657a6350aa5eb00895dc10e9f095ae18ca64dc498fdf"} Jan 12 13:21:50 crc kubenswrapper[4580]: I0112 13:21:50.317727 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-54b765ff94-66rkz" event={"ID":"11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2","Type":"ContainerStarted","Data":"97250f27354293c75ca6f9dd47aebe6786d6324879bc680e419e57c35f618d5f"} Jan 12 13:21:50 crc kubenswrapper[4580]: I0112 13:21:50.319409 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8699b457dd-z2fkt" event={"ID":"92d059e4-ff2b-4ecc-ae14-6367d54e720f","Type":"ContainerStarted","Data":"2ab89421eb4b62d6582d7caff4695c0615165e0023484167b96f5e3f8a849bb6"} Jan 12 13:21:50 crc kubenswrapper[4580]: I0112 13:21:50.319430 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8699b457dd-z2fkt" event={"ID":"92d059e4-ff2b-4ecc-ae14-6367d54e720f","Type":"ContainerStarted","Data":"b9d7775fae93d495ef2f625a342c2e21043980a9893bda20c93591e655528e6d"} Jan 12 13:21:50 crc kubenswrapper[4580]: I0112 13:21:50.320369 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-s9pd9" event={"ID":"edd41e34-6733-4a77-b99b-3ab0895b124a","Type":"ContainerStarted","Data":"cfe69420301424bb8260648e61122d02674c853997aaf692f028749778fb265c"} Jan 12 13:21:50 crc kubenswrapper[4580]: I0112 13:21:50.323957 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74f4fd9547-lhpct" event={"ID":"fca18973-3724-49c5-b8c4-cf6beb66c288","Type":"ContainerStarted","Data":"9b2735e034dc026b388dcd0a7bf7297e976eda33e67baeccb45d4c849b4c2ee0"} Jan 12 13:21:50 crc kubenswrapper[4580]: I0112 13:21:50.323982 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74f4fd9547-lhpct" 
event={"ID":"fca18973-3724-49c5-b8c4-cf6beb66c288","Type":"ContainerStarted","Data":"236053ae734a01a9f2ac22c12a153508cd784597dbf97b70fdc6e89668916550"} Jan 12 13:21:50 crc kubenswrapper[4580]: I0112 13:21:50.324074 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-74f4fd9547-lhpct" podUID="fca18973-3724-49c5-b8c4-cf6beb66c288" containerName="horizon-log" containerID="cri-o://9b2735e034dc026b388dcd0a7bf7297e976eda33e67baeccb45d4c849b4c2ee0" gracePeriod=30 Jan 12 13:21:50 crc kubenswrapper[4580]: I0112 13:21:50.324325 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-74f4fd9547-lhpct" podUID="fca18973-3724-49c5-b8c4-cf6beb66c288" containerName="horizon" containerID="cri-o://236053ae734a01a9f2ac22c12a153508cd784597dbf97b70fdc6e89668916550" gracePeriod=30 Jan 12 13:21:50 crc kubenswrapper[4580]: I0112 13:21:50.338893 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 12 13:21:50 crc kubenswrapper[4580]: I0112 13:21:50.340445 4580 generic.go:334] "Generic (PLEG): container finished" podID="11060a9d-34a1-4ac1-baa8-a478351504f3" containerID="4823f599e31769370c48518b573ec11a8a0ef8f49bd0bcbc467079676a9e76c9" exitCode=0 Jan 12 13:21:50 crc kubenswrapper[4580]: I0112 13:21:50.340791 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cj96v" event={"ID":"11060a9d-34a1-4ac1-baa8-a478351504f3","Type":"ContainerDied","Data":"4823f599e31769370c48518b573ec11a8a0ef8f49bd0bcbc467079676a9e76c9"} Jan 12 13:21:50 crc kubenswrapper[4580]: I0112 13:21:50.343501 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cj96v" event={"ID":"11060a9d-34a1-4ac1-baa8-a478351504f3","Type":"ContainerStarted","Data":"d1e56f116e8efc9b7ddc72285a618f56bcd9f2b98d52ce76fcb37570d6f6f4bf"} Jan 12 13:21:50 crc kubenswrapper[4580]: I0112 13:21:50.348562 4580 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2a68b6dc-1793-49ee-b68a-ded144ce21d9","Type":"ContainerStarted","Data":"18db10ea58cf42114de5c76e0a217d6a0cd0729c13b79688f86776a27f2c212c"} Jan 12 13:21:50 crc kubenswrapper[4580]: I0112 13:21:50.350412 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8lnk9" event={"ID":"7d306f46-ea22-4b07-a18c-5134b125fa49","Type":"ContainerStarted","Data":"506c545c273d5e05ea870f704fbe9f16c7ad2d66f8b25ee56cfd9fdb08f16a4b"} Jan 12 13:21:50 crc kubenswrapper[4580]: I0112 13:21:50.352192 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6868cd5fd5-ct7dn" event={"ID":"db7a06f9-1a77-4a21-ac05-0c73655fa8d0","Type":"ContainerStarted","Data":"d229c75be6283d85033aa466caeb340478b38a876d8c032f1e88e609bb362a9a"} Jan 12 13:21:50 crc kubenswrapper[4580]: I0112 13:21:50.352217 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6868cd5fd5-ct7dn" event={"ID":"db7a06f9-1a77-4a21-ac05-0c73655fa8d0","Type":"ContainerStarted","Data":"97ec417f47abd1751148fc4f57c93a9cb3f0ceca6a44d52a0efd7ffb9cca693a"} Jan 12 13:21:50 crc kubenswrapper[4580]: I0112 13:21:50.352282 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6868cd5fd5-ct7dn" podUID="db7a06f9-1a77-4a21-ac05-0c73655fa8d0" containerName="horizon-log" containerID="cri-o://97ec417f47abd1751148fc4f57c93a9cb3f0ceca6a44d52a0efd7ffb9cca693a" gracePeriod=30 Jan 12 13:21:50 crc kubenswrapper[4580]: I0112 13:21:50.352308 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6868cd5fd5-ct7dn" podUID="db7a06f9-1a77-4a21-ac05-0c73655fa8d0" containerName="horizon" containerID="cri-o://d229c75be6283d85033aa466caeb340478b38a876d8c032f1e88e609bb362a9a" gracePeriod=30 Jan 12 13:21:50 crc kubenswrapper[4580]: I0112 13:21:50.355705 4580 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/placement-db-sync-s9pd9" podStartSLOduration=7.481694693 podStartE2EDuration="28.355688856s" podCreationTimestamp="2026-01-12 13:21:22 +0000 UTC" firstStartedPulling="2026-01-12 13:21:26.29876461 +0000 UTC m=+885.342983301" lastFinishedPulling="2026-01-12 13:21:47.172758773 +0000 UTC m=+906.216977464" observedRunningTime="2026-01-12 13:21:50.345231979 +0000 UTC m=+909.389450669" watchObservedRunningTime="2026-01-12 13:21:50.355688856 +0000 UTC m=+909.399907546" Jan 12 13:21:50 crc kubenswrapper[4580]: I0112 13:21:50.358719 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685444497c-q8tzg" event={"ID":"0ab93f8e-1504-47d9-af38-197cbcc54feb","Type":"ContainerStarted","Data":"c59dd887a0142c00616d2cc514e1d7393d73aeefaea776d73f2c7a09f5026649"} Jan 12 13:21:50 crc kubenswrapper[4580]: I0112 13:21:50.359701 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d56cc382-ea8e-4cea-829a-80335a2b71c9-httpd-config\") pod \"neutron-5c66d9fb7c-tgbgc\" (UID: \"d56cc382-ea8e-4cea-829a-80335a2b71c9\") " pod="openstack/neutron-5c66d9fb7c-tgbgc" Jan 12 13:21:50 crc kubenswrapper[4580]: I0112 13:21:50.359754 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d56cc382-ea8e-4cea-829a-80335a2b71c9-internal-tls-certs\") pod \"neutron-5c66d9fb7c-tgbgc\" (UID: \"d56cc382-ea8e-4cea-829a-80335a2b71c9\") " pod="openstack/neutron-5c66d9fb7c-tgbgc" Jan 12 13:21:50 crc kubenswrapper[4580]: I0112 13:21:50.362787 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d56cc382-ea8e-4cea-829a-80335a2b71c9-combined-ca-bundle\") pod \"neutron-5c66d9fb7c-tgbgc\" (UID: \"d56cc382-ea8e-4cea-829a-80335a2b71c9\") " pod="openstack/neutron-5c66d9fb7c-tgbgc" Jan 12 13:21:50 crc 
kubenswrapper[4580]: I0112 13:21:50.362882 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d56cc382-ea8e-4cea-829a-80335a2b71c9-ovndb-tls-certs\") pod \"neutron-5c66d9fb7c-tgbgc\" (UID: \"d56cc382-ea8e-4cea-829a-80335a2b71c9\") " pod="openstack/neutron-5c66d9fb7c-tgbgc" Jan 12 13:21:50 crc kubenswrapper[4580]: I0112 13:21:50.362951 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d56cc382-ea8e-4cea-829a-80335a2b71c9-config\") pod \"neutron-5c66d9fb7c-tgbgc\" (UID: \"d56cc382-ea8e-4cea-829a-80335a2b71c9\") " pod="openstack/neutron-5c66d9fb7c-tgbgc" Jan 12 13:21:50 crc kubenswrapper[4580]: I0112 13:21:50.362972 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfb8x\" (UniqueName: \"kubernetes.io/projected/d56cc382-ea8e-4cea-829a-80335a2b71c9-kube-api-access-hfb8x\") pod \"neutron-5c66d9fb7c-tgbgc\" (UID: \"d56cc382-ea8e-4cea-829a-80335a2b71c9\") " pod="openstack/neutron-5c66d9fb7c-tgbgc" Jan 12 13:21:50 crc kubenswrapper[4580]: I0112 13:21:50.362997 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d56cc382-ea8e-4cea-829a-80335a2b71c9-public-tls-certs\") pod \"neutron-5c66d9fb7c-tgbgc\" (UID: \"d56cc382-ea8e-4cea-829a-80335a2b71c9\") " pod="openstack/neutron-5c66d9fb7c-tgbgc" Jan 12 13:21:50 crc kubenswrapper[4580]: I0112 13:21:50.368199 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d56cc382-ea8e-4cea-829a-80335a2b71c9-public-tls-certs\") pod \"neutron-5c66d9fb7c-tgbgc\" (UID: \"d56cc382-ea8e-4cea-829a-80335a2b71c9\") " pod="openstack/neutron-5c66d9fb7c-tgbgc" Jan 12 13:21:50 crc kubenswrapper[4580]: I0112 13:21:50.368903 4580 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/horizon-74f4fd9547-lhpct" podStartSLOduration=6.07309665 podStartE2EDuration="26.36889227s" podCreationTimestamp="2026-01-12 13:21:24 +0000 UTC" firstStartedPulling="2026-01-12 13:21:26.892272518 +0000 UTC m=+885.936491208" lastFinishedPulling="2026-01-12 13:21:47.188068138 +0000 UTC m=+906.232286828" observedRunningTime="2026-01-12 13:21:50.363154749 +0000 UTC m=+909.407373438" watchObservedRunningTime="2026-01-12 13:21:50.36889227 +0000 UTC m=+909.413110960" Jan 12 13:21:50 crc kubenswrapper[4580]: I0112 13:21:50.373260 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d56cc382-ea8e-4cea-829a-80335a2b71c9-httpd-config\") pod \"neutron-5c66d9fb7c-tgbgc\" (UID: \"d56cc382-ea8e-4cea-829a-80335a2b71c9\") " pod="openstack/neutron-5c66d9fb7c-tgbgc" Jan 12 13:21:50 crc kubenswrapper[4580]: I0112 13:21:50.376580 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d56cc382-ea8e-4cea-829a-80335a2b71c9-combined-ca-bundle\") pod \"neutron-5c66d9fb7c-tgbgc\" (UID: \"d56cc382-ea8e-4cea-829a-80335a2b71c9\") " pod="openstack/neutron-5c66d9fb7c-tgbgc" Jan 12 13:21:50 crc kubenswrapper[4580]: I0112 13:21:50.377038 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d56cc382-ea8e-4cea-829a-80335a2b71c9-internal-tls-certs\") pod \"neutron-5c66d9fb7c-tgbgc\" (UID: \"d56cc382-ea8e-4cea-829a-80335a2b71c9\") " pod="openstack/neutron-5c66d9fb7c-tgbgc" Jan 12 13:21:50 crc kubenswrapper[4580]: I0112 13:21:50.385199 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d56cc382-ea8e-4cea-829a-80335a2b71c9-ovndb-tls-certs\") pod \"neutron-5c66d9fb7c-tgbgc\" (UID: \"d56cc382-ea8e-4cea-829a-80335a2b71c9\") " pod="openstack/neutron-5c66d9fb7c-tgbgc" Jan 12 13:21:50 
crc kubenswrapper[4580]: I0112 13:21:50.385410 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d56cc382-ea8e-4cea-829a-80335a2b71c9-config\") pod \"neutron-5c66d9fb7c-tgbgc\" (UID: \"d56cc382-ea8e-4cea-829a-80335a2b71c9\") " pod="openstack/neutron-5c66d9fb7c-tgbgc" Jan 12 13:21:50 crc kubenswrapper[4580]: I0112 13:21:50.400741 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfb8x\" (UniqueName: \"kubernetes.io/projected/d56cc382-ea8e-4cea-829a-80335a2b71c9-kube-api-access-hfb8x\") pod \"neutron-5c66d9fb7c-tgbgc\" (UID: \"d56cc382-ea8e-4cea-829a-80335a2b71c9\") " pod="openstack/neutron-5c66d9fb7c-tgbgc" Jan 12 13:21:50 crc kubenswrapper[4580]: I0112 13:21:50.417891 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6868cd5fd5-ct7dn" podStartSLOduration=5.968394292 podStartE2EDuration="28.417876451s" podCreationTimestamp="2026-01-12 13:21:22 +0000 UTC" firstStartedPulling="2026-01-12 13:21:26.292013523 +0000 UTC m=+885.336232213" lastFinishedPulling="2026-01-12 13:21:48.741495682 +0000 UTC m=+907.785714372" observedRunningTime="2026-01-12 13:21:50.405543664 +0000 UTC m=+909.449762355" watchObservedRunningTime="2026-01-12 13:21:50.417876451 +0000 UTC m=+909.462095141" Jan 12 13:21:50 crc kubenswrapper[4580]: I0112 13:21:50.542008 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5c66d9fb7c-tgbgc" Jan 12 13:21:51 crc kubenswrapper[4580]: I0112 13:21:51.271476 4580 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74dfc89d77-2f659" podUID="15e17da6-3f67-4b22-8c60-02b342fece99" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.122:5353: i/o timeout" Jan 12 13:21:51 crc kubenswrapper[4580]: I0112 13:21:51.383264 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5c66d9fb7c-tgbgc"] Jan 12 13:21:51 crc kubenswrapper[4580]: I0112 13:21:51.405488 4580 generic.go:334] "Generic (PLEG): container finished" podID="0ab93f8e-1504-47d9-af38-197cbcc54feb" containerID="674b20e77d0a2fe8c0f4d18729f80eb7e26b739a83098c781c59d507bbfc4761" exitCode=0 Jan 12 13:21:51 crc kubenswrapper[4580]: I0112 13:21:51.405961 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685444497c-q8tzg" event={"ID":"0ab93f8e-1504-47d9-af38-197cbcc54feb","Type":"ContainerDied","Data":"674b20e77d0a2fe8c0f4d18729f80eb7e26b739a83098c781c59d507bbfc4761"} Jan 12 13:21:51 crc kubenswrapper[4580]: I0112 13:21:51.411854 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cc934c6e-8cf2-42f0-97bc-22537818cd51","Type":"ContainerStarted","Data":"70996c4ed66f5d822afbb8fed848e56cafccf9db4b9f11f10012d1b839514142"} Jan 12 13:21:51 crc kubenswrapper[4580]: I0112 13:21:51.411910 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cc934c6e-8cf2-42f0-97bc-22537818cd51","Type":"ContainerStarted","Data":"870e04df1cb487814ababdaae2f1c1f099c23263c78c9dc6f73c5191466248e5"} Jan 12 13:21:51 crc kubenswrapper[4580]: I0112 13:21:51.426381 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"2a68b6dc-1793-49ee-b68a-ded144ce21d9","Type":"ContainerStarted","Data":"c0241a82a2a037d07ccda36079d708daa510e2cbe5494cf93b26ae8dda225dce"} Jan 12 13:21:51 crc kubenswrapper[4580]: I0112 13:21:51.426556 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="2a68b6dc-1793-49ee-b68a-ded144ce21d9" containerName="glance-log" containerID="cri-o://18db10ea58cf42114de5c76e0a217d6a0cd0729c13b79688f86776a27f2c212c" gracePeriod=30 Jan 12 13:21:51 crc kubenswrapper[4580]: I0112 13:21:51.426987 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="2a68b6dc-1793-49ee-b68a-ded144ce21d9" containerName="glance-httpd" containerID="cri-o://c0241a82a2a037d07ccda36079d708daa510e2cbe5494cf93b26ae8dda225dce" gracePeriod=30 Jan 12 13:21:51 crc kubenswrapper[4580]: I0112 13:21:51.474472 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-54b765ff94-66rkz" event={"ID":"11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2","Type":"ContainerStarted","Data":"59195d2de454fd2603098fbc1fcc86559032303c06579ca821d3e24b04357260"} Jan 12 13:21:51 crc kubenswrapper[4580]: I0112 13:21:51.474726 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-54b765ff94-66rkz" event={"ID":"11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2","Type":"ContainerStarted","Data":"0617e22c043b8c6ed1bfde2bc79332362cf2b73628cab3f3c05f7003eb945ac7"} Jan 12 13:21:51 crc kubenswrapper[4580]: I0112 13:21:51.482909 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"84c1ab4b-8921-4f4a-88dd-adf6e224d62c","Type":"ContainerStarted","Data":"4a997235d30915c6cf33338ecfa907be77d15bc4231f867f5cbef7736c3cfdc0"} Jan 12 13:21:51 crc kubenswrapper[4580]: I0112 13:21:51.487662 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-596bbb8b6-5jfvl" 
event={"ID":"5b622df8-141e-468d-8f8d-86622f286566","Type":"ContainerStarted","Data":"cfd0f8fb73bc0f8bfbda09f6ff39be45a13c3eee5aa8ecf09832d7a2b96cdaa7"} Jan 12 13:21:51 crc kubenswrapper[4580]: I0112 13:21:51.487706 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-596bbb8b6-5jfvl" event={"ID":"5b622df8-141e-468d-8f8d-86622f286566","Type":"ContainerStarted","Data":"9d8b606903131cb31075845e04f2a766ad2985affb6777d5c929dc3513c2d8bc"} Jan 12 13:21:51 crc kubenswrapper[4580]: I0112 13:21:51.489293 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-596bbb8b6-5jfvl" Jan 12 13:21:51 crc kubenswrapper[4580]: I0112 13:21:51.498121 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-bb57d5f45-mb7xb" event={"ID":"82d9d66d-ff92-4164-96a9-c82a919cce00","Type":"ContainerStarted","Data":"9e39f1866a90ff5e533ca3990331bc115d46656225c4f17c740b6e3f46bf2f96"} Jan 12 13:21:51 crc kubenswrapper[4580]: I0112 13:21:51.498233 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-bb57d5f45-mb7xb" podUID="82d9d66d-ff92-4164-96a9-c82a919cce00" containerName="horizon-log" containerID="cri-o://4bf51608ffbb6382f8a1657a6350aa5eb00895dc10e9f095ae18ca64dc498fdf" gracePeriod=30 Jan 12 13:21:51 crc kubenswrapper[4580]: I0112 13:21:51.498538 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-bb57d5f45-mb7xb" podUID="82d9d66d-ff92-4164-96a9-c82a919cce00" containerName="horizon" containerID="cri-o://9e39f1866a90ff5e533ca3990331bc115d46656225c4f17c740b6e3f46bf2f96" gracePeriod=30 Jan 12 13:21:51 crc kubenswrapper[4580]: I0112 13:21:51.513471 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8699b457dd-z2fkt" event={"ID":"92d059e4-ff2b-4ecc-ae14-6367d54e720f","Type":"ContainerStarted","Data":"ace99e83f1dcbd310854c9b75c7399ecdcc84c1f6f95ced19ecdeebe5aa2acc7"} Jan 12 13:21:51 crc kubenswrapper[4580]: 
I0112 13:21:51.530181 4580 generic.go:334] "Generic (PLEG): container finished" podID="edd41e34-6733-4a77-b99b-3ab0895b124a" containerID="cfe69420301424bb8260648e61122d02674c853997aaf692f028749778fb265c" exitCode=0 Jan 12 13:21:51 crc kubenswrapper[4580]: I0112 13:21:51.530278 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-s9pd9" event={"ID":"edd41e34-6733-4a77-b99b-3ab0895b124a","Type":"ContainerDied","Data":"cfe69420301424bb8260648e61122d02674c853997aaf692f028749778fb265c"} Jan 12 13:21:51 crc kubenswrapper[4580]: I0112 13:21:51.543588 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-54b765ff94-66rkz" Jan 12 13:21:51 crc kubenswrapper[4580]: I0112 13:21:51.543765 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-54b765ff94-66rkz" Jan 12 13:21:51 crc kubenswrapper[4580]: I0112 13:21:51.549567 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8lnk9" event={"ID":"7d306f46-ea22-4b07-a18c-5134b125fa49","Type":"ContainerStarted","Data":"2acfe507773f35d2c2bfcd63687b7c20dec48bb7a0b779a36b7881a1fd8cd444"} Jan 12 13:21:51 crc kubenswrapper[4580]: I0112 13:21:51.561656 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-54b765ff94-66rkz" podStartSLOduration=20.561636771 podStartE2EDuration="20.561636771s" podCreationTimestamp="2026-01-12 13:21:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:21:51.522685139 +0000 UTC m=+910.566903830" watchObservedRunningTime="2026-01-12 13:21:51.561636771 +0000 UTC m=+910.605855460" Jan 12 13:21:51 crc kubenswrapper[4580]: I0112 13:21:51.624160 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=22.624132495 
podStartE2EDuration="22.624132495s" podCreationTimestamp="2026-01-12 13:21:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:21:51.542247252 +0000 UTC m=+910.586465942" watchObservedRunningTime="2026-01-12 13:21:51.624132495 +0000 UTC m=+910.668351185"
Jan 12 13:21:51 crc kubenswrapper[4580]: I0112 13:21:51.641907 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-8699b457dd-z2fkt"
Jan 12 13:21:51 crc kubenswrapper[4580]: I0112 13:21:51.641989 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-8699b457dd-z2fkt"
Jan 12 13:21:51 crc kubenswrapper[4580]: I0112 13:21:51.664922 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zfsd9"]
Jan 12 13:21:51 crc kubenswrapper[4580]: I0112 13:21:51.667162 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zfsd9"
Jan 12 13:21:51 crc kubenswrapper[4580]: I0112 13:21:51.678387 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zfsd9"]
Jan 12 13:21:51 crc kubenswrapper[4580]: I0112 13:21:51.693403 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-bb57d5f45-mb7xb" podStartSLOduration=7.178223777 podStartE2EDuration="29.693385598s" podCreationTimestamp="2026-01-12 13:21:22 +0000 UTC" firstStartedPulling="2026-01-12 13:21:26.28764677 +0000 UTC m=+885.331865460" lastFinishedPulling="2026-01-12 13:21:48.802808591 +0000 UTC m=+907.847027281" observedRunningTime="2026-01-12 13:21:51.596235706 +0000 UTC m=+910.640454406" watchObservedRunningTime="2026-01-12 13:21:51.693385598 +0000 UTC m=+910.737604288"
Jan 12 13:21:51 crc kubenswrapper[4580]: I0112 13:21:51.706416 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-596bbb8b6-5jfvl" podStartSLOduration=4.706404113 podStartE2EDuration="4.706404113s" podCreationTimestamp="2026-01-12 13:21:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:21:51.619462301 +0000 UTC m=+910.663680991" watchObservedRunningTime="2026-01-12 13:21:51.706404113 +0000 UTC m=+910.750622804"
Jan 12 13:21:51 crc kubenswrapper[4580]: I0112 13:21:51.712394 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-8699b457dd-z2fkt" podStartSLOduration=20.712386456 podStartE2EDuration="20.712386456s" podCreationTimestamp="2026-01-12 13:21:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:21:51.637403865 +0000 UTC m=+910.681622565" watchObservedRunningTime="2026-01-12 13:21:51.712386456 +0000 UTC m=+910.756605146"
Jan 12 13:21:51 crc kubenswrapper[4580]: I0112 13:21:51.715993 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-8lnk9" podStartSLOduration=11.715986306 podStartE2EDuration="11.715986306s" podCreationTimestamp="2026-01-12 13:21:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:21:51.665289694 +0000 UTC m=+910.709508385" watchObservedRunningTime="2026-01-12 13:21:51.715986306 +0000 UTC m=+910.760204996"
Jan 12 13:21:51 crc kubenswrapper[4580]: I0112 13:21:51.832163 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70ecad68-8604-45ba-84e5-4a0aa1d7464a-catalog-content\") pod \"redhat-operators-zfsd9\" (UID: \"70ecad68-8604-45ba-84e5-4a0aa1d7464a\") " pod="openshift-marketplace/redhat-operators-zfsd9"
Jan 12 13:21:51 crc kubenswrapper[4580]: I0112 13:21:51.832475 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssp42\" (UniqueName: \"kubernetes.io/projected/70ecad68-8604-45ba-84e5-4a0aa1d7464a-kube-api-access-ssp42\") pod \"redhat-operators-zfsd9\" (UID: \"70ecad68-8604-45ba-84e5-4a0aa1d7464a\") " pod="openshift-marketplace/redhat-operators-zfsd9"
Jan 12 13:21:51 crc kubenswrapper[4580]: I0112 13:21:51.832547 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70ecad68-8604-45ba-84e5-4a0aa1d7464a-utilities\") pod \"redhat-operators-zfsd9\" (UID: \"70ecad68-8604-45ba-84e5-4a0aa1d7464a\") " pod="openshift-marketplace/redhat-operators-zfsd9"
Jan 12 13:21:51 crc kubenswrapper[4580]: I0112 13:21:51.934482 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70ecad68-8604-45ba-84e5-4a0aa1d7464a-catalog-content\") pod \"redhat-operators-zfsd9\" (UID: \"70ecad68-8604-45ba-84e5-4a0aa1d7464a\") " pod="openshift-marketplace/redhat-operators-zfsd9"
Jan 12 13:21:51 crc kubenswrapper[4580]: I0112 13:21:51.934522 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssp42\" (UniqueName: \"kubernetes.io/projected/70ecad68-8604-45ba-84e5-4a0aa1d7464a-kube-api-access-ssp42\") pod \"redhat-operators-zfsd9\" (UID: \"70ecad68-8604-45ba-84e5-4a0aa1d7464a\") " pod="openshift-marketplace/redhat-operators-zfsd9"
Jan 12 13:21:51 crc kubenswrapper[4580]: I0112 13:21:51.934567 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70ecad68-8604-45ba-84e5-4a0aa1d7464a-utilities\") pod \"redhat-operators-zfsd9\" (UID: \"70ecad68-8604-45ba-84e5-4a0aa1d7464a\") " pod="openshift-marketplace/redhat-operators-zfsd9"
Jan 12 13:21:51 crc kubenswrapper[4580]: I0112 13:21:51.934948 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70ecad68-8604-45ba-84e5-4a0aa1d7464a-catalog-content\") pod \"redhat-operators-zfsd9\" (UID: \"70ecad68-8604-45ba-84e5-4a0aa1d7464a\") " pod="openshift-marketplace/redhat-operators-zfsd9"
Jan 12 13:21:51 crc kubenswrapper[4580]: I0112 13:21:51.935004 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70ecad68-8604-45ba-84e5-4a0aa1d7464a-utilities\") pod \"redhat-operators-zfsd9\" (UID: \"70ecad68-8604-45ba-84e5-4a0aa1d7464a\") " pod="openshift-marketplace/redhat-operators-zfsd9"
Jan 12 13:21:51 crc kubenswrapper[4580]: I0112 13:21:51.950778 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssp42\" (UniqueName: \"kubernetes.io/projected/70ecad68-8604-45ba-84e5-4a0aa1d7464a-kube-api-access-ssp42\") pod \"redhat-operators-zfsd9\" (UID: \"70ecad68-8604-45ba-84e5-4a0aa1d7464a\") " pod="openshift-marketplace/redhat-operators-zfsd9"
Jan 12 13:21:52 crc kubenswrapper[4580]: I0112 13:21:52.072298 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zfsd9"
Jan 12 13:21:52 crc kubenswrapper[4580]: I0112 13:21:52.391987 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-bb57d5f45-mb7xb"
Jan 12 13:21:52 crc kubenswrapper[4580]: I0112 13:21:52.603371 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c66d9fb7c-tgbgc" event={"ID":"d56cc382-ea8e-4cea-829a-80335a2b71c9","Type":"ContainerStarted","Data":"0656837df21ab92ad5de5f185b3c2fc3e701b4c6e4b4389f8d26bf2a7187f2b3"}
Jan 12 13:21:52 crc kubenswrapper[4580]: I0112 13:21:52.603419 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c66d9fb7c-tgbgc" event={"ID":"d56cc382-ea8e-4cea-829a-80335a2b71c9","Type":"ContainerStarted","Data":"6b8b548f787251fd85ccc9f5e396e511e33c23020bb4b386b0a85cafea13f243"}
Jan 12 13:21:52 crc kubenswrapper[4580]: I0112 13:21:52.603430 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c66d9fb7c-tgbgc" event={"ID":"d56cc382-ea8e-4cea-829a-80335a2b71c9","Type":"ContainerStarted","Data":"e0bb137848327f74d502b263b53d3457897f3e9706170ba6c4c91024ec8beb3d"}
Jan 12 13:21:52 crc kubenswrapper[4580]: I0112 13:21:52.604729 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5c66d9fb7c-tgbgc"
Jan 12 13:21:52 crc kubenswrapper[4580]: I0112 13:21:52.630618 4580 generic.go:334] "Generic (PLEG): container finished" podID="11060a9d-34a1-4ac1-baa8-a478351504f3" containerID="69a6d777caff29dbd7e357ee6c3a4b8b1c36cd5b3f27d9cc02f332c43a8f2181" exitCode=0
Jan 12 13:21:52 crc kubenswrapper[4580]: I0112 13:21:52.630678 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cj96v" event={"ID":"11060a9d-34a1-4ac1-baa8-a478351504f3","Type":"ContainerDied","Data":"69a6d777caff29dbd7e357ee6c3a4b8b1c36cd5b3f27d9cc02f332c43a8f2181"}
Jan 12 13:21:52 crc kubenswrapper[4580]: I0112 13:21:52.639504 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5c66d9fb7c-tgbgc" podStartSLOduration=2.639491443 podStartE2EDuration="2.639491443s" podCreationTimestamp="2026-01-12 13:21:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:21:52.632073561 +0000 UTC m=+911.676292241" watchObservedRunningTime="2026-01-12 13:21:52.639491443 +0000 UTC m=+911.683710134"
Jan 12 13:21:52 crc kubenswrapper[4580]: I0112 13:21:52.642838 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 12 13:21:52 crc kubenswrapper[4580]: I0112 13:21:52.652519 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cc934c6e-8cf2-42f0-97bc-22537818cd51","Type":"ContainerStarted","Data":"0f596d2828d4a2a29bbe82b8268fc643ef226eb31379dfad3f23cc1afc8ee7c2"}
Jan 12 13:21:52 crc kubenswrapper[4580]: I0112 13:21:52.659491 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6868cd5fd5-ct7dn"
Jan 12 13:21:52 crc kubenswrapper[4580]: I0112 13:21:52.667613 4580 generic.go:334] "Generic (PLEG): container finished" podID="2a68b6dc-1793-49ee-b68a-ded144ce21d9" containerID="c0241a82a2a037d07ccda36079d708daa510e2cbe5494cf93b26ae8dda225dce" exitCode=0
Jan 12 13:21:52 crc kubenswrapper[4580]: I0112 13:21:52.667647 4580 generic.go:334] "Generic (PLEG): container finished" podID="2a68b6dc-1793-49ee-b68a-ded144ce21d9" containerID="18db10ea58cf42114de5c76e0a217d6a0cd0729c13b79688f86776a27f2c212c" exitCode=143
Jan 12 13:21:52 crc kubenswrapper[4580]: I0112 13:21:52.667701 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2a68b6dc-1793-49ee-b68a-ded144ce21d9","Type":"ContainerDied","Data":"c0241a82a2a037d07ccda36079d708daa510e2cbe5494cf93b26ae8dda225dce"}
Jan 12 13:21:52 crc kubenswrapper[4580]: I0112 13:21:52.667735 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2a68b6dc-1793-49ee-b68a-ded144ce21d9","Type":"ContainerDied","Data":"18db10ea58cf42114de5c76e0a217d6a0cd0729c13b79688f86776a27f2c212c"}
Jan 12 13:21:52 crc kubenswrapper[4580]: I0112 13:21:52.667750 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2a68b6dc-1793-49ee-b68a-ded144ce21d9","Type":"ContainerDied","Data":"693ff1c56c1740c2f6e7baa61c4fd806bd378524f7c1b0815c148857568f1f28"}
Jan 12 13:21:52 crc kubenswrapper[4580]: I0112 13:21:52.667769 4580 scope.go:117] "RemoveContainer" containerID="c0241a82a2a037d07ccda36079d708daa510e2cbe5494cf93b26ae8dda225dce"
Jan 12 13:21:52 crc kubenswrapper[4580]: I0112 13:21:52.667906 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 12 13:21:52 crc kubenswrapper[4580]: I0112 13:21:52.696826 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685444497c-q8tzg" event={"ID":"0ab93f8e-1504-47d9-af38-197cbcc54feb","Type":"ContainerStarted","Data":"5d14b56a9d104cd5211c67e006b8f05f173d06485ec4e2993d8927872d806396"}
Jan 12 13:21:52 crc kubenswrapper[4580]: I0112 13:21:52.696874 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-685444497c-q8tzg"
Jan 12 13:21:52 crc kubenswrapper[4580]: I0112 13:21:52.707829 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.707816261 podStartE2EDuration="4.707816261s" podCreationTimestamp="2026-01-12 13:21:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:21:52.698118611 +0000 UTC m=+911.742337301" watchObservedRunningTime="2026-01-12 13:21:52.707816261 +0000 UTC m=+911.752034950"
Jan 12 13:21:52 crc kubenswrapper[4580]: I0112 13:21:52.740126 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-685444497c-q8tzg" podStartSLOduration=5.740082457 podStartE2EDuration="5.740082457s" podCreationTimestamp="2026-01-12 13:21:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:21:52.738563171 +0000 UTC m=+911.782781860" watchObservedRunningTime="2026-01-12 13:21:52.740082457 +0000 UTC m=+911.784301147"
Jan 12 13:21:52 crc kubenswrapper[4580]: I0112 13:21:52.742418 4580 scope.go:117] "RemoveContainer" containerID="18db10ea58cf42114de5c76e0a217d6a0cd0729c13b79688f86776a27f2c212c"
Jan 12 13:21:52 crc kubenswrapper[4580]: I0112 13:21:52.763475 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a68b6dc-1793-49ee-b68a-ded144ce21d9-internal-tls-certs\") pod \"2a68b6dc-1793-49ee-b68a-ded144ce21d9\" (UID: \"2a68b6dc-1793-49ee-b68a-ded144ce21d9\") "
Jan 12 13:21:52 crc kubenswrapper[4580]: I0112 13:21:52.763598 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"2a68b6dc-1793-49ee-b68a-ded144ce21d9\" (UID: \"2a68b6dc-1793-49ee-b68a-ded144ce21d9\") "
Jan 12 13:21:52 crc kubenswrapper[4580]: I0112 13:21:52.763719 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jgmsn\" (UniqueName: \"kubernetes.io/projected/2a68b6dc-1793-49ee-b68a-ded144ce21d9-kube-api-access-jgmsn\") pod \"2a68b6dc-1793-49ee-b68a-ded144ce21d9\" (UID: \"2a68b6dc-1793-49ee-b68a-ded144ce21d9\") "
Jan 12 13:21:52 crc kubenswrapper[4580]: I0112 13:21:52.763747 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a68b6dc-1793-49ee-b68a-ded144ce21d9-logs\") pod \"2a68b6dc-1793-49ee-b68a-ded144ce21d9\" (UID: \"2a68b6dc-1793-49ee-b68a-ded144ce21d9\") "
Jan 12 13:21:52 crc kubenswrapper[4580]: I0112 13:21:52.763766 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a68b6dc-1793-49ee-b68a-ded144ce21d9-scripts\") pod \"2a68b6dc-1793-49ee-b68a-ded144ce21d9\" (UID: \"2a68b6dc-1793-49ee-b68a-ded144ce21d9\") "
Jan 12 13:21:52 crc kubenswrapper[4580]: I0112 13:21:52.763787 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a68b6dc-1793-49ee-b68a-ded144ce21d9-combined-ca-bundle\") pod \"2a68b6dc-1793-49ee-b68a-ded144ce21d9\" (UID: \"2a68b6dc-1793-49ee-b68a-ded144ce21d9\") "
Jan 12 13:21:52 crc kubenswrapper[4580]: I0112 13:21:52.763917 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a68b6dc-1793-49ee-b68a-ded144ce21d9-config-data\") pod \"2a68b6dc-1793-49ee-b68a-ded144ce21d9\" (UID: \"2a68b6dc-1793-49ee-b68a-ded144ce21d9\") "
Jan 12 13:21:52 crc kubenswrapper[4580]: I0112 13:21:52.763967 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2a68b6dc-1793-49ee-b68a-ded144ce21d9-httpd-run\") pod \"2a68b6dc-1793-49ee-b68a-ded144ce21d9\" (UID: \"2a68b6dc-1793-49ee-b68a-ded144ce21d9\") "
Jan 12 13:21:52 crc kubenswrapper[4580]: I0112 13:21:52.768292 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zfsd9"]
Jan 12 13:21:52 crc kubenswrapper[4580]: I0112 13:21:52.771838 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a68b6dc-1793-49ee-b68a-ded144ce21d9-logs" (OuterVolumeSpecName: "logs") pod "2a68b6dc-1793-49ee-b68a-ded144ce21d9" (UID: "2a68b6dc-1793-49ee-b68a-ded144ce21d9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 12 13:21:52 crc kubenswrapper[4580]: I0112 13:21:52.778429 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a68b6dc-1793-49ee-b68a-ded144ce21d9-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "2a68b6dc-1793-49ee-b68a-ded144ce21d9" (UID: "2a68b6dc-1793-49ee-b68a-ded144ce21d9"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 12 13:21:52 crc kubenswrapper[4580]: I0112 13:21:52.793258 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a68b6dc-1793-49ee-b68a-ded144ce21d9-scripts" (OuterVolumeSpecName: "scripts") pod "2a68b6dc-1793-49ee-b68a-ded144ce21d9" (UID: "2a68b6dc-1793-49ee-b68a-ded144ce21d9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:21:52 crc kubenswrapper[4580]: I0112 13:21:52.810876 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a68b6dc-1793-49ee-b68a-ded144ce21d9-kube-api-access-jgmsn" (OuterVolumeSpecName: "kube-api-access-jgmsn") pod "2a68b6dc-1793-49ee-b68a-ded144ce21d9" (UID: "2a68b6dc-1793-49ee-b68a-ded144ce21d9"). InnerVolumeSpecName "kube-api-access-jgmsn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 13:21:52 crc kubenswrapper[4580]: I0112 13:21:52.817361 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "2a68b6dc-1793-49ee-b68a-ded144ce21d9" (UID: "2a68b6dc-1793-49ee-b68a-ded144ce21d9"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 12 13:21:52 crc kubenswrapper[4580]: I0112 13:21:52.838234 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a68b6dc-1793-49ee-b68a-ded144ce21d9-config-data" (OuterVolumeSpecName: "config-data") pod "2a68b6dc-1793-49ee-b68a-ded144ce21d9" (UID: "2a68b6dc-1793-49ee-b68a-ded144ce21d9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:21:52 crc kubenswrapper[4580]: I0112 13:21:52.867601 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jgmsn\" (UniqueName: \"kubernetes.io/projected/2a68b6dc-1793-49ee-b68a-ded144ce21d9-kube-api-access-jgmsn\") on node \"crc\" DevicePath \"\""
Jan 12 13:21:52 crc kubenswrapper[4580]: I0112 13:21:52.867706 4580 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a68b6dc-1793-49ee-b68a-ded144ce21d9-logs\") on node \"crc\" DevicePath \"\""
Jan 12 13:21:52 crc kubenswrapper[4580]: I0112 13:21:52.867761 4580 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a68b6dc-1793-49ee-b68a-ded144ce21d9-scripts\") on node \"crc\" DevicePath \"\""
Jan 12 13:21:52 crc kubenswrapper[4580]: I0112 13:21:52.867810 4580 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a68b6dc-1793-49ee-b68a-ded144ce21d9-config-data\") on node \"crc\" DevicePath \"\""
Jan 12 13:21:52 crc kubenswrapper[4580]: I0112 13:21:52.867855 4580 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2a68b6dc-1793-49ee-b68a-ded144ce21d9-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 12 13:21:52 crc kubenswrapper[4580]: I0112 13:21:52.867908 4580 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" "
Jan 12 13:21:52 crc kubenswrapper[4580]: I0112 13:21:52.887769 4580 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc"
Jan 12 13:21:52 crc kubenswrapper[4580]: I0112 13:21:52.914244 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a68b6dc-1793-49ee-b68a-ded144ce21d9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2a68b6dc-1793-49ee-b68a-ded144ce21d9" (UID: "2a68b6dc-1793-49ee-b68a-ded144ce21d9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:21:52 crc kubenswrapper[4580]: I0112 13:21:52.935346 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a68b6dc-1793-49ee-b68a-ded144ce21d9-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "2a68b6dc-1793-49ee-b68a-ded144ce21d9" (UID: "2a68b6dc-1793-49ee-b68a-ded144ce21d9"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:21:52 crc kubenswrapper[4580]: I0112 13:21:52.970071 4580 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a68b6dc-1793-49ee-b68a-ded144ce21d9-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 12 13:21:52 crc kubenswrapper[4580]: I0112 13:21:52.970209 4580 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a68b6dc-1793-49ee-b68a-ded144ce21d9-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 12 13:21:52 crc kubenswrapper[4580]: I0112 13:21:52.970277 4580 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\""
Jan 12 13:21:52 crc kubenswrapper[4580]: I0112 13:21:52.973308 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jtmvw"
Jan 12 13:21:52 crc kubenswrapper[4580]: I0112 13:21:52.973343 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jtmvw"
Jan 12 13:21:52 crc kubenswrapper[4580]: I0112 13:21:52.994172 4580 scope.go:117] "RemoveContainer" containerID="c0241a82a2a037d07ccda36079d708daa510e2cbe5494cf93b26ae8dda225dce"
Jan 12 13:21:52 crc kubenswrapper[4580]: E0112 13:21:52.994796 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0241a82a2a037d07ccda36079d708daa510e2cbe5494cf93b26ae8dda225dce\": container with ID starting with c0241a82a2a037d07ccda36079d708daa510e2cbe5494cf93b26ae8dda225dce not found: ID does not exist" containerID="c0241a82a2a037d07ccda36079d708daa510e2cbe5494cf93b26ae8dda225dce"
Jan 12 13:21:52 crc kubenswrapper[4580]: I0112 13:21:52.994870 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0241a82a2a037d07ccda36079d708daa510e2cbe5494cf93b26ae8dda225dce"} err="failed to get container status \"c0241a82a2a037d07ccda36079d708daa510e2cbe5494cf93b26ae8dda225dce\": rpc error: code = NotFound desc = could not find container \"c0241a82a2a037d07ccda36079d708daa510e2cbe5494cf93b26ae8dda225dce\": container with ID starting with c0241a82a2a037d07ccda36079d708daa510e2cbe5494cf93b26ae8dda225dce not found: ID does not exist"
Jan 12 13:21:52 crc kubenswrapper[4580]: I0112 13:21:52.994903 4580 scope.go:117] "RemoveContainer" containerID="18db10ea58cf42114de5c76e0a217d6a0cd0729c13b79688f86776a27f2c212c"
Jan 12 13:21:52 crc kubenswrapper[4580]: E0112 13:21:52.998287 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18db10ea58cf42114de5c76e0a217d6a0cd0729c13b79688f86776a27f2c212c\": container with ID starting with 18db10ea58cf42114de5c76e0a217d6a0cd0729c13b79688f86776a27f2c212c not found: ID does not exist" containerID="18db10ea58cf42114de5c76e0a217d6a0cd0729c13b79688f86776a27f2c212c"
Jan 12 13:21:52 crc kubenswrapper[4580]: I0112 13:21:52.998325 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18db10ea58cf42114de5c76e0a217d6a0cd0729c13b79688f86776a27f2c212c"} err="failed to get container status \"18db10ea58cf42114de5c76e0a217d6a0cd0729c13b79688f86776a27f2c212c\": rpc error: code = NotFound desc = could not find container \"18db10ea58cf42114de5c76e0a217d6a0cd0729c13b79688f86776a27f2c212c\": container with ID starting with 18db10ea58cf42114de5c76e0a217d6a0cd0729c13b79688f86776a27f2c212c not found: ID does not exist"
Jan 12 13:21:52 crc kubenswrapper[4580]: I0112 13:21:52.998364 4580 scope.go:117] "RemoveContainer" containerID="c0241a82a2a037d07ccda36079d708daa510e2cbe5494cf93b26ae8dda225dce"
Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.001677 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0241a82a2a037d07ccda36079d708daa510e2cbe5494cf93b26ae8dda225dce"} err="failed to get container status \"c0241a82a2a037d07ccda36079d708daa510e2cbe5494cf93b26ae8dda225dce\": rpc error: code = NotFound desc = could not find container \"c0241a82a2a037d07ccda36079d708daa510e2cbe5494cf93b26ae8dda225dce\": container with ID starting with c0241a82a2a037d07ccda36079d708daa510e2cbe5494cf93b26ae8dda225dce not found: ID does not exist"
Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.001720 4580 scope.go:117] "RemoveContainer" containerID="18db10ea58cf42114de5c76e0a217d6a0cd0729c13b79688f86776a27f2c212c"
Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.003256 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18db10ea58cf42114de5c76e0a217d6a0cd0729c13b79688f86776a27f2c212c"} err="failed to get container status \"18db10ea58cf42114de5c76e0a217d6a0cd0729c13b79688f86776a27f2c212c\": rpc error: code = NotFound desc = could not find container \"18db10ea58cf42114de5c76e0a217d6a0cd0729c13b79688f86776a27f2c212c\": container with ID starting with 18db10ea58cf42114de5c76e0a217d6a0cd0729c13b79688f86776a27f2c212c not found: ID does not exist"
Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.041152 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.044525 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.081278 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 12 13:21:53 crc kubenswrapper[4580]: E0112 13:21:53.081733 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a68b6dc-1793-49ee-b68a-ded144ce21d9" containerName="glance-httpd"
Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.081753 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a68b6dc-1793-49ee-b68a-ded144ce21d9" containerName="glance-httpd"
Jan 12 13:21:53 crc kubenswrapper[4580]: E0112 13:21:53.081776 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a68b6dc-1793-49ee-b68a-ded144ce21d9" containerName="glance-log"
Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.081782 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a68b6dc-1793-49ee-b68a-ded144ce21d9" containerName="glance-log"
Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.081975 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a68b6dc-1793-49ee-b68a-ded144ce21d9" containerName="glance-log"
Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.082008 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a68b6dc-1793-49ee-b68a-ded144ce21d9" containerName="glance-httpd"
Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.083000 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.096528 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.096630 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.157168 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.175417 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52d5c384-ad20-413e-a8ec-183b114d9901-logs\") pod \"glance-default-internal-api-0\" (UID: \"52d5c384-ad20-413e-a8ec-183b114d9901\") " pod="openstack/glance-default-internal-api-0"
Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.175474 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"52d5c384-ad20-413e-a8ec-183b114d9901\") " pod="openstack/glance-default-internal-api-0"
Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.175514 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/52d5c384-ad20-413e-a8ec-183b114d9901-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"52d5c384-ad20-413e-a8ec-183b114d9901\") " pod="openstack/glance-default-internal-api-0"
Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.175613 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52d5c384-ad20-413e-a8ec-183b114d9901-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"52d5c384-ad20-413e-a8ec-183b114d9901\") " pod="openstack/glance-default-internal-api-0"
Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.175640 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52d5c384-ad20-413e-a8ec-183b114d9901-config-data\") pod \"glance-default-internal-api-0\" (UID: \"52d5c384-ad20-413e-a8ec-183b114d9901\") " pod="openstack/glance-default-internal-api-0"
Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.175755 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52d5c384-ad20-413e-a8ec-183b114d9901-scripts\") pod \"glance-default-internal-api-0\" (UID: \"52d5c384-ad20-413e-a8ec-183b114d9901\") " pod="openstack/glance-default-internal-api-0"
Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.175899 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l87kv\" (UniqueName: \"kubernetes.io/projected/52d5c384-ad20-413e-a8ec-183b114d9901-kube-api-access-l87kv\") pod \"glance-default-internal-api-0\" (UID: \"52d5c384-ad20-413e-a8ec-183b114d9901\") " pod="openstack/glance-default-internal-api-0"
Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.175994 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/52d5c384-ad20-413e-a8ec-183b114d9901-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"52d5c384-ad20-413e-a8ec-183b114d9901\") " pod="openstack/glance-default-internal-api-0"
Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.277440 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52d5c384-ad20-413e-a8ec-183b114d9901-scripts\") pod \"glance-default-internal-api-0\" (UID: \"52d5c384-ad20-413e-a8ec-183b114d9901\") " pod="openstack/glance-default-internal-api-0"
Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.277594 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l87kv\" (UniqueName: \"kubernetes.io/projected/52d5c384-ad20-413e-a8ec-183b114d9901-kube-api-access-l87kv\") pod \"glance-default-internal-api-0\" (UID: \"52d5c384-ad20-413e-a8ec-183b114d9901\") " pod="openstack/glance-default-internal-api-0"
Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.277643 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/52d5c384-ad20-413e-a8ec-183b114d9901-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"52d5c384-ad20-413e-a8ec-183b114d9901\") " pod="openstack/glance-default-internal-api-0"
Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.277677 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52d5c384-ad20-413e-a8ec-183b114d9901-logs\") pod \"glance-default-internal-api-0\" (UID: \"52d5c384-ad20-413e-a8ec-183b114d9901\") " pod="openstack/glance-default-internal-api-0"
Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.277700 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"52d5c384-ad20-413e-a8ec-183b114d9901\") " pod="openstack/glance-default-internal-api-0"
Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.277721 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/52d5c384-ad20-413e-a8ec-183b114d9901-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"52d5c384-ad20-413e-a8ec-183b114d9901\") " pod="openstack/glance-default-internal-api-0"
Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.277762 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52d5c384-ad20-413e-a8ec-183b114d9901-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"52d5c384-ad20-413e-a8ec-183b114d9901\") " pod="openstack/glance-default-internal-api-0"
Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.277781 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52d5c384-ad20-413e-a8ec-183b114d9901-config-data\") pod \"glance-default-internal-api-0\" (UID: \"52d5c384-ad20-413e-a8ec-183b114d9901\") " pod="openstack/glance-default-internal-api-0"
Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.279122 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52d5c384-ad20-413e-a8ec-183b114d9901-logs\") pod \"glance-default-internal-api-0\" (UID: \"52d5c384-ad20-413e-a8ec-183b114d9901\") " pod="openstack/glance-default-internal-api-0"
Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.279454 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/52d5c384-ad20-413e-a8ec-183b114d9901-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"52d5c384-ad20-413e-a8ec-183b114d9901\") " pod="openstack/glance-default-internal-api-0"
Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.279765 4580 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"52d5c384-ad20-413e-a8ec-183b114d9901\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0"
Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.285972 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52d5c384-ad20-413e-a8ec-183b114d9901-scripts\") pod \"glance-default-internal-api-0\" (UID: \"52d5c384-ad20-413e-a8ec-183b114d9901\") " pod="openstack/glance-default-internal-api-0"
Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.289316 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/52d5c384-ad20-413e-a8ec-183b114d9901-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"52d5c384-ad20-413e-a8ec-183b114d9901\") " pod="openstack/glance-default-internal-api-0"
Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.290042 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52d5c384-ad20-413e-a8ec-183b114d9901-config-data\") pod \"glance-default-internal-api-0\" (UID: \"52d5c384-ad20-413e-a8ec-183b114d9901\") " pod="openstack/glance-default-internal-api-0"
Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.293398 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52d5c384-ad20-413e-a8ec-183b114d9901-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"52d5c384-ad20-413e-a8ec-183b114d9901\") " pod="openstack/glance-default-internal-api-0"
Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.297810 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l87kv\" (UniqueName: \"kubernetes.io/projected/52d5c384-ad20-413e-a8ec-183b114d9901-kube-api-access-l87kv\") pod \"glance-default-internal-api-0\" (UID: \"52d5c384-ad20-413e-a8ec-183b114d9901\") " pod="openstack/glance-default-internal-api-0"
Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.318740 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-s9pd9"
Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.320303 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"52d5c384-ad20-413e-a8ec-183b114d9901\") " pod="openstack/glance-default-internal-api-0"
Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.321391 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a68b6dc-1793-49ee-b68a-ded144ce21d9" path="/var/lib/kubelet/pods/2a68b6dc-1793-49ee-b68a-ded144ce21d9/volumes"
Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.382785 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edd41e34-6733-4a77-b99b-3ab0895b124a-combined-ca-bundle\") pod \"edd41e34-6733-4a77-b99b-3ab0895b124a\" (UID: \"edd41e34-6733-4a77-b99b-3ab0895b124a\") "
Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.382838 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edd41e34-6733-4a77-b99b-3ab0895b124a-config-data\") pod \"edd41e34-6733-4a77-b99b-3ab0895b124a\" (UID: \"edd41e34-6733-4a77-b99b-3ab0895b124a\") "
Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.382926 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhbdm\" (UniqueName: \"kubernetes.io/projected/edd41e34-6733-4a77-b99b-3ab0895b124a-kube-api-access-xhbdm\") pod \"edd41e34-6733-4a77-b99b-3ab0895b124a\" (UID: \"edd41e34-6733-4a77-b99b-3ab0895b124a\") "
Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.382996 4580
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edd41e34-6733-4a77-b99b-3ab0895b124a-scripts\") pod \"edd41e34-6733-4a77-b99b-3ab0895b124a\" (UID: \"edd41e34-6733-4a77-b99b-3ab0895b124a\") " Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.383034 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edd41e34-6733-4a77-b99b-3ab0895b124a-logs\") pod \"edd41e34-6733-4a77-b99b-3ab0895b124a\" (UID: \"edd41e34-6733-4a77-b99b-3ab0895b124a\") " Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.386653 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/edd41e34-6733-4a77-b99b-3ab0895b124a-logs" (OuterVolumeSpecName: "logs") pod "edd41e34-6733-4a77-b99b-3ab0895b124a" (UID: "edd41e34-6733-4a77-b99b-3ab0895b124a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.397821 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edd41e34-6733-4a77-b99b-3ab0895b124a-scripts" (OuterVolumeSpecName: "scripts") pod "edd41e34-6733-4a77-b99b-3ab0895b124a" (UID: "edd41e34-6733-4a77-b99b-3ab0895b124a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.409120 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edd41e34-6733-4a77-b99b-3ab0895b124a-kube-api-access-xhbdm" (OuterVolumeSpecName: "kube-api-access-xhbdm") pod "edd41e34-6733-4a77-b99b-3ab0895b124a" (UID: "edd41e34-6733-4a77-b99b-3ab0895b124a"). InnerVolumeSpecName "kube-api-access-xhbdm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.429211 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edd41e34-6733-4a77-b99b-3ab0895b124a-config-data" (OuterVolumeSpecName: "config-data") pod "edd41e34-6733-4a77-b99b-3ab0895b124a" (UID: "edd41e34-6733-4a77-b99b-3ab0895b124a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.447466 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edd41e34-6733-4a77-b99b-3ab0895b124a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "edd41e34-6733-4a77-b99b-3ab0895b124a" (UID: "edd41e34-6733-4a77-b99b-3ab0895b124a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.461085 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.464726 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7ffb74c678-h5ddl"] Jan 12 13:21:53 crc kubenswrapper[4580]: E0112 13:21:53.465453 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edd41e34-6733-4a77-b99b-3ab0895b124a" containerName="placement-db-sync" Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.466377 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="edd41e34-6733-4a77-b99b-3ab0895b124a" containerName="placement-db-sync" Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.477279 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="edd41e34-6733-4a77-b99b-3ab0895b124a" containerName="placement-db-sync" Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.479649 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7ffb74c678-h5ddl" Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.481713 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.481835 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.491117 4580 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edd41e34-6733-4a77-b99b-3ab0895b124a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.491143 4580 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edd41e34-6733-4a77-b99b-3ab0895b124a-config-data\") on node \"crc\" DevicePath \"\"" Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.491153 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhbdm\" (UniqueName: \"kubernetes.io/projected/edd41e34-6733-4a77-b99b-3ab0895b124a-kube-api-access-xhbdm\") on node \"crc\" DevicePath \"\"" Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.491166 4580 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edd41e34-6733-4a77-b99b-3ab0895b124a-scripts\") on node \"crc\" DevicePath \"\"" Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.491178 4580 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edd41e34-6733-4a77-b99b-3ab0895b124a-logs\") on node \"crc\" DevicePath \"\"" Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.546782 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7ffb74c678-h5ddl"] Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.593692 4580 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89dcb711-9d18-46e9-9f17-280f0f4c0e1a-logs\") pod \"placement-7ffb74c678-h5ddl\" (UID: \"89dcb711-9d18-46e9-9f17-280f0f4c0e1a\") " pod="openstack/placement-7ffb74c678-h5ddl" Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.593746 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/89dcb711-9d18-46e9-9f17-280f0f4c0e1a-public-tls-certs\") pod \"placement-7ffb74c678-h5ddl\" (UID: \"89dcb711-9d18-46e9-9f17-280f0f4c0e1a\") " pod="openstack/placement-7ffb74c678-h5ddl" Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.593826 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89dcb711-9d18-46e9-9f17-280f0f4c0e1a-config-data\") pod \"placement-7ffb74c678-h5ddl\" (UID: \"89dcb711-9d18-46e9-9f17-280f0f4c0e1a\") " pod="openstack/placement-7ffb74c678-h5ddl" Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.593869 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvvsb\" (UniqueName: \"kubernetes.io/projected/89dcb711-9d18-46e9-9f17-280f0f4c0e1a-kube-api-access-jvvsb\") pod \"placement-7ffb74c678-h5ddl\" (UID: \"89dcb711-9d18-46e9-9f17-280f0f4c0e1a\") " pod="openstack/placement-7ffb74c678-h5ddl" Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.593930 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/89dcb711-9d18-46e9-9f17-280f0f4c0e1a-internal-tls-certs\") pod \"placement-7ffb74c678-h5ddl\" (UID: \"89dcb711-9d18-46e9-9f17-280f0f4c0e1a\") " pod="openstack/placement-7ffb74c678-h5ddl" Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.593964 4580 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89dcb711-9d18-46e9-9f17-280f0f4c0e1a-scripts\") pod \"placement-7ffb74c678-h5ddl\" (UID: \"89dcb711-9d18-46e9-9f17-280f0f4c0e1a\") " pod="openstack/placement-7ffb74c678-h5ddl" Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.593983 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89dcb711-9d18-46e9-9f17-280f0f4c0e1a-combined-ca-bundle\") pod \"placement-7ffb74c678-h5ddl\" (UID: \"89dcb711-9d18-46e9-9f17-280f0f4c0e1a\") " pod="openstack/placement-7ffb74c678-h5ddl" Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.695493 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/89dcb711-9d18-46e9-9f17-280f0f4c0e1a-internal-tls-certs\") pod \"placement-7ffb74c678-h5ddl\" (UID: \"89dcb711-9d18-46e9-9f17-280f0f4c0e1a\") " pod="openstack/placement-7ffb74c678-h5ddl" Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.695533 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89dcb711-9d18-46e9-9f17-280f0f4c0e1a-scripts\") pod \"placement-7ffb74c678-h5ddl\" (UID: \"89dcb711-9d18-46e9-9f17-280f0f4c0e1a\") " pod="openstack/placement-7ffb74c678-h5ddl" Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.695557 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89dcb711-9d18-46e9-9f17-280f0f4c0e1a-combined-ca-bundle\") pod \"placement-7ffb74c678-h5ddl\" (UID: \"89dcb711-9d18-46e9-9f17-280f0f4c0e1a\") " pod="openstack/placement-7ffb74c678-h5ddl" Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.695585 4580 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89dcb711-9d18-46e9-9f17-280f0f4c0e1a-logs\") pod \"placement-7ffb74c678-h5ddl\" (UID: \"89dcb711-9d18-46e9-9f17-280f0f4c0e1a\") " pod="openstack/placement-7ffb74c678-h5ddl" Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.695606 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/89dcb711-9d18-46e9-9f17-280f0f4c0e1a-public-tls-certs\") pod \"placement-7ffb74c678-h5ddl\" (UID: \"89dcb711-9d18-46e9-9f17-280f0f4c0e1a\") " pod="openstack/placement-7ffb74c678-h5ddl" Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.696261 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89dcb711-9d18-46e9-9f17-280f0f4c0e1a-config-data\") pod \"placement-7ffb74c678-h5ddl\" (UID: \"89dcb711-9d18-46e9-9f17-280f0f4c0e1a\") " pod="openstack/placement-7ffb74c678-h5ddl" Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.696303 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvvsb\" (UniqueName: \"kubernetes.io/projected/89dcb711-9d18-46e9-9f17-280f0f4c0e1a-kube-api-access-jvvsb\") pod \"placement-7ffb74c678-h5ddl\" (UID: \"89dcb711-9d18-46e9-9f17-280f0f4c0e1a\") " pod="openstack/placement-7ffb74c678-h5ddl" Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.696879 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89dcb711-9d18-46e9-9f17-280f0f4c0e1a-logs\") pod \"placement-7ffb74c678-h5ddl\" (UID: \"89dcb711-9d18-46e9-9f17-280f0f4c0e1a\") " pod="openstack/placement-7ffb74c678-h5ddl" Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.710791 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/89dcb711-9d18-46e9-9f17-280f0f4c0e1a-internal-tls-certs\") pod \"placement-7ffb74c678-h5ddl\" (UID: \"89dcb711-9d18-46e9-9f17-280f0f4c0e1a\") " pod="openstack/placement-7ffb74c678-h5ddl" Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.715511 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89dcb711-9d18-46e9-9f17-280f0f4c0e1a-combined-ca-bundle\") pod \"placement-7ffb74c678-h5ddl\" (UID: \"89dcb711-9d18-46e9-9f17-280f0f4c0e1a\") " pod="openstack/placement-7ffb74c678-h5ddl" Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.715878 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89dcb711-9d18-46e9-9f17-280f0f4c0e1a-config-data\") pod \"placement-7ffb74c678-h5ddl\" (UID: \"89dcb711-9d18-46e9-9f17-280f0f4c0e1a\") " pod="openstack/placement-7ffb74c678-h5ddl" Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.721376 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89dcb711-9d18-46e9-9f17-280f0f4c0e1a-scripts\") pod \"placement-7ffb74c678-h5ddl\" (UID: \"89dcb711-9d18-46e9-9f17-280f0f4c0e1a\") " pod="openstack/placement-7ffb74c678-h5ddl" Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.726269 4580 generic.go:334] "Generic (PLEG): container finished" podID="70ecad68-8604-45ba-84e5-4a0aa1d7464a" containerID="3acdbcfc506183ce7e9ef04411db283a84406437dd2448998e1046f4bdaa9803" exitCode=0 Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.727293 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zfsd9" event={"ID":"70ecad68-8604-45ba-84e5-4a0aa1d7464a","Type":"ContainerDied","Data":"3acdbcfc506183ce7e9ef04411db283a84406437dd2448998e1046f4bdaa9803"} Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.727428 4580 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-operators-zfsd9" event={"ID":"70ecad68-8604-45ba-84e5-4a0aa1d7464a","Type":"ContainerStarted","Data":"75257d67a7c6550c96fb50afd4366407695134d61e9eb3089dc23d617cd9a06d"} Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.728432 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/89dcb711-9d18-46e9-9f17-280f0f4c0e1a-public-tls-certs\") pod \"placement-7ffb74c678-h5ddl\" (UID: \"89dcb711-9d18-46e9-9f17-280f0f4c0e1a\") " pod="openstack/placement-7ffb74c678-h5ddl" Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.754577 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvvsb\" (UniqueName: \"kubernetes.io/projected/89dcb711-9d18-46e9-9f17-280f0f4c0e1a-kube-api-access-jvvsb\") pod \"placement-7ffb74c678-h5ddl\" (UID: \"89dcb711-9d18-46e9-9f17-280f0f4c0e1a\") " pod="openstack/placement-7ffb74c678-h5ddl" Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.768308 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-s9pd9" event={"ID":"edd41e34-6733-4a77-b99b-3ab0895b124a","Type":"ContainerDied","Data":"68f70eca28d9d0c3dd9e815553b9c89748d836bb3ed8a8c899c339dca792c27f"} Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.768342 4580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68f70eca28d9d0c3dd9e815553b9c89748d836bb3ed8a8c899c339dca792c27f" Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.768415 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-s9pd9" Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.795692 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cj96v" event={"ID":"11060a9d-34a1-4ac1-baa8-a478351504f3","Type":"ContainerStarted","Data":"0a012c3d91f574f77cf1e288e3636a328d9bc123f4e44d26392b8cacb8e4d7c6"} Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.820438 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cj96v" podStartSLOduration=19.880567855 podStartE2EDuration="22.820418798s" podCreationTimestamp="2026-01-12 13:21:31 +0000 UTC" firstStartedPulling="2026-01-12 13:21:50.345742068 +0000 UTC m=+909.389960758" lastFinishedPulling="2026-01-12 13:21:53.285593011 +0000 UTC m=+912.329811701" observedRunningTime="2026-01-12 13:21:53.818039804 +0000 UTC m=+912.862258493" watchObservedRunningTime="2026-01-12 13:21:53.820418798 +0000 UTC m=+912.864637487" Jan 12 13:21:53 crc kubenswrapper[4580]: I0112 13:21:53.899591 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7ffb74c678-h5ddl" Jan 12 13:21:54 crc kubenswrapper[4580]: I0112 13:21:54.054408 4580 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-jtmvw" podUID="9a79b6ea-48d6-4df6-9a7e-dbfe246edc74" containerName="registry-server" probeResult="failure" output=< Jan 12 13:21:54 crc kubenswrapper[4580]: timeout: failed to connect service ":50051" within 1s Jan 12 13:21:54 crc kubenswrapper[4580]: > Jan 12 13:21:54 crc kubenswrapper[4580]: I0112 13:21:54.201062 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 12 13:21:54 crc kubenswrapper[4580]: I0112 13:21:54.429120 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-74f4fd9547-lhpct" Jan 12 13:21:54 crc kubenswrapper[4580]: I0112 13:21:54.567968 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7ffb74c678-h5ddl"] Jan 12 13:21:54 crc kubenswrapper[4580]: I0112 13:21:54.825588 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"52d5c384-ad20-413e-a8ec-183b114d9901","Type":"ContainerStarted","Data":"39f8a7c780c9bde2261291778e117bd084b1ddd77d508397ee3991d9463883de"} Jan 12 13:21:54 crc kubenswrapper[4580]: I0112 13:21:54.837093 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7ffb74c678-h5ddl" event={"ID":"89dcb711-9d18-46e9-9f17-280f0f4c0e1a","Type":"ContainerStarted","Data":"e51d55e38cd5fb4c4e5875ba261d37dda7d9829c86f24752a5b4b8541be550f2"} Jan 12 13:21:55 crc kubenswrapper[4580]: I0112 13:21:55.864292 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zfsd9" event={"ID":"70ecad68-8604-45ba-84e5-4a0aa1d7464a","Type":"ContainerStarted","Data":"cc1781e95c85dc8b31d65e5ceb8a81a9e5546e7507206e0159c60e8031b528d9"} Jan 12 13:21:55 crc kubenswrapper[4580]: I0112 13:21:55.867411 
4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"52d5c384-ad20-413e-a8ec-183b114d9901","Type":"ContainerStarted","Data":"cff7f773004b433c088ab6ec51a18d40ad182002bce30bba829a98778a38dbc0"} Jan 12 13:21:55 crc kubenswrapper[4580]: I0112 13:21:55.884229 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7ffb74c678-h5ddl" event={"ID":"89dcb711-9d18-46e9-9f17-280f0f4c0e1a","Type":"ContainerStarted","Data":"89e17d28d6baad9f0551d3e9d05a633d6015f89e8177a9dd0fc65dc8a8044852"} Jan 12 13:21:55 crc kubenswrapper[4580]: I0112 13:21:55.884269 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7ffb74c678-h5ddl" event={"ID":"89dcb711-9d18-46e9-9f17-280f0f4c0e1a","Type":"ContainerStarted","Data":"0e6a6915a1271f9bcf7dbd0fd25dbc76a40f68098cdfc43d583843445aa2302e"} Jan 12 13:21:55 crc kubenswrapper[4580]: I0112 13:21:55.884688 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7ffb74c678-h5ddl" Jan 12 13:21:55 crc kubenswrapper[4580]: I0112 13:21:55.884788 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7ffb74c678-h5ddl" Jan 12 13:21:55 crc kubenswrapper[4580]: I0112 13:21:55.906423 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-7ffb74c678-h5ddl" podStartSLOduration=2.90640738 podStartE2EDuration="2.90640738s" podCreationTimestamp="2026-01-12 13:21:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:21:55.902311506 +0000 UTC m=+914.946530196" watchObservedRunningTime="2026-01-12 13:21:55.90640738 +0000 UTC m=+914.950626070" Jan 12 13:21:56 crc kubenswrapper[4580]: I0112 13:21:56.895069 4580 generic.go:334] "Generic (PLEG): container finished" podID="70ecad68-8604-45ba-84e5-4a0aa1d7464a" 
containerID="cc1781e95c85dc8b31d65e5ceb8a81a9e5546e7507206e0159c60e8031b528d9" exitCode=0 Jan 12 13:21:56 crc kubenswrapper[4580]: I0112 13:21:56.896534 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zfsd9" event={"ID":"70ecad68-8604-45ba-84e5-4a0aa1d7464a","Type":"ContainerDied","Data":"cc1781e95c85dc8b31d65e5ceb8a81a9e5546e7507206e0159c60e8031b528d9"} Jan 12 13:21:57 crc kubenswrapper[4580]: I0112 13:21:57.907279 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"52d5c384-ad20-413e-a8ec-183b114d9901","Type":"ContainerStarted","Data":"370cbcfa3cf6f08989ecad0ac709dd897c929c19ccc1caa2ca1c4adc558a05dd"} Jan 12 13:21:57 crc kubenswrapper[4580]: I0112 13:21:57.911405 4580 generic.go:334] "Generic (PLEG): container finished" podID="7d306f46-ea22-4b07-a18c-5134b125fa49" containerID="2acfe507773f35d2c2bfcd63687b7c20dec48bb7a0b779a36b7881a1fd8cd444" exitCode=0 Jan 12 13:21:57 crc kubenswrapper[4580]: I0112 13:21:57.911446 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8lnk9" event={"ID":"7d306f46-ea22-4b07-a18c-5134b125fa49","Type":"ContainerDied","Data":"2acfe507773f35d2c2bfcd63687b7c20dec48bb7a0b779a36b7881a1fd8cd444"} Jan 12 13:21:57 crc kubenswrapper[4580]: I0112 13:21:57.927054 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.927043134 podStartE2EDuration="4.927043134s" podCreationTimestamp="2026-01-12 13:21:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:21:57.926425932 +0000 UTC m=+916.970644623" watchObservedRunningTime="2026-01-12 13:21:57.927043134 +0000 UTC m=+916.971261824" Jan 12 13:21:58 crc kubenswrapper[4580]: I0112 13:21:58.002602 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/dnsmasq-dns-685444497c-q8tzg" Jan 12 13:21:58 crc kubenswrapper[4580]: I0112 13:21:58.059485 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f6f8cb849-s5ztm"] Jan 12 13:21:58 crc kubenswrapper[4580]: I0112 13:21:58.059843 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6f6f8cb849-s5ztm" podUID="cd9391a2-339e-4eed-84df-164e7eae3e0c" containerName="dnsmasq-dns" containerID="cri-o://84ee4eecce5f3fd953e6b9c0cdd378e9dcf259bd7fa86362302dbf0ac0c46779" gracePeriod=10 Jan 12 13:21:58 crc kubenswrapper[4580]: I0112 13:21:58.190682 4580 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6f6f8cb849-s5ztm" podUID="cd9391a2-339e-4eed-84df-164e7eae3e0c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.142:5353: connect: connection refused" Jan 12 13:21:58 crc kubenswrapper[4580]: I0112 13:21:58.606336 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 12 13:21:58 crc kubenswrapper[4580]: I0112 13:21:58.606488 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 12 13:21:58 crc kubenswrapper[4580]: I0112 13:21:58.631612 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 12 13:21:58 crc kubenswrapper[4580]: I0112 13:21:58.670001 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 12 13:21:58 crc kubenswrapper[4580]: I0112 13:21:58.930645 4580 generic.go:334] "Generic (PLEG): container finished" podID="cd9391a2-339e-4eed-84df-164e7eae3e0c" containerID="84ee4eecce5f3fd953e6b9c0cdd378e9dcf259bd7fa86362302dbf0ac0c46779" exitCode=0 Jan 12 13:21:58 crc kubenswrapper[4580]: I0112 13:21:58.930728 4580 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/dnsmasq-dns-6f6f8cb849-s5ztm" event={"ID":"cd9391a2-339e-4eed-84df-164e7eae3e0c","Type":"ContainerDied","Data":"84ee4eecce5f3fd953e6b9c0cdd378e9dcf259bd7fa86362302dbf0ac0c46779"} Jan 12 13:21:58 crc kubenswrapper[4580]: I0112 13:21:58.931462 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 12 13:21:58 crc kubenswrapper[4580]: I0112 13:21:58.931682 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 12 13:22:00 crc kubenswrapper[4580]: I0112 13:22:00.940117 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 12 13:22:00 crc kubenswrapper[4580]: I0112 13:22:00.941920 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 12 13:22:01 crc kubenswrapper[4580]: I0112 13:22:01.546037 4580 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-54b765ff94-66rkz" podUID="11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Jan 12 13:22:01 crc kubenswrapper[4580]: I0112 13:22:01.642419 4580 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-8699b457dd-z2fkt" podUID="92d059e4-ff2b-4ecc-ae14-6367d54e720f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Jan 12 13:22:01 crc kubenswrapper[4580]: I0112 13:22:01.688267 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cj96v" Jan 12 13:22:01 crc kubenswrapper[4580]: I0112 13:22:01.688304 4580 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/certified-operators-cj96v" Jan 12 13:22:01 crc kubenswrapper[4580]: I0112 13:22:01.761134 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cj96v" Jan 12 13:22:02 crc kubenswrapper[4580]: I0112 13:22:02.035523 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cj96v" Jan 12 13:22:02 crc kubenswrapper[4580]: I0112 13:22:02.087850 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cj96v"] Jan 12 13:22:02 crc kubenswrapper[4580]: I0112 13:22:02.803011 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-8lnk9" Jan 12 13:22:02 crc kubenswrapper[4580]: I0112 13:22:02.879119 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6f8cb849-s5ztm" Jan 12 13:22:02 crc kubenswrapper[4580]: I0112 13:22:02.924856 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7d306f46-ea22-4b07-a18c-5134b125fa49-credential-keys\") pod \"7d306f46-ea22-4b07-a18c-5134b125fa49\" (UID: \"7d306f46-ea22-4b07-a18c-5134b125fa49\") " Jan 12 13:22:02 crc kubenswrapper[4580]: I0112 13:22:02.925018 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd9391a2-339e-4eed-84df-164e7eae3e0c-dns-svc\") pod \"cd9391a2-339e-4eed-84df-164e7eae3e0c\" (UID: \"cd9391a2-339e-4eed-84df-164e7eae3e0c\") " Jan 12 13:22:02 crc kubenswrapper[4580]: I0112 13:22:02.925133 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd9391a2-339e-4eed-84df-164e7eae3e0c-ovsdbserver-sb\") pod 
\"cd9391a2-339e-4eed-84df-164e7eae3e0c\" (UID: \"cd9391a2-339e-4eed-84df-164e7eae3e0c\") " Jan 12 13:22:02 crc kubenswrapper[4580]: I0112 13:22:02.925221 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cd9391a2-339e-4eed-84df-164e7eae3e0c-dns-swift-storage-0\") pod \"cd9391a2-339e-4eed-84df-164e7eae3e0c\" (UID: \"cd9391a2-339e-4eed-84df-164e7eae3e0c\") " Jan 12 13:22:02 crc kubenswrapper[4580]: I0112 13:22:02.925298 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7p8fd\" (UniqueName: \"kubernetes.io/projected/cd9391a2-339e-4eed-84df-164e7eae3e0c-kube-api-access-7p8fd\") pod \"cd9391a2-339e-4eed-84df-164e7eae3e0c\" (UID: \"cd9391a2-339e-4eed-84df-164e7eae3e0c\") " Jan 12 13:22:02 crc kubenswrapper[4580]: I0112 13:22:02.925364 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd9391a2-339e-4eed-84df-164e7eae3e0c-config\") pod \"cd9391a2-339e-4eed-84df-164e7eae3e0c\" (UID: \"cd9391a2-339e-4eed-84df-164e7eae3e0c\") " Jan 12 13:22:02 crc kubenswrapper[4580]: I0112 13:22:02.925454 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7d306f46-ea22-4b07-a18c-5134b125fa49-fernet-keys\") pod \"7d306f46-ea22-4b07-a18c-5134b125fa49\" (UID: \"7d306f46-ea22-4b07-a18c-5134b125fa49\") " Jan 12 13:22:02 crc kubenswrapper[4580]: I0112 13:22:02.925556 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d306f46-ea22-4b07-a18c-5134b125fa49-combined-ca-bundle\") pod \"7d306f46-ea22-4b07-a18c-5134b125fa49\" (UID: \"7d306f46-ea22-4b07-a18c-5134b125fa49\") " Jan 12 13:22:02 crc kubenswrapper[4580]: I0112 13:22:02.925714 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d306f46-ea22-4b07-a18c-5134b125fa49-config-data\") pod \"7d306f46-ea22-4b07-a18c-5134b125fa49\" (UID: \"7d306f46-ea22-4b07-a18c-5134b125fa49\") " Jan 12 13:22:02 crc kubenswrapper[4580]: I0112 13:22:02.925788 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd9391a2-339e-4eed-84df-164e7eae3e0c-ovsdbserver-nb\") pod \"cd9391a2-339e-4eed-84df-164e7eae3e0c\" (UID: \"cd9391a2-339e-4eed-84df-164e7eae3e0c\") " Jan 12 13:22:02 crc kubenswrapper[4580]: I0112 13:22:02.925908 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d306f46-ea22-4b07-a18c-5134b125fa49-scripts\") pod \"7d306f46-ea22-4b07-a18c-5134b125fa49\" (UID: \"7d306f46-ea22-4b07-a18c-5134b125fa49\") " Jan 12 13:22:02 crc kubenswrapper[4580]: I0112 13:22:02.926039 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7bdq\" (UniqueName: \"kubernetes.io/projected/7d306f46-ea22-4b07-a18c-5134b125fa49-kube-api-access-v7bdq\") pod \"7d306f46-ea22-4b07-a18c-5134b125fa49\" (UID: \"7d306f46-ea22-4b07-a18c-5134b125fa49\") " Jan 12 13:22:02 crc kubenswrapper[4580]: I0112 13:22:02.931693 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d306f46-ea22-4b07-a18c-5134b125fa49-kube-api-access-v7bdq" (OuterVolumeSpecName: "kube-api-access-v7bdq") pod "7d306f46-ea22-4b07-a18c-5134b125fa49" (UID: "7d306f46-ea22-4b07-a18c-5134b125fa49"). InnerVolumeSpecName "kube-api-access-v7bdq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:22:02 crc kubenswrapper[4580]: I0112 13:22:02.935268 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d306f46-ea22-4b07-a18c-5134b125fa49-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "7d306f46-ea22-4b07-a18c-5134b125fa49" (UID: "7d306f46-ea22-4b07-a18c-5134b125fa49"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:22:02 crc kubenswrapper[4580]: I0112 13:22:02.939398 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d306f46-ea22-4b07-a18c-5134b125fa49-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "7d306f46-ea22-4b07-a18c-5134b125fa49" (UID: "7d306f46-ea22-4b07-a18c-5134b125fa49"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:22:02 crc kubenswrapper[4580]: I0112 13:22:02.962786 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd9391a2-339e-4eed-84df-164e7eae3e0c-kube-api-access-7p8fd" (OuterVolumeSpecName: "kube-api-access-7p8fd") pod "cd9391a2-339e-4eed-84df-164e7eae3e0c" (UID: "cd9391a2-339e-4eed-84df-164e7eae3e0c"). InnerVolumeSpecName "kube-api-access-7p8fd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:22:02 crc kubenswrapper[4580]: I0112 13:22:02.970016 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d306f46-ea22-4b07-a18c-5134b125fa49-scripts" (OuterVolumeSpecName: "scripts") pod "7d306f46-ea22-4b07-a18c-5134b125fa49" (UID: "7d306f46-ea22-4b07-a18c-5134b125fa49"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:22:02 crc kubenswrapper[4580]: I0112 13:22:02.988794 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d306f46-ea22-4b07-a18c-5134b125fa49-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7d306f46-ea22-4b07-a18c-5134b125fa49" (UID: "7d306f46-ea22-4b07-a18c-5134b125fa49"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:22:03 crc kubenswrapper[4580]: I0112 13:22:03.002175 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d306f46-ea22-4b07-a18c-5134b125fa49-config-data" (OuterVolumeSpecName: "config-data") pod "7d306f46-ea22-4b07-a18c-5134b125fa49" (UID: "7d306f46-ea22-4b07-a18c-5134b125fa49"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:22:03 crc kubenswrapper[4580]: I0112 13:22:03.002683 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd9391a2-339e-4eed-84df-164e7eae3e0c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "cd9391a2-339e-4eed-84df-164e7eae3e0c" (UID: "cd9391a2-339e-4eed-84df-164e7eae3e0c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:22:03 crc kubenswrapper[4580]: I0112 13:22:03.008612 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd9391a2-339e-4eed-84df-164e7eae3e0c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cd9391a2-339e-4eed-84df-164e7eae3e0c" (UID: "cd9391a2-339e-4eed-84df-164e7eae3e0c"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:22:03 crc kubenswrapper[4580]: I0112 13:22:03.009449 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zfsd9" event={"ID":"70ecad68-8604-45ba-84e5-4a0aa1d7464a","Type":"ContainerStarted","Data":"bad5d21f59d5cb1f71d1e9c9ee58ac0efa7bba0c049695d32566317c4f9140e3"} Jan 12 13:22:03 crc kubenswrapper[4580]: I0112 13:22:03.018516 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd9391a2-339e-4eed-84df-164e7eae3e0c-config" (OuterVolumeSpecName: "config") pod "cd9391a2-339e-4eed-84df-164e7eae3e0c" (UID: "cd9391a2-339e-4eed-84df-164e7eae3e0c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:22:03 crc kubenswrapper[4580]: I0112 13:22:03.018919 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6f8cb849-s5ztm" event={"ID":"cd9391a2-339e-4eed-84df-164e7eae3e0c","Type":"ContainerDied","Data":"fccaaecb811e33efc74d8df4a58794f8702b400645eaae6fd72ba961e01fe780"} Jan 12 13:22:03 crc kubenswrapper[4580]: I0112 13:22:03.019126 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f6f8cb849-s5ztm" Jan 12 13:22:03 crc kubenswrapper[4580]: I0112 13:22:03.019225 4580 scope.go:117] "RemoveContainer" containerID="84ee4eecce5f3fd953e6b9c0cdd378e9dcf259bd7fa86362302dbf0ac0c46779" Jan 12 13:22:03 crc kubenswrapper[4580]: I0112 13:22:03.028786 4580 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d306f46-ea22-4b07-a18c-5134b125fa49-config-data\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:03 crc kubenswrapper[4580]: I0112 13:22:03.028809 4580 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d306f46-ea22-4b07-a18c-5134b125fa49-scripts\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:03 crc kubenswrapper[4580]: I0112 13:22:03.028821 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7bdq\" (UniqueName: \"kubernetes.io/projected/7d306f46-ea22-4b07-a18c-5134b125fa49-kube-api-access-v7bdq\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:03 crc kubenswrapper[4580]: I0112 13:22:03.028831 4580 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7d306f46-ea22-4b07-a18c-5134b125fa49-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:03 crc kubenswrapper[4580]: I0112 13:22:03.028844 4580 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd9391a2-339e-4eed-84df-164e7eae3e0c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:03 crc kubenswrapper[4580]: I0112 13:22:03.028853 4580 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cd9391a2-339e-4eed-84df-164e7eae3e0c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:03 crc kubenswrapper[4580]: I0112 13:22:03.028862 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7p8fd\" 
(UniqueName: \"kubernetes.io/projected/cd9391a2-339e-4eed-84df-164e7eae3e0c-kube-api-access-7p8fd\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:03 crc kubenswrapper[4580]: I0112 13:22:03.028871 4580 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd9391a2-339e-4eed-84df-164e7eae3e0c-config\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:03 crc kubenswrapper[4580]: I0112 13:22:03.028879 4580 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7d306f46-ea22-4b07-a18c-5134b125fa49-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:03 crc kubenswrapper[4580]: I0112 13:22:03.028887 4580 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d306f46-ea22-4b07-a18c-5134b125fa49-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:03 crc kubenswrapper[4580]: I0112 13:22:03.036984 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zfsd9" podStartSLOduration=3.159342196 podStartE2EDuration="12.036945736s" podCreationTimestamp="2026-01-12 13:21:51 +0000 UTC" firstStartedPulling="2026-01-12 13:21:53.738241707 +0000 UTC m=+912.782460397" lastFinishedPulling="2026-01-12 13:22:02.615845247 +0000 UTC m=+921.660063937" observedRunningTime="2026-01-12 13:22:03.025018593 +0000 UTC m=+922.069237283" watchObservedRunningTime="2026-01-12 13:22:03.036945736 +0000 UTC m=+922.081164426" Jan 12 13:22:03 crc kubenswrapper[4580]: I0112 13:22:03.037974 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jtmvw" Jan 12 13:22:03 crc kubenswrapper[4580]: I0112 13:22:03.038422 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd9391a2-339e-4eed-84df-164e7eae3e0c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod 
"cd9391a2-339e-4eed-84df-164e7eae3e0c" (UID: "cd9391a2-339e-4eed-84df-164e7eae3e0c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:22:03 crc kubenswrapper[4580]: I0112 13:22:03.040699 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd9391a2-339e-4eed-84df-164e7eae3e0c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cd9391a2-339e-4eed-84df-164e7eae3e0c" (UID: "cd9391a2-339e-4eed-84df-164e7eae3e0c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:22:03 crc kubenswrapper[4580]: I0112 13:22:03.040865 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-8lnk9" Jan 12 13:22:03 crc kubenswrapper[4580]: I0112 13:22:03.040909 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8lnk9" event={"ID":"7d306f46-ea22-4b07-a18c-5134b125fa49","Type":"ContainerDied","Data":"506c545c273d5e05ea870f704fbe9f16c7ad2d66f8b25ee56cfd9fdb08f16a4b"} Jan 12 13:22:03 crc kubenswrapper[4580]: I0112 13:22:03.040970 4580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="506c545c273d5e05ea870f704fbe9f16c7ad2d66f8b25ee56cfd9fdb08f16a4b" Jan 12 13:22:03 crc kubenswrapper[4580]: I0112 13:22:03.049555 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"84c1ab4b-8921-4f4a-88dd-adf6e224d62c","Type":"ContainerStarted","Data":"b47105d020da2252db78b2cf8eb43b648d5bf62f668867b101610e700c64b9aa"} Jan 12 13:22:03 crc kubenswrapper[4580]: I0112 13:22:03.057424 4580 scope.go:117] "RemoveContainer" containerID="e334e4170a5a3971d5c5fba63a4bb0a5f8f99e5e8bdebee6a470cc548a571188" Jan 12 13:22:03 crc kubenswrapper[4580]: I0112 13:22:03.093537 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/community-operators-jtmvw" Jan 12 13:22:03 crc kubenswrapper[4580]: I0112 13:22:03.130520 4580 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd9391a2-339e-4eed-84df-164e7eae3e0c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:03 crc kubenswrapper[4580]: I0112 13:22:03.130550 4580 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd9391a2-339e-4eed-84df-164e7eae3e0c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:03 crc kubenswrapper[4580]: I0112 13:22:03.349377 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f6f8cb849-s5ztm"] Jan 12 13:22:03 crc kubenswrapper[4580]: I0112 13:22:03.356445 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6f6f8cb849-s5ztm"] Jan 12 13:22:03 crc kubenswrapper[4580]: I0112 13:22:03.462292 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 12 13:22:03 crc kubenswrapper[4580]: I0112 13:22:03.462408 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 12 13:22:03 crc kubenswrapper[4580]: I0112 13:22:03.494394 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 12 13:22:03 crc kubenswrapper[4580]: I0112 13:22:03.504403 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 12 13:22:03 crc kubenswrapper[4580]: I0112 13:22:03.911324 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-696c64b546-cw888"] Jan 12 13:22:03 crc kubenswrapper[4580]: E0112 13:22:03.911900 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d306f46-ea22-4b07-a18c-5134b125fa49" 
containerName="keystone-bootstrap" Jan 12 13:22:03 crc kubenswrapper[4580]: I0112 13:22:03.911928 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d306f46-ea22-4b07-a18c-5134b125fa49" containerName="keystone-bootstrap" Jan 12 13:22:03 crc kubenswrapper[4580]: E0112 13:22:03.911969 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd9391a2-339e-4eed-84df-164e7eae3e0c" containerName="init" Jan 12 13:22:03 crc kubenswrapper[4580]: I0112 13:22:03.911977 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd9391a2-339e-4eed-84df-164e7eae3e0c" containerName="init" Jan 12 13:22:03 crc kubenswrapper[4580]: E0112 13:22:03.912012 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd9391a2-339e-4eed-84df-164e7eae3e0c" containerName="dnsmasq-dns" Jan 12 13:22:03 crc kubenswrapper[4580]: I0112 13:22:03.912019 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd9391a2-339e-4eed-84df-164e7eae3e0c" containerName="dnsmasq-dns" Jan 12 13:22:03 crc kubenswrapper[4580]: I0112 13:22:03.912241 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d306f46-ea22-4b07-a18c-5134b125fa49" containerName="keystone-bootstrap" Jan 12 13:22:03 crc kubenswrapper[4580]: I0112 13:22:03.912272 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd9391a2-339e-4eed-84df-164e7eae3e0c" containerName="dnsmasq-dns" Jan 12 13:22:03 crc kubenswrapper[4580]: I0112 13:22:03.913119 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-696c64b546-cw888" Jan 12 13:22:03 crc kubenswrapper[4580]: I0112 13:22:03.917528 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 12 13:22:03 crc kubenswrapper[4580]: I0112 13:22:03.917618 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 12 13:22:03 crc kubenswrapper[4580]: I0112 13:22:03.917539 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 12 13:22:03 crc kubenswrapper[4580]: I0112 13:22:03.917863 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 12 13:22:03 crc kubenswrapper[4580]: I0112 13:22:03.918149 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-64jgc" Jan 12 13:22:03 crc kubenswrapper[4580]: I0112 13:22:03.918262 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 12 13:22:03 crc kubenswrapper[4580]: I0112 13:22:03.929506 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-696c64b546-cw888"] Jan 12 13:22:03 crc kubenswrapper[4580]: I0112 13:22:03.954555 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fad586a-c41d-44da-8144-75dcb27fe7e9-internal-tls-certs\") pod \"keystone-696c64b546-cw888\" (UID: \"9fad586a-c41d-44da-8144-75dcb27fe7e9\") " pod="openstack/keystone-696c64b546-cw888" Jan 12 13:22:03 crc kubenswrapper[4580]: I0112 13:22:03.954598 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fad586a-c41d-44da-8144-75dcb27fe7e9-public-tls-certs\") pod \"keystone-696c64b546-cw888\" (UID: \"9fad586a-c41d-44da-8144-75dcb27fe7e9\") " 
pod="openstack/keystone-696c64b546-cw888" Jan 12 13:22:03 crc kubenswrapper[4580]: I0112 13:22:03.954623 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fad586a-c41d-44da-8144-75dcb27fe7e9-config-data\") pod \"keystone-696c64b546-cw888\" (UID: \"9fad586a-c41d-44da-8144-75dcb27fe7e9\") " pod="openstack/keystone-696c64b546-cw888" Jan 12 13:22:03 crc kubenswrapper[4580]: I0112 13:22:03.954670 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w62cl\" (UniqueName: \"kubernetes.io/projected/9fad586a-c41d-44da-8144-75dcb27fe7e9-kube-api-access-w62cl\") pod \"keystone-696c64b546-cw888\" (UID: \"9fad586a-c41d-44da-8144-75dcb27fe7e9\") " pod="openstack/keystone-696c64b546-cw888" Jan 12 13:22:03 crc kubenswrapper[4580]: I0112 13:22:03.954693 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fad586a-c41d-44da-8144-75dcb27fe7e9-combined-ca-bundle\") pod \"keystone-696c64b546-cw888\" (UID: \"9fad586a-c41d-44da-8144-75dcb27fe7e9\") " pod="openstack/keystone-696c64b546-cw888" Jan 12 13:22:03 crc kubenswrapper[4580]: I0112 13:22:03.954725 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fad586a-c41d-44da-8144-75dcb27fe7e9-scripts\") pod \"keystone-696c64b546-cw888\" (UID: \"9fad586a-c41d-44da-8144-75dcb27fe7e9\") " pod="openstack/keystone-696c64b546-cw888" Jan 12 13:22:03 crc kubenswrapper[4580]: I0112 13:22:03.954763 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9fad586a-c41d-44da-8144-75dcb27fe7e9-credential-keys\") pod \"keystone-696c64b546-cw888\" (UID: \"9fad586a-c41d-44da-8144-75dcb27fe7e9\") 
" pod="openstack/keystone-696c64b546-cw888" Jan 12 13:22:03 crc kubenswrapper[4580]: I0112 13:22:03.954793 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9fad586a-c41d-44da-8144-75dcb27fe7e9-fernet-keys\") pod \"keystone-696c64b546-cw888\" (UID: \"9fad586a-c41d-44da-8144-75dcb27fe7e9\") " pod="openstack/keystone-696c64b546-cw888" Jan 12 13:22:04 crc kubenswrapper[4580]: I0112 13:22:04.061728 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w62cl\" (UniqueName: \"kubernetes.io/projected/9fad586a-c41d-44da-8144-75dcb27fe7e9-kube-api-access-w62cl\") pod \"keystone-696c64b546-cw888\" (UID: \"9fad586a-c41d-44da-8144-75dcb27fe7e9\") " pod="openstack/keystone-696c64b546-cw888" Jan 12 13:22:04 crc kubenswrapper[4580]: I0112 13:22:04.061788 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fad586a-c41d-44da-8144-75dcb27fe7e9-combined-ca-bundle\") pod \"keystone-696c64b546-cw888\" (UID: \"9fad586a-c41d-44da-8144-75dcb27fe7e9\") " pod="openstack/keystone-696c64b546-cw888" Jan 12 13:22:04 crc kubenswrapper[4580]: I0112 13:22:04.061845 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fad586a-c41d-44da-8144-75dcb27fe7e9-scripts\") pod \"keystone-696c64b546-cw888\" (UID: \"9fad586a-c41d-44da-8144-75dcb27fe7e9\") " pod="openstack/keystone-696c64b546-cw888" Jan 12 13:22:04 crc kubenswrapper[4580]: I0112 13:22:04.061905 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9fad586a-c41d-44da-8144-75dcb27fe7e9-credential-keys\") pod \"keystone-696c64b546-cw888\" (UID: \"9fad586a-c41d-44da-8144-75dcb27fe7e9\") " pod="openstack/keystone-696c64b546-cw888" Jan 12 13:22:04 crc 
kubenswrapper[4580]: I0112 13:22:04.061961 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9fad586a-c41d-44da-8144-75dcb27fe7e9-fernet-keys\") pod \"keystone-696c64b546-cw888\" (UID: \"9fad586a-c41d-44da-8144-75dcb27fe7e9\") " pod="openstack/keystone-696c64b546-cw888" Jan 12 13:22:04 crc kubenswrapper[4580]: I0112 13:22:04.062038 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fad586a-c41d-44da-8144-75dcb27fe7e9-internal-tls-certs\") pod \"keystone-696c64b546-cw888\" (UID: \"9fad586a-c41d-44da-8144-75dcb27fe7e9\") " pod="openstack/keystone-696c64b546-cw888" Jan 12 13:22:04 crc kubenswrapper[4580]: I0112 13:22:04.062067 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fad586a-c41d-44da-8144-75dcb27fe7e9-public-tls-certs\") pod \"keystone-696c64b546-cw888\" (UID: \"9fad586a-c41d-44da-8144-75dcb27fe7e9\") " pod="openstack/keystone-696c64b546-cw888" Jan 12 13:22:04 crc kubenswrapper[4580]: I0112 13:22:04.062095 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fad586a-c41d-44da-8144-75dcb27fe7e9-config-data\") pod \"keystone-696c64b546-cw888\" (UID: \"9fad586a-c41d-44da-8144-75dcb27fe7e9\") " pod="openstack/keystone-696c64b546-cw888" Jan 12 13:22:04 crc kubenswrapper[4580]: I0112 13:22:04.072578 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fad586a-c41d-44da-8144-75dcb27fe7e9-combined-ca-bundle\") pod \"keystone-696c64b546-cw888\" (UID: \"9fad586a-c41d-44da-8144-75dcb27fe7e9\") " pod="openstack/keystone-696c64b546-cw888" Jan 12 13:22:04 crc kubenswrapper[4580]: I0112 13:22:04.075542 4580 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fad586a-c41d-44da-8144-75dcb27fe7e9-config-data\") pod \"keystone-696c64b546-cw888\" (UID: \"9fad586a-c41d-44da-8144-75dcb27fe7e9\") " pod="openstack/keystone-696c64b546-cw888" Jan 12 13:22:04 crc kubenswrapper[4580]: I0112 13:22:04.077698 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fad586a-c41d-44da-8144-75dcb27fe7e9-public-tls-certs\") pod \"keystone-696c64b546-cw888\" (UID: \"9fad586a-c41d-44da-8144-75dcb27fe7e9\") " pod="openstack/keystone-696c64b546-cw888" Jan 12 13:22:04 crc kubenswrapper[4580]: I0112 13:22:04.077737 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9fad586a-c41d-44da-8144-75dcb27fe7e9-credential-keys\") pod \"keystone-696c64b546-cw888\" (UID: \"9fad586a-c41d-44da-8144-75dcb27fe7e9\") " pod="openstack/keystone-696c64b546-cw888" Jan 12 13:22:04 crc kubenswrapper[4580]: I0112 13:22:04.077914 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-blrls" event={"ID":"3cbf5c7d-9220-43a8-9015-1c52d0c3855f","Type":"ContainerStarted","Data":"698cdb1aaa8b6c445236171e6a6b8117e4da8fae97df9c89d16885470d435ad6"} Jan 12 13:22:04 crc kubenswrapper[4580]: I0112 13:22:04.078665 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fad586a-c41d-44da-8144-75dcb27fe7e9-scripts\") pod \"keystone-696c64b546-cw888\" (UID: \"9fad586a-c41d-44da-8144-75dcb27fe7e9\") " pod="openstack/keystone-696c64b546-cw888" Jan 12 13:22:04 crc kubenswrapper[4580]: I0112 13:22:04.080597 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fad586a-c41d-44da-8144-75dcb27fe7e9-internal-tls-certs\") pod \"keystone-696c64b546-cw888\" (UID: 
\"9fad586a-c41d-44da-8144-75dcb27fe7e9\") " pod="openstack/keystone-696c64b546-cw888" Jan 12 13:22:04 crc kubenswrapper[4580]: I0112 13:22:04.089825 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w62cl\" (UniqueName: \"kubernetes.io/projected/9fad586a-c41d-44da-8144-75dcb27fe7e9-kube-api-access-w62cl\") pod \"keystone-696c64b546-cw888\" (UID: \"9fad586a-c41d-44da-8144-75dcb27fe7e9\") " pod="openstack/keystone-696c64b546-cw888" Jan 12 13:22:04 crc kubenswrapper[4580]: I0112 13:22:04.090138 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cj96v" podUID="11060a9d-34a1-4ac1-baa8-a478351504f3" containerName="registry-server" containerID="cri-o://0a012c3d91f574f77cf1e288e3636a328d9bc123f4e44d26392b8cacb8e4d7c6" gracePeriod=2 Jan 12 13:22:04 crc kubenswrapper[4580]: I0112 13:22:04.090435 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-mmlfs" event={"ID":"702612c1-966a-4293-b0dc-05901a325794","Type":"ContainerStarted","Data":"f22816032ca3f146f987bb95a8cf9b28010210c2f7936eb1ca2f8f7a56a04d49"} Jan 12 13:22:04 crc kubenswrapper[4580]: I0112 13:22:04.092575 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 12 13:22:04 crc kubenswrapper[4580]: I0112 13:22:04.092759 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 12 13:22:04 crc kubenswrapper[4580]: I0112 13:22:04.100192 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9fad586a-c41d-44da-8144-75dcb27fe7e9-fernet-keys\") pod \"keystone-696c64b546-cw888\" (UID: \"9fad586a-c41d-44da-8144-75dcb27fe7e9\") " pod="openstack/keystone-696c64b546-cw888" Jan 12 13:22:04 crc kubenswrapper[4580]: I0112 13:22:04.105118 4580 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/barbican-db-sync-blrls" podStartSLOduration=5.344484403 podStartE2EDuration="42.105090015s" podCreationTimestamp="2026-01-12 13:21:22 +0000 UTC" firstStartedPulling="2026-01-12 13:21:26.298647851 +0000 UTC m=+885.342866541" lastFinishedPulling="2026-01-12 13:22:03.059253463 +0000 UTC m=+922.103472153" observedRunningTime="2026-01-12 13:22:04.098555044 +0000 UTC m=+923.142773734" watchObservedRunningTime="2026-01-12 13:22:04.105090015 +0000 UTC m=+923.149308705" Jan 12 13:22:04 crc kubenswrapper[4580]: I0112 13:22:04.121410 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-mmlfs" podStartSLOduration=5.803966602 podStartE2EDuration="42.121386446s" podCreationTimestamp="2026-01-12 13:21:22 +0000 UTC" firstStartedPulling="2026-01-12 13:21:26.296667967 +0000 UTC m=+885.340886658" lastFinishedPulling="2026-01-12 13:22:02.614087812 +0000 UTC m=+921.658306502" observedRunningTime="2026-01-12 13:22:04.116854061 +0000 UTC m=+923.161072752" watchObservedRunningTime="2026-01-12 13:22:04.121386446 +0000 UTC m=+923.165605136" Jan 12 13:22:04 crc kubenswrapper[4580]: I0112 13:22:04.248300 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-696c64b546-cw888" Jan 12 13:22:04 crc kubenswrapper[4580]: I0112 13:22:04.686604 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cj96v" Jan 12 13:22:04 crc kubenswrapper[4580]: I0112 13:22:04.782762 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11060a9d-34a1-4ac1-baa8-a478351504f3-utilities\") pod \"11060a9d-34a1-4ac1-baa8-a478351504f3\" (UID: \"11060a9d-34a1-4ac1-baa8-a478351504f3\") " Jan 12 13:22:04 crc kubenswrapper[4580]: I0112 13:22:04.782970 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dl5p7\" (UniqueName: \"kubernetes.io/projected/11060a9d-34a1-4ac1-baa8-a478351504f3-kube-api-access-dl5p7\") pod \"11060a9d-34a1-4ac1-baa8-a478351504f3\" (UID: \"11060a9d-34a1-4ac1-baa8-a478351504f3\") " Jan 12 13:22:04 crc kubenswrapper[4580]: I0112 13:22:04.783223 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11060a9d-34a1-4ac1-baa8-a478351504f3-catalog-content\") pod \"11060a9d-34a1-4ac1-baa8-a478351504f3\" (UID: \"11060a9d-34a1-4ac1-baa8-a478351504f3\") " Jan 12 13:22:04 crc kubenswrapper[4580]: I0112 13:22:04.784287 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11060a9d-34a1-4ac1-baa8-a478351504f3-utilities" (OuterVolumeSpecName: "utilities") pod "11060a9d-34a1-4ac1-baa8-a478351504f3" (UID: "11060a9d-34a1-4ac1-baa8-a478351504f3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 12 13:22:04 crc kubenswrapper[4580]: I0112 13:22:04.789296 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11060a9d-34a1-4ac1-baa8-a478351504f3-kube-api-access-dl5p7" (OuterVolumeSpecName: "kube-api-access-dl5p7") pod "11060a9d-34a1-4ac1-baa8-a478351504f3" (UID: "11060a9d-34a1-4ac1-baa8-a478351504f3"). InnerVolumeSpecName "kube-api-access-dl5p7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:22:04 crc kubenswrapper[4580]: I0112 13:22:04.837383 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11060a9d-34a1-4ac1-baa8-a478351504f3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "11060a9d-34a1-4ac1-baa8-a478351504f3" (UID: "11060a9d-34a1-4ac1-baa8-a478351504f3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 12 13:22:04 crc kubenswrapper[4580]: I0112 13:22:04.872567 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-696c64b546-cw888"] Jan 12 13:22:04 crc kubenswrapper[4580]: I0112 13:22:04.885251 4580 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11060a9d-34a1-4ac1-baa8-a478351504f3-utilities\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:04 crc kubenswrapper[4580]: I0112 13:22:04.885276 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dl5p7\" (UniqueName: \"kubernetes.io/projected/11060a9d-34a1-4ac1-baa8-a478351504f3-kube-api-access-dl5p7\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:04 crc kubenswrapper[4580]: I0112 13:22:04.885287 4580 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11060a9d-34a1-4ac1-baa8-a478351504f3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:05 crc kubenswrapper[4580]: I0112 13:22:05.118193 4580 generic.go:334] "Generic (PLEG): container finished" podID="11060a9d-34a1-4ac1-baa8-a478351504f3" containerID="0a012c3d91f574f77cf1e288e3636a328d9bc123f4e44d26392b8cacb8e4d7c6" exitCode=0 Jan 12 13:22:05 crc kubenswrapper[4580]: I0112 13:22:05.118255 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cj96v" 
event={"ID":"11060a9d-34a1-4ac1-baa8-a478351504f3","Type":"ContainerDied","Data":"0a012c3d91f574f77cf1e288e3636a328d9bc123f4e44d26392b8cacb8e4d7c6"} Jan 12 13:22:05 crc kubenswrapper[4580]: I0112 13:22:05.118285 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cj96v" event={"ID":"11060a9d-34a1-4ac1-baa8-a478351504f3","Type":"ContainerDied","Data":"d1e56f116e8efc9b7ddc72285a618f56bcd9f2b98d52ce76fcb37570d6f6f4bf"} Jan 12 13:22:05 crc kubenswrapper[4580]: I0112 13:22:05.118304 4580 scope.go:117] "RemoveContainer" containerID="0a012c3d91f574f77cf1e288e3636a328d9bc123f4e44d26392b8cacb8e4d7c6" Jan 12 13:22:05 crc kubenswrapper[4580]: I0112 13:22:05.118419 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cj96v" Jan 12 13:22:05 crc kubenswrapper[4580]: I0112 13:22:05.132507 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-696c64b546-cw888" event={"ID":"9fad586a-c41d-44da-8144-75dcb27fe7e9","Type":"ContainerStarted","Data":"3b6803e24c9c565bfdc96fdd178caffa85a00b85432b703176760b2d703d51bd"} Jan 12 13:22:05 crc kubenswrapper[4580]: I0112 13:22:05.150005 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cj96v"] Jan 12 13:22:05 crc kubenswrapper[4580]: I0112 13:22:05.152826 4580 scope.go:117] "RemoveContainer" containerID="69a6d777caff29dbd7e357ee6c3a4b8b1c36cd5b3f27d9cc02f332c43a8f2181" Jan 12 13:22:05 crc kubenswrapper[4580]: I0112 13:22:05.164792 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cj96v"] Jan 12 13:22:05 crc kubenswrapper[4580]: I0112 13:22:05.214458 4580 scope.go:117] "RemoveContainer" containerID="4823f599e31769370c48518b573ec11a8a0ef8f49bd0bcbc467079676a9e76c9" Jan 12 13:22:05 crc kubenswrapper[4580]: I0112 13:22:05.291560 4580 scope.go:117] "RemoveContainer" 
containerID="0a012c3d91f574f77cf1e288e3636a328d9bc123f4e44d26392b8cacb8e4d7c6" Jan 12 13:22:05 crc kubenswrapper[4580]: E0112 13:22:05.293412 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a012c3d91f574f77cf1e288e3636a328d9bc123f4e44d26392b8cacb8e4d7c6\": container with ID starting with 0a012c3d91f574f77cf1e288e3636a328d9bc123f4e44d26392b8cacb8e4d7c6 not found: ID does not exist" containerID="0a012c3d91f574f77cf1e288e3636a328d9bc123f4e44d26392b8cacb8e4d7c6" Jan 12 13:22:05 crc kubenswrapper[4580]: I0112 13:22:05.293478 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a012c3d91f574f77cf1e288e3636a328d9bc123f4e44d26392b8cacb8e4d7c6"} err="failed to get container status \"0a012c3d91f574f77cf1e288e3636a328d9bc123f4e44d26392b8cacb8e4d7c6\": rpc error: code = NotFound desc = could not find container \"0a012c3d91f574f77cf1e288e3636a328d9bc123f4e44d26392b8cacb8e4d7c6\": container with ID starting with 0a012c3d91f574f77cf1e288e3636a328d9bc123f4e44d26392b8cacb8e4d7c6 not found: ID does not exist" Jan 12 13:22:05 crc kubenswrapper[4580]: I0112 13:22:05.293515 4580 scope.go:117] "RemoveContainer" containerID="69a6d777caff29dbd7e357ee6c3a4b8b1c36cd5b3f27d9cc02f332c43a8f2181" Jan 12 13:22:05 crc kubenswrapper[4580]: E0112 13:22:05.296078 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69a6d777caff29dbd7e357ee6c3a4b8b1c36cd5b3f27d9cc02f332c43a8f2181\": container with ID starting with 69a6d777caff29dbd7e357ee6c3a4b8b1c36cd5b3f27d9cc02f332c43a8f2181 not found: ID does not exist" containerID="69a6d777caff29dbd7e357ee6c3a4b8b1c36cd5b3f27d9cc02f332c43a8f2181" Jan 12 13:22:05 crc kubenswrapper[4580]: I0112 13:22:05.296173 4580 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"69a6d777caff29dbd7e357ee6c3a4b8b1c36cd5b3f27d9cc02f332c43a8f2181"} err="failed to get container status \"69a6d777caff29dbd7e357ee6c3a4b8b1c36cd5b3f27d9cc02f332c43a8f2181\": rpc error: code = NotFound desc = could not find container \"69a6d777caff29dbd7e357ee6c3a4b8b1c36cd5b3f27d9cc02f332c43a8f2181\": container with ID starting with 69a6d777caff29dbd7e357ee6c3a4b8b1c36cd5b3f27d9cc02f332c43a8f2181 not found: ID does not exist" Jan 12 13:22:05 crc kubenswrapper[4580]: I0112 13:22:05.296218 4580 scope.go:117] "RemoveContainer" containerID="4823f599e31769370c48518b573ec11a8a0ef8f49bd0bcbc467079676a9e76c9" Jan 12 13:22:05 crc kubenswrapper[4580]: E0112 13:22:05.297503 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4823f599e31769370c48518b573ec11a8a0ef8f49bd0bcbc467079676a9e76c9\": container with ID starting with 4823f599e31769370c48518b573ec11a8a0ef8f49bd0bcbc467079676a9e76c9 not found: ID does not exist" containerID="4823f599e31769370c48518b573ec11a8a0ef8f49bd0bcbc467079676a9e76c9" Jan 12 13:22:05 crc kubenswrapper[4580]: I0112 13:22:05.297556 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4823f599e31769370c48518b573ec11a8a0ef8f49bd0bcbc467079676a9e76c9"} err="failed to get container status \"4823f599e31769370c48518b573ec11a8a0ef8f49bd0bcbc467079676a9e76c9\": rpc error: code = NotFound desc = could not find container \"4823f599e31769370c48518b573ec11a8a0ef8f49bd0bcbc467079676a9e76c9\": container with ID starting with 4823f599e31769370c48518b573ec11a8a0ef8f49bd0bcbc467079676a9e76c9 not found: ID does not exist" Jan 12 13:22:05 crc kubenswrapper[4580]: I0112 13:22:05.297694 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11060a9d-34a1-4ac1-baa8-a478351504f3" path="/var/lib/kubelet/pods/11060a9d-34a1-4ac1-baa8-a478351504f3/volumes" Jan 12 13:22:05 crc kubenswrapper[4580]: I0112 
13:22:05.300090 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd9391a2-339e-4eed-84df-164e7eae3e0c" path="/var/lib/kubelet/pods/cd9391a2-339e-4eed-84df-164e7eae3e0c/volumes" Jan 12 13:22:05 crc kubenswrapper[4580]: I0112 13:22:05.631949 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jtmvw"] Jan 12 13:22:05 crc kubenswrapper[4580]: I0112 13:22:05.632309 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jtmvw" podUID="9a79b6ea-48d6-4df6-9a7e-dbfe246edc74" containerName="registry-server" containerID="cri-o://84b2273d9e2a668302d2b30f68341de15e75b7c74fba0cc4eb8e482c80b94806" gracePeriod=2 Jan 12 13:22:06 crc kubenswrapper[4580]: I0112 13:22:06.148734 4580 generic.go:334] "Generic (PLEG): container finished" podID="9a79b6ea-48d6-4df6-9a7e-dbfe246edc74" containerID="84b2273d9e2a668302d2b30f68341de15e75b7c74fba0cc4eb8e482c80b94806" exitCode=0 Jan 12 13:22:06 crc kubenswrapper[4580]: I0112 13:22:06.148935 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jtmvw" event={"ID":"9a79b6ea-48d6-4df6-9a7e-dbfe246edc74","Type":"ContainerDied","Data":"84b2273d9e2a668302d2b30f68341de15e75b7c74fba0cc4eb8e482c80b94806"} Jan 12 13:22:06 crc kubenswrapper[4580]: I0112 13:22:06.149043 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jtmvw" event={"ID":"9a79b6ea-48d6-4df6-9a7e-dbfe246edc74","Type":"ContainerDied","Data":"a781dfbe2243eeb3d30a8ca95dcca0f210af90035333785c8d701096ee6ed374"} Jan 12 13:22:06 crc kubenswrapper[4580]: I0112 13:22:06.149060 4580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a781dfbe2243eeb3d30a8ca95dcca0f210af90035333785c8d701096ee6ed374" Jan 12 13:22:06 crc kubenswrapper[4580]: I0112 13:22:06.152660 4580 generic.go:334] "Generic (PLEG): container finished" 
podID="3cbf5c7d-9220-43a8-9015-1c52d0c3855f" containerID="698cdb1aaa8b6c445236171e6a6b8117e4da8fae97df9c89d16885470d435ad6" exitCode=0 Jan 12 13:22:06 crc kubenswrapper[4580]: I0112 13:22:06.152718 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-blrls" event={"ID":"3cbf5c7d-9220-43a8-9015-1c52d0c3855f","Type":"ContainerDied","Data":"698cdb1aaa8b6c445236171e6a6b8117e4da8fae97df9c89d16885470d435ad6"} Jan 12 13:22:06 crc kubenswrapper[4580]: I0112 13:22:06.154690 4580 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 12 13:22:06 crc kubenswrapper[4580]: I0112 13:22:06.154709 4580 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 12 13:22:06 crc kubenswrapper[4580]: I0112 13:22:06.154726 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-696c64b546-cw888" event={"ID":"9fad586a-c41d-44da-8144-75dcb27fe7e9","Type":"ContainerStarted","Data":"c155ea1e703a7dc31f8dcb3572737005bb61824a6eae10ae508cd958d30ceb71"} Jan 12 13:22:06 crc kubenswrapper[4580]: I0112 13:22:06.155849 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-696c64b546-cw888" Jan 12 13:22:06 crc kubenswrapper[4580]: I0112 13:22:06.187217 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jtmvw" Jan 12 13:22:06 crc kubenswrapper[4580]: I0112 13:22:06.192993 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-696c64b546-cw888" podStartSLOduration=3.192976256 podStartE2EDuration="3.192976256s" podCreationTimestamp="2026-01-12 13:22:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:22:06.19035728 +0000 UTC m=+925.234575969" watchObservedRunningTime="2026-01-12 13:22:06.192976256 +0000 UTC m=+925.237194946" Jan 12 13:22:06 crc kubenswrapper[4580]: I0112 13:22:06.216254 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jj2tz\" (UniqueName: \"kubernetes.io/projected/9a79b6ea-48d6-4df6-9a7e-dbfe246edc74-kube-api-access-jj2tz\") pod \"9a79b6ea-48d6-4df6-9a7e-dbfe246edc74\" (UID: \"9a79b6ea-48d6-4df6-9a7e-dbfe246edc74\") " Jan 12 13:22:06 crc kubenswrapper[4580]: I0112 13:22:06.216497 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a79b6ea-48d6-4df6-9a7e-dbfe246edc74-catalog-content\") pod \"9a79b6ea-48d6-4df6-9a7e-dbfe246edc74\" (UID: \"9a79b6ea-48d6-4df6-9a7e-dbfe246edc74\") " Jan 12 13:22:06 crc kubenswrapper[4580]: I0112 13:22:06.216524 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a79b6ea-48d6-4df6-9a7e-dbfe246edc74-utilities\") pod \"9a79b6ea-48d6-4df6-9a7e-dbfe246edc74\" (UID: \"9a79b6ea-48d6-4df6-9a7e-dbfe246edc74\") " Jan 12 13:22:06 crc kubenswrapper[4580]: I0112 13:22:06.218065 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a79b6ea-48d6-4df6-9a7e-dbfe246edc74-utilities" (OuterVolumeSpecName: "utilities") pod 
"9a79b6ea-48d6-4df6-9a7e-dbfe246edc74" (UID: "9a79b6ea-48d6-4df6-9a7e-dbfe246edc74"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 12 13:22:06 crc kubenswrapper[4580]: I0112 13:22:06.219620 4580 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a79b6ea-48d6-4df6-9a7e-dbfe246edc74-utilities\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:06 crc kubenswrapper[4580]: I0112 13:22:06.242209 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a79b6ea-48d6-4df6-9a7e-dbfe246edc74-kube-api-access-jj2tz" (OuterVolumeSpecName: "kube-api-access-jj2tz") pod "9a79b6ea-48d6-4df6-9a7e-dbfe246edc74" (UID: "9a79b6ea-48d6-4df6-9a7e-dbfe246edc74"). InnerVolumeSpecName "kube-api-access-jj2tz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:22:06 crc kubenswrapper[4580]: I0112 13:22:06.261178 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a79b6ea-48d6-4df6-9a7e-dbfe246edc74-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9a79b6ea-48d6-4df6-9a7e-dbfe246edc74" (UID: "9a79b6ea-48d6-4df6-9a7e-dbfe246edc74"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 12 13:22:06 crc kubenswrapper[4580]: I0112 13:22:06.327056 4580 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a79b6ea-48d6-4df6-9a7e-dbfe246edc74-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:06 crc kubenswrapper[4580]: I0112 13:22:06.327089 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jj2tz\" (UniqueName: \"kubernetes.io/projected/9a79b6ea-48d6-4df6-9a7e-dbfe246edc74-kube-api-access-jj2tz\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:06 crc kubenswrapper[4580]: I0112 13:22:06.392628 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 12 13:22:06 crc kubenswrapper[4580]: I0112 13:22:06.797279 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 12 13:22:07 crc kubenswrapper[4580]: I0112 13:22:07.166094 4580 generic.go:334] "Generic (PLEG): container finished" podID="702612c1-966a-4293-b0dc-05901a325794" containerID="f22816032ca3f146f987bb95a8cf9b28010210c2f7936eb1ca2f8f7a56a04d49" exitCode=0 Jan 12 13:22:07 crc kubenswrapper[4580]: I0112 13:22:07.166151 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-mmlfs" event={"ID":"702612c1-966a-4293-b0dc-05901a325794","Type":"ContainerDied","Data":"f22816032ca3f146f987bb95a8cf9b28010210c2f7936eb1ca2f8f7a56a04d49"} Jan 12 13:22:07 crc kubenswrapper[4580]: I0112 13:22:07.166272 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jtmvw" Jan 12 13:22:07 crc kubenswrapper[4580]: I0112 13:22:07.237471 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jtmvw"] Jan 12 13:22:07 crc kubenswrapper[4580]: I0112 13:22:07.245370 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jtmvw"] Jan 12 13:22:07 crc kubenswrapper[4580]: I0112 13:22:07.299517 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a79b6ea-48d6-4df6-9a7e-dbfe246edc74" path="/var/lib/kubelet/pods/9a79b6ea-48d6-4df6-9a7e-dbfe246edc74/volumes" Jan 12 13:22:07 crc kubenswrapper[4580]: I0112 13:22:07.580682 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-blrls" Jan 12 13:22:07 crc kubenswrapper[4580]: I0112 13:22:07.654033 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3cbf5c7d-9220-43a8-9015-1c52d0c3855f-db-sync-config-data\") pod \"3cbf5c7d-9220-43a8-9015-1c52d0c3855f\" (UID: \"3cbf5c7d-9220-43a8-9015-1c52d0c3855f\") " Jan 12 13:22:07 crc kubenswrapper[4580]: I0112 13:22:07.654300 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cbf5c7d-9220-43a8-9015-1c52d0c3855f-combined-ca-bundle\") pod \"3cbf5c7d-9220-43a8-9015-1c52d0c3855f\" (UID: \"3cbf5c7d-9220-43a8-9015-1c52d0c3855f\") " Jan 12 13:22:07 crc kubenswrapper[4580]: I0112 13:22:07.654689 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzdbk\" (UniqueName: \"kubernetes.io/projected/3cbf5c7d-9220-43a8-9015-1c52d0c3855f-kube-api-access-fzdbk\") pod \"3cbf5c7d-9220-43a8-9015-1c52d0c3855f\" (UID: \"3cbf5c7d-9220-43a8-9015-1c52d0c3855f\") " Jan 12 13:22:07 crc kubenswrapper[4580]: I0112 
13:22:07.664201 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cbf5c7d-9220-43a8-9015-1c52d0c3855f-kube-api-access-fzdbk" (OuterVolumeSpecName: "kube-api-access-fzdbk") pod "3cbf5c7d-9220-43a8-9015-1c52d0c3855f" (UID: "3cbf5c7d-9220-43a8-9015-1c52d0c3855f"). InnerVolumeSpecName "kube-api-access-fzdbk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:22:07 crc kubenswrapper[4580]: I0112 13:22:07.675274 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cbf5c7d-9220-43a8-9015-1c52d0c3855f-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "3cbf5c7d-9220-43a8-9015-1c52d0c3855f" (UID: "3cbf5c7d-9220-43a8-9015-1c52d0c3855f"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:22:07 crc kubenswrapper[4580]: I0112 13:22:07.723211 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cbf5c7d-9220-43a8-9015-1c52d0c3855f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3cbf5c7d-9220-43a8-9015-1c52d0c3855f" (UID: "3cbf5c7d-9220-43a8-9015-1c52d0c3855f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:22:07 crc kubenswrapper[4580]: I0112 13:22:07.759270 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzdbk\" (UniqueName: \"kubernetes.io/projected/3cbf5c7d-9220-43a8-9015-1c52d0c3855f-kube-api-access-fzdbk\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:07 crc kubenswrapper[4580]: I0112 13:22:07.759305 4580 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3cbf5c7d-9220-43a8-9015-1c52d0c3855f-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:07 crc kubenswrapper[4580]: I0112 13:22:07.759429 4580 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cbf5c7d-9220-43a8-9015-1c52d0c3855f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.187592 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-blrls" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.189419 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-blrls" event={"ID":"3cbf5c7d-9220-43a8-9015-1c52d0c3855f","Type":"ContainerDied","Data":"63323b76e030c14c9c93459ca74872a9333ca94c3200068c7c987e21badd1692"} Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.189449 4580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63323b76e030c14c9c93459ca74872a9333ca94c3200068c7c987e21badd1692" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.336268 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-655fc5cf45-jcmxp"] Jan 12 13:22:08 crc kubenswrapper[4580]: E0112 13:22:08.336611 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11060a9d-34a1-4ac1-baa8-a478351504f3" containerName="extract-content" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.336630 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="11060a9d-34a1-4ac1-baa8-a478351504f3" containerName="extract-content" Jan 12 13:22:08 crc kubenswrapper[4580]: E0112 13:22:08.336644 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a79b6ea-48d6-4df6-9a7e-dbfe246edc74" containerName="extract-content" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.336652 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a79b6ea-48d6-4df6-9a7e-dbfe246edc74" containerName="extract-content" Jan 12 13:22:08 crc kubenswrapper[4580]: E0112 13:22:08.336666 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a79b6ea-48d6-4df6-9a7e-dbfe246edc74" containerName="extract-utilities" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.336673 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a79b6ea-48d6-4df6-9a7e-dbfe246edc74" containerName="extract-utilities" Jan 12 13:22:08 crc kubenswrapper[4580]: E0112 
13:22:08.336686 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a79b6ea-48d6-4df6-9a7e-dbfe246edc74" containerName="registry-server" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.336692 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a79b6ea-48d6-4df6-9a7e-dbfe246edc74" containerName="registry-server" Jan 12 13:22:08 crc kubenswrapper[4580]: E0112 13:22:08.336699 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11060a9d-34a1-4ac1-baa8-a478351504f3" containerName="extract-utilities" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.336705 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="11060a9d-34a1-4ac1-baa8-a478351504f3" containerName="extract-utilities" Jan 12 13:22:08 crc kubenswrapper[4580]: E0112 13:22:08.336720 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11060a9d-34a1-4ac1-baa8-a478351504f3" containerName="registry-server" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.336726 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="11060a9d-34a1-4ac1-baa8-a478351504f3" containerName="registry-server" Jan 12 13:22:08 crc kubenswrapper[4580]: E0112 13:22:08.336734 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cbf5c7d-9220-43a8-9015-1c52d0c3855f" containerName="barbican-db-sync" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.336739 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cbf5c7d-9220-43a8-9015-1c52d0c3855f" containerName="barbican-db-sync" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.336878 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="11060a9d-34a1-4ac1-baa8-a478351504f3" containerName="registry-server" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.336888 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cbf5c7d-9220-43a8-9015-1c52d0c3855f" containerName="barbican-db-sync" Jan 12 13:22:08 crc kubenswrapper[4580]: 
I0112 13:22:08.336902 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a79b6ea-48d6-4df6-9a7e-dbfe246edc74" containerName="registry-server" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.337680 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-655fc5cf45-jcmxp" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.341146 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.341325 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-nnq6f" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.353695 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-655fc5cf45-jcmxp"] Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.358543 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.375177 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eed9373c-ecc9-4510-bb6b-9171b70a9088-config-data\") pod \"barbican-worker-655fc5cf45-jcmxp\" (UID: \"eed9373c-ecc9-4510-bb6b-9171b70a9088\") " pod="openstack/barbican-worker-655fc5cf45-jcmxp" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.375268 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfq8d\" (UniqueName: \"kubernetes.io/projected/eed9373c-ecc9-4510-bb6b-9171b70a9088-kube-api-access-qfq8d\") pod \"barbican-worker-655fc5cf45-jcmxp\" (UID: \"eed9373c-ecc9-4510-bb6b-9171b70a9088\") " pod="openstack/barbican-worker-655fc5cf45-jcmxp" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.375355 4580 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eed9373c-ecc9-4510-bb6b-9171b70a9088-combined-ca-bundle\") pod \"barbican-worker-655fc5cf45-jcmxp\" (UID: \"eed9373c-ecc9-4510-bb6b-9171b70a9088\") " pod="openstack/barbican-worker-655fc5cf45-jcmxp" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.375539 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eed9373c-ecc9-4510-bb6b-9171b70a9088-config-data-custom\") pod \"barbican-worker-655fc5cf45-jcmxp\" (UID: \"eed9373c-ecc9-4510-bb6b-9171b70a9088\") " pod="openstack/barbican-worker-655fc5cf45-jcmxp" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.375566 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eed9373c-ecc9-4510-bb6b-9171b70a9088-logs\") pod \"barbican-worker-655fc5cf45-jcmxp\" (UID: \"eed9373c-ecc9-4510-bb6b-9171b70a9088\") " pod="openstack/barbican-worker-655fc5cf45-jcmxp" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.383487 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-5bfdbc7dc6-bk5g5"] Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.384874 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-5bfdbc7dc6-bk5g5" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.394926 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.433660 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5bfdbc7dc6-bk5g5"] Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.477873 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfq8d\" (UniqueName: \"kubernetes.io/projected/eed9373c-ecc9-4510-bb6b-9171b70a9088-kube-api-access-qfq8d\") pod \"barbican-worker-655fc5cf45-jcmxp\" (UID: \"eed9373c-ecc9-4510-bb6b-9171b70a9088\") " pod="openstack/barbican-worker-655fc5cf45-jcmxp" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.477966 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c62tl\" (UniqueName: \"kubernetes.io/projected/1b072324-9c35-458a-8d4a-1759b9ed2883-kube-api-access-c62tl\") pod \"barbican-keystone-listener-5bfdbc7dc6-bk5g5\" (UID: \"1b072324-9c35-458a-8d4a-1759b9ed2883\") " pod="openstack/barbican-keystone-listener-5bfdbc7dc6-bk5g5" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.478030 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eed9373c-ecc9-4510-bb6b-9171b70a9088-combined-ca-bundle\") pod \"barbican-worker-655fc5cf45-jcmxp\" (UID: \"eed9373c-ecc9-4510-bb6b-9171b70a9088\") " pod="openstack/barbican-worker-655fc5cf45-jcmxp" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.478073 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b072324-9c35-458a-8d4a-1759b9ed2883-config-data\") pod 
\"barbican-keystone-listener-5bfdbc7dc6-bk5g5\" (UID: \"1b072324-9c35-458a-8d4a-1759b9ed2883\") " pod="openstack/barbican-keystone-listener-5bfdbc7dc6-bk5g5" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.478126 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1b072324-9c35-458a-8d4a-1759b9ed2883-config-data-custom\") pod \"barbican-keystone-listener-5bfdbc7dc6-bk5g5\" (UID: \"1b072324-9c35-458a-8d4a-1759b9ed2883\") " pod="openstack/barbican-keystone-listener-5bfdbc7dc6-bk5g5" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.478177 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b072324-9c35-458a-8d4a-1759b9ed2883-combined-ca-bundle\") pod \"barbican-keystone-listener-5bfdbc7dc6-bk5g5\" (UID: \"1b072324-9c35-458a-8d4a-1759b9ed2883\") " pod="openstack/barbican-keystone-listener-5bfdbc7dc6-bk5g5" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.478237 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eed9373c-ecc9-4510-bb6b-9171b70a9088-config-data-custom\") pod \"barbican-worker-655fc5cf45-jcmxp\" (UID: \"eed9373c-ecc9-4510-bb6b-9171b70a9088\") " pod="openstack/barbican-worker-655fc5cf45-jcmxp" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.478268 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eed9373c-ecc9-4510-bb6b-9171b70a9088-logs\") pod \"barbican-worker-655fc5cf45-jcmxp\" (UID: \"eed9373c-ecc9-4510-bb6b-9171b70a9088\") " pod="openstack/barbican-worker-655fc5cf45-jcmxp" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.478297 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/1b072324-9c35-458a-8d4a-1759b9ed2883-logs\") pod \"barbican-keystone-listener-5bfdbc7dc6-bk5g5\" (UID: \"1b072324-9c35-458a-8d4a-1759b9ed2883\") " pod="openstack/barbican-keystone-listener-5bfdbc7dc6-bk5g5" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.478456 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eed9373c-ecc9-4510-bb6b-9171b70a9088-config-data\") pod \"barbican-worker-655fc5cf45-jcmxp\" (UID: \"eed9373c-ecc9-4510-bb6b-9171b70a9088\") " pod="openstack/barbican-worker-655fc5cf45-jcmxp" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.481402 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eed9373c-ecc9-4510-bb6b-9171b70a9088-logs\") pod \"barbican-worker-655fc5cf45-jcmxp\" (UID: \"eed9373c-ecc9-4510-bb6b-9171b70a9088\") " pod="openstack/barbican-worker-655fc5cf45-jcmxp" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.486146 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-66cdd4b5b5-q7ttn"] Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.487510 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66cdd4b5b5-q7ttn" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.488416 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eed9373c-ecc9-4510-bb6b-9171b70a9088-combined-ca-bundle\") pod \"barbican-worker-655fc5cf45-jcmxp\" (UID: \"eed9373c-ecc9-4510-bb6b-9171b70a9088\") " pod="openstack/barbican-worker-655fc5cf45-jcmxp" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.490422 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eed9373c-ecc9-4510-bb6b-9171b70a9088-config-data\") pod \"barbican-worker-655fc5cf45-jcmxp\" (UID: \"eed9373c-ecc9-4510-bb6b-9171b70a9088\") " pod="openstack/barbican-worker-655fc5cf45-jcmxp" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.517891 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eed9373c-ecc9-4510-bb6b-9171b70a9088-config-data-custom\") pod \"barbican-worker-655fc5cf45-jcmxp\" (UID: \"eed9373c-ecc9-4510-bb6b-9171b70a9088\") " pod="openstack/barbican-worker-655fc5cf45-jcmxp" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.520856 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfq8d\" (UniqueName: \"kubernetes.io/projected/eed9373c-ecc9-4510-bb6b-9171b70a9088-kube-api-access-qfq8d\") pod \"barbican-worker-655fc5cf45-jcmxp\" (UID: \"eed9373c-ecc9-4510-bb6b-9171b70a9088\") " pod="openstack/barbican-worker-655fc5cf45-jcmxp" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.528538 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66cdd4b5b5-q7ttn"] Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.600786 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/1b072324-9c35-458a-8d4a-1759b9ed2883-config-data-custom\") pod \"barbican-keystone-listener-5bfdbc7dc6-bk5g5\" (UID: \"1b072324-9c35-458a-8d4a-1759b9ed2883\") " pod="openstack/barbican-keystone-listener-5bfdbc7dc6-bk5g5" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.600897 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b072324-9c35-458a-8d4a-1759b9ed2883-combined-ca-bundle\") pod \"barbican-keystone-listener-5bfdbc7dc6-bk5g5\" (UID: \"1b072324-9c35-458a-8d4a-1759b9ed2883\") " pod="openstack/barbican-keystone-listener-5bfdbc7dc6-bk5g5" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.601014 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b072324-9c35-458a-8d4a-1759b9ed2883-logs\") pod \"barbican-keystone-listener-5bfdbc7dc6-bk5g5\" (UID: \"1b072324-9c35-458a-8d4a-1759b9ed2883\") " pod="openstack/barbican-keystone-listener-5bfdbc7dc6-bk5g5" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.601208 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d79305a3-d758-40cb-826b-a6c7aa65dbea-dns-swift-storage-0\") pod \"dnsmasq-dns-66cdd4b5b5-q7ttn\" (UID: \"d79305a3-d758-40cb-826b-a6c7aa65dbea\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-q7ttn" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.601246 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d79305a3-d758-40cb-826b-a6c7aa65dbea-config\") pod \"dnsmasq-dns-66cdd4b5b5-q7ttn\" (UID: \"d79305a3-d758-40cb-826b-a6c7aa65dbea\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-q7ttn" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.601299 4580 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d79305a3-d758-40cb-826b-a6c7aa65dbea-ovsdbserver-nb\") pod \"dnsmasq-dns-66cdd4b5b5-q7ttn\" (UID: \"d79305a3-d758-40cb-826b-a6c7aa65dbea\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-q7ttn" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.601410 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d79305a3-d758-40cb-826b-a6c7aa65dbea-dns-svc\") pod \"dnsmasq-dns-66cdd4b5b5-q7ttn\" (UID: \"d79305a3-d758-40cb-826b-a6c7aa65dbea\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-q7ttn" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.601459 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c62tl\" (UniqueName: \"kubernetes.io/projected/1b072324-9c35-458a-8d4a-1759b9ed2883-kube-api-access-c62tl\") pod \"barbican-keystone-listener-5bfdbc7dc6-bk5g5\" (UID: \"1b072324-9c35-458a-8d4a-1759b9ed2883\") " pod="openstack/barbican-keystone-listener-5bfdbc7dc6-bk5g5" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.601493 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xxhr\" (UniqueName: \"kubernetes.io/projected/d79305a3-d758-40cb-826b-a6c7aa65dbea-kube-api-access-8xxhr\") pod \"dnsmasq-dns-66cdd4b5b5-q7ttn\" (UID: \"d79305a3-d758-40cb-826b-a6c7aa65dbea\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-q7ttn" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.601576 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b072324-9c35-458a-8d4a-1759b9ed2883-config-data\") pod \"barbican-keystone-listener-5bfdbc7dc6-bk5g5\" (UID: \"1b072324-9c35-458a-8d4a-1759b9ed2883\") " pod="openstack/barbican-keystone-listener-5bfdbc7dc6-bk5g5" Jan 12 13:22:08 crc 
kubenswrapper[4580]: I0112 13:22:08.601621 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d79305a3-d758-40cb-826b-a6c7aa65dbea-ovsdbserver-sb\") pod \"dnsmasq-dns-66cdd4b5b5-q7ttn\" (UID: \"d79305a3-d758-40cb-826b-a6c7aa65dbea\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-q7ttn" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.606962 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b072324-9c35-458a-8d4a-1759b9ed2883-logs\") pod \"barbican-keystone-listener-5bfdbc7dc6-bk5g5\" (UID: \"1b072324-9c35-458a-8d4a-1759b9ed2883\") " pod="openstack/barbican-keystone-listener-5bfdbc7dc6-bk5g5" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.608709 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-579c8f556d-z7gld"] Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.609661 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b072324-9c35-458a-8d4a-1759b9ed2883-config-data\") pod \"barbican-keystone-listener-5bfdbc7dc6-bk5g5\" (UID: \"1b072324-9c35-458a-8d4a-1759b9ed2883\") " pod="openstack/barbican-keystone-listener-5bfdbc7dc6-bk5g5" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.610469 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-579c8f556d-z7gld" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.610785 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b072324-9c35-458a-8d4a-1759b9ed2883-combined-ca-bundle\") pod \"barbican-keystone-listener-5bfdbc7dc6-bk5g5\" (UID: \"1b072324-9c35-458a-8d4a-1759b9ed2883\") " pod="openstack/barbican-keystone-listener-5bfdbc7dc6-bk5g5" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.617204 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1b072324-9c35-458a-8d4a-1759b9ed2883-config-data-custom\") pod \"barbican-keystone-listener-5bfdbc7dc6-bk5g5\" (UID: \"1b072324-9c35-458a-8d4a-1759b9ed2883\") " pod="openstack/barbican-keystone-listener-5bfdbc7dc6-bk5g5" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.619417 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-579c8f556d-z7gld"] Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.620189 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c62tl\" (UniqueName: \"kubernetes.io/projected/1b072324-9c35-458a-8d4a-1759b9ed2883-kube-api-access-c62tl\") pod \"barbican-keystone-listener-5bfdbc7dc6-bk5g5\" (UID: \"1b072324-9c35-458a-8d4a-1759b9ed2883\") " pod="openstack/barbican-keystone-listener-5bfdbc7dc6-bk5g5" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.632048 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.659607 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-655fc5cf45-jcmxp" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.703466 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d79305a3-d758-40cb-826b-a6c7aa65dbea-ovsdbserver-nb\") pod \"dnsmasq-dns-66cdd4b5b5-q7ttn\" (UID: \"d79305a3-d758-40cb-826b-a6c7aa65dbea\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-q7ttn" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.703543 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d79305a3-d758-40cb-826b-a6c7aa65dbea-dns-svc\") pod \"dnsmasq-dns-66cdd4b5b5-q7ttn\" (UID: \"d79305a3-d758-40cb-826b-a6c7aa65dbea\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-q7ttn" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.703576 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xxhr\" (UniqueName: \"kubernetes.io/projected/d79305a3-d758-40cb-826b-a6c7aa65dbea-kube-api-access-8xxhr\") pod \"dnsmasq-dns-66cdd4b5b5-q7ttn\" (UID: \"d79305a3-d758-40cb-826b-a6c7aa65dbea\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-q7ttn" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.703618 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fcddf9d2-2130-4f76-9318-373ba59d2f70-logs\") pod \"barbican-api-579c8f556d-z7gld\" (UID: \"fcddf9d2-2130-4f76-9318-373ba59d2f70\") " pod="openstack/barbican-api-579c8f556d-z7gld" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.703649 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d79305a3-d758-40cb-826b-a6c7aa65dbea-ovsdbserver-sb\") pod \"dnsmasq-dns-66cdd4b5b5-q7ttn\" (UID: \"d79305a3-d758-40cb-826b-a6c7aa65dbea\") " 
pod="openstack/dnsmasq-dns-66cdd4b5b5-q7ttn" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.703685 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwkhf\" (UniqueName: \"kubernetes.io/projected/fcddf9d2-2130-4f76-9318-373ba59d2f70-kube-api-access-jwkhf\") pod \"barbican-api-579c8f556d-z7gld\" (UID: \"fcddf9d2-2130-4f76-9318-373ba59d2f70\") " pod="openstack/barbican-api-579c8f556d-z7gld" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.703770 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcddf9d2-2130-4f76-9318-373ba59d2f70-combined-ca-bundle\") pod \"barbican-api-579c8f556d-z7gld\" (UID: \"fcddf9d2-2130-4f76-9318-373ba59d2f70\") " pod="openstack/barbican-api-579c8f556d-z7gld" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.703801 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcddf9d2-2130-4f76-9318-373ba59d2f70-config-data\") pod \"barbican-api-579c8f556d-z7gld\" (UID: \"fcddf9d2-2130-4f76-9318-373ba59d2f70\") " pod="openstack/barbican-api-579c8f556d-z7gld" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.703840 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d79305a3-d758-40cb-826b-a6c7aa65dbea-dns-swift-storage-0\") pod \"dnsmasq-dns-66cdd4b5b5-q7ttn\" (UID: \"d79305a3-d758-40cb-826b-a6c7aa65dbea\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-q7ttn" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.703867 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d79305a3-d758-40cb-826b-a6c7aa65dbea-config\") pod \"dnsmasq-dns-66cdd4b5b5-q7ttn\" (UID: 
\"d79305a3-d758-40cb-826b-a6c7aa65dbea\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-q7ttn" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.703890 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fcddf9d2-2130-4f76-9318-373ba59d2f70-config-data-custom\") pod \"barbican-api-579c8f556d-z7gld\" (UID: \"fcddf9d2-2130-4f76-9318-373ba59d2f70\") " pod="openstack/barbican-api-579c8f556d-z7gld" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.704767 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d79305a3-d758-40cb-826b-a6c7aa65dbea-ovsdbserver-nb\") pod \"dnsmasq-dns-66cdd4b5b5-q7ttn\" (UID: \"d79305a3-d758-40cb-826b-a6c7aa65dbea\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-q7ttn" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.705015 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5bfdbc7dc6-bk5g5" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.705288 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d79305a3-d758-40cb-826b-a6c7aa65dbea-ovsdbserver-sb\") pod \"dnsmasq-dns-66cdd4b5b5-q7ttn\" (UID: \"d79305a3-d758-40cb-826b-a6c7aa65dbea\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-q7ttn" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.706269 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d79305a3-d758-40cb-826b-a6c7aa65dbea-dns-svc\") pod \"dnsmasq-dns-66cdd4b5b5-q7ttn\" (UID: \"d79305a3-d758-40cb-826b-a6c7aa65dbea\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-q7ttn" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.707039 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/d79305a3-d758-40cb-826b-a6c7aa65dbea-dns-swift-storage-0\") pod \"dnsmasq-dns-66cdd4b5b5-q7ttn\" (UID: \"d79305a3-d758-40cb-826b-a6c7aa65dbea\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-q7ttn" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.707794 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d79305a3-d758-40cb-826b-a6c7aa65dbea-config\") pod \"dnsmasq-dns-66cdd4b5b5-q7ttn\" (UID: \"d79305a3-d758-40cb-826b-a6c7aa65dbea\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-q7ttn" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.722654 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xxhr\" (UniqueName: \"kubernetes.io/projected/d79305a3-d758-40cb-826b-a6c7aa65dbea-kube-api-access-8xxhr\") pod \"dnsmasq-dns-66cdd4b5b5-q7ttn\" (UID: \"d79305a3-d758-40cb-826b-a6c7aa65dbea\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-q7ttn" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.793429 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-mmlfs" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.816133 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcddf9d2-2130-4f76-9318-373ba59d2f70-combined-ca-bundle\") pod \"barbican-api-579c8f556d-z7gld\" (UID: \"fcddf9d2-2130-4f76-9318-373ba59d2f70\") " pod="openstack/barbican-api-579c8f556d-z7gld" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.816195 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcddf9d2-2130-4f76-9318-373ba59d2f70-config-data\") pod \"barbican-api-579c8f556d-z7gld\" (UID: \"fcddf9d2-2130-4f76-9318-373ba59d2f70\") " pod="openstack/barbican-api-579c8f556d-z7gld" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.816232 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fcddf9d2-2130-4f76-9318-373ba59d2f70-config-data-custom\") pod \"barbican-api-579c8f556d-z7gld\" (UID: \"fcddf9d2-2130-4f76-9318-373ba59d2f70\") " pod="openstack/barbican-api-579c8f556d-z7gld" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.816298 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fcddf9d2-2130-4f76-9318-373ba59d2f70-logs\") pod \"barbican-api-579c8f556d-z7gld\" (UID: \"fcddf9d2-2130-4f76-9318-373ba59d2f70\") " pod="openstack/barbican-api-579c8f556d-z7gld" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.816333 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwkhf\" (UniqueName: \"kubernetes.io/projected/fcddf9d2-2130-4f76-9318-373ba59d2f70-kube-api-access-jwkhf\") pod \"barbican-api-579c8f556d-z7gld\" (UID: \"fcddf9d2-2130-4f76-9318-373ba59d2f70\") " 
pod="openstack/barbican-api-579c8f556d-z7gld" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.818395 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fcddf9d2-2130-4f76-9318-373ba59d2f70-logs\") pod \"barbican-api-579c8f556d-z7gld\" (UID: \"fcddf9d2-2130-4f76-9318-373ba59d2f70\") " pod="openstack/barbican-api-579c8f556d-z7gld" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.823975 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcddf9d2-2130-4f76-9318-373ba59d2f70-combined-ca-bundle\") pod \"barbican-api-579c8f556d-z7gld\" (UID: \"fcddf9d2-2130-4f76-9318-373ba59d2f70\") " pod="openstack/barbican-api-579c8f556d-z7gld" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.830682 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fcddf9d2-2130-4f76-9318-373ba59d2f70-config-data-custom\") pod \"barbican-api-579c8f556d-z7gld\" (UID: \"fcddf9d2-2130-4f76-9318-373ba59d2f70\") " pod="openstack/barbican-api-579c8f556d-z7gld" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.831911 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcddf9d2-2130-4f76-9318-373ba59d2f70-config-data\") pod \"barbican-api-579c8f556d-z7gld\" (UID: \"fcddf9d2-2130-4f76-9318-373ba59d2f70\") " pod="openstack/barbican-api-579c8f556d-z7gld" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.859166 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwkhf\" (UniqueName: \"kubernetes.io/projected/fcddf9d2-2130-4f76-9318-373ba59d2f70-kube-api-access-jwkhf\") pod \"barbican-api-579c8f556d-z7gld\" (UID: \"fcddf9d2-2130-4f76-9318-373ba59d2f70\") " pod="openstack/barbican-api-579c8f556d-z7gld" Jan 12 13:22:08 crc kubenswrapper[4580]: 
I0112 13:22:08.894796 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66cdd4b5b5-q7ttn" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.922870 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/702612c1-966a-4293-b0dc-05901a325794-scripts\") pod \"702612c1-966a-4293-b0dc-05901a325794\" (UID: \"702612c1-966a-4293-b0dc-05901a325794\") " Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.923161 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/702612c1-966a-4293-b0dc-05901a325794-combined-ca-bundle\") pod \"702612c1-966a-4293-b0dc-05901a325794\" (UID: \"702612c1-966a-4293-b0dc-05901a325794\") " Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.923443 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zx8j\" (UniqueName: \"kubernetes.io/projected/702612c1-966a-4293-b0dc-05901a325794-kube-api-access-8zx8j\") pod \"702612c1-966a-4293-b0dc-05901a325794\" (UID: \"702612c1-966a-4293-b0dc-05901a325794\") " Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.923560 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/702612c1-966a-4293-b0dc-05901a325794-config-data\") pod \"702612c1-966a-4293-b0dc-05901a325794\" (UID: \"702612c1-966a-4293-b0dc-05901a325794\") " Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.923601 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/702612c1-966a-4293-b0dc-05901a325794-etc-machine-id\") pod \"702612c1-966a-4293-b0dc-05901a325794\" (UID: \"702612c1-966a-4293-b0dc-05901a325794\") " Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.923804 4580 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/702612c1-966a-4293-b0dc-05901a325794-db-sync-config-data\") pod \"702612c1-966a-4293-b0dc-05901a325794\" (UID: \"702612c1-966a-4293-b0dc-05901a325794\") " Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.929326 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/702612c1-966a-4293-b0dc-05901a325794-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "702612c1-966a-4293-b0dc-05901a325794" (UID: "702612c1-966a-4293-b0dc-05901a325794"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.971446 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/702612c1-966a-4293-b0dc-05901a325794-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "702612c1-966a-4293-b0dc-05901a325794" (UID: "702612c1-966a-4293-b0dc-05901a325794"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.971547 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/702612c1-966a-4293-b0dc-05901a325794-scripts" (OuterVolumeSpecName: "scripts") pod "702612c1-966a-4293-b0dc-05901a325794" (UID: "702612c1-966a-4293-b0dc-05901a325794"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.974987 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/702612c1-966a-4293-b0dc-05901a325794-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "702612c1-966a-4293-b0dc-05901a325794" (UID: "702612c1-966a-4293-b0dc-05901a325794"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.983266 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/702612c1-966a-4293-b0dc-05901a325794-kube-api-access-8zx8j" (OuterVolumeSpecName: "kube-api-access-8zx8j") pod "702612c1-966a-4293-b0dc-05901a325794" (UID: "702612c1-966a-4293-b0dc-05901a325794"). InnerVolumeSpecName "kube-api-access-8zx8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:22:08 crc kubenswrapper[4580]: I0112 13:22:08.996075 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-579c8f556d-z7gld" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.029917 4580 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/702612c1-966a-4293-b0dc-05901a325794-scripts\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.029950 4580 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/702612c1-966a-4293-b0dc-05901a325794-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.029970 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zx8j\" (UniqueName: \"kubernetes.io/projected/702612c1-966a-4293-b0dc-05901a325794-kube-api-access-8zx8j\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.029982 4580 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/702612c1-966a-4293-b0dc-05901a325794-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.029991 4580 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/702612c1-966a-4293-b0dc-05901a325794-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.064162 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/702612c1-966a-4293-b0dc-05901a325794-config-data" (OuterVolumeSpecName: "config-data") pod "702612c1-966a-4293-b0dc-05901a325794" (UID: "702612c1-966a-4293-b0dc-05901a325794"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.133586 4580 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/702612c1-966a-4293-b0dc-05901a325794-config-data\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.209343 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-mmlfs" event={"ID":"702612c1-966a-4293-b0dc-05901a325794","Type":"ContainerDied","Data":"91816bd55f430375591c93aba4ab093551d724a1ef11aad768b19039a8662d4b"} Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.209386 4580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91816bd55f430375591c93aba4ab093551d724a1ef11aad768b19039a8662d4b" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.209470 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-mmlfs" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.379311 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-655fc5cf45-jcmxp"] Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.483713 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 12 13:22:09 crc kubenswrapper[4580]: E0112 13:22:09.484266 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="702612c1-966a-4293-b0dc-05901a325794" containerName="cinder-db-sync" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.484287 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="702612c1-966a-4293-b0dc-05901a325794" containerName="cinder-db-sync" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.484507 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="702612c1-966a-4293-b0dc-05901a325794" containerName="cinder-db-sync" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.485595 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.498148 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5bfdbc7dc6-bk5g5"] Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.506784 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.513096 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.515040 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.515690 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-tktfx" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.515830 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.541407 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9b1540ce-a351-4090-bf54-e253994d9020-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"9b1540ce-a351-4090-bf54-e253994d9020\") " pod="openstack/cinder-scheduler-0" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.541460 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b1540ce-a351-4090-bf54-e253994d9020-scripts\") pod \"cinder-scheduler-0\" (UID: \"9b1540ce-a351-4090-bf54-e253994d9020\") " pod="openstack/cinder-scheduler-0" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.541593 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/9b1540ce-a351-4090-bf54-e253994d9020-config-data\") pod \"cinder-scheduler-0\" (UID: \"9b1540ce-a351-4090-bf54-e253994d9020\") " pod="openstack/cinder-scheduler-0" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.541705 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9b1540ce-a351-4090-bf54-e253994d9020-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"9b1540ce-a351-4090-bf54-e253994d9020\") " pod="openstack/cinder-scheduler-0" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.541779 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b1540ce-a351-4090-bf54-e253994d9020-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"9b1540ce-a351-4090-bf54-e253994d9020\") " pod="openstack/cinder-scheduler-0" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.541854 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2qhw\" (UniqueName: \"kubernetes.io/projected/9b1540ce-a351-4090-bf54-e253994d9020-kube-api-access-t2qhw\") pod \"cinder-scheduler-0\" (UID: \"9b1540ce-a351-4090-bf54-e253994d9020\") " pod="openstack/cinder-scheduler-0" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.555221 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66cdd4b5b5-q7ttn"] Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.560638 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66cdd4b5b5-q7ttn"] Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.590802 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75dbb546bf-qwbpt"] Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.597642 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75dbb546bf-qwbpt" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.605050 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75dbb546bf-qwbpt"] Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.632684 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.634872 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.638404 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.643478 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9b1540ce-a351-4090-bf54-e253994d9020-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"9b1540ce-a351-4090-bf54-e253994d9020\") " pod="openstack/cinder-scheduler-0" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.643530 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1-dns-svc\") pod \"dnsmasq-dns-75dbb546bf-qwbpt\" (UID: \"4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1\") " pod="openstack/dnsmasq-dns-75dbb546bf-qwbpt" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.643548 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1-config\") pod \"dnsmasq-dns-75dbb546bf-qwbpt\" (UID: \"4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1\") " pod="openstack/dnsmasq-dns-75dbb546bf-qwbpt" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.643570 4580 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b1540ce-a351-4090-bf54-e253994d9020-scripts\") pod \"cinder-scheduler-0\" (UID: \"9b1540ce-a351-4090-bf54-e253994d9020\") " pod="openstack/cinder-scheduler-0" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.643609 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnz7f\" (UniqueName: \"kubernetes.io/projected/4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1-kube-api-access-wnz7f\") pod \"dnsmasq-dns-75dbb546bf-qwbpt\" (UID: \"4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1\") " pod="openstack/dnsmasq-dns-75dbb546bf-qwbpt" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.643631 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b1540ce-a351-4090-bf54-e253994d9020-config-data\") pod \"cinder-scheduler-0\" (UID: \"9b1540ce-a351-4090-bf54-e253994d9020\") " pod="openstack/cinder-scheduler-0" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.643666 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9b1540ce-a351-4090-bf54-e253994d9020-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"9b1540ce-a351-4090-bf54-e253994d9020\") " pod="openstack/cinder-scheduler-0" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.643694 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1-ovsdbserver-nb\") pod \"dnsmasq-dns-75dbb546bf-qwbpt\" (UID: \"4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1\") " pod="openstack/dnsmasq-dns-75dbb546bf-qwbpt" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.643712 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9b1540ce-a351-4090-bf54-e253994d9020-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"9b1540ce-a351-4090-bf54-e253994d9020\") " pod="openstack/cinder-scheduler-0" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.643745 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2qhw\" (UniqueName: \"kubernetes.io/projected/9b1540ce-a351-4090-bf54-e253994d9020-kube-api-access-t2qhw\") pod \"cinder-scheduler-0\" (UID: \"9b1540ce-a351-4090-bf54-e253994d9020\") " pod="openstack/cinder-scheduler-0" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.643760 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1-dns-swift-storage-0\") pod \"dnsmasq-dns-75dbb546bf-qwbpt\" (UID: \"4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1\") " pod="openstack/dnsmasq-dns-75dbb546bf-qwbpt" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.643782 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1-ovsdbserver-sb\") pod \"dnsmasq-dns-75dbb546bf-qwbpt\" (UID: \"4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1\") " pod="openstack/dnsmasq-dns-75dbb546bf-qwbpt" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.643859 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9b1540ce-a351-4090-bf54-e253994d9020-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"9b1540ce-a351-4090-bf54-e253994d9020\") " pod="openstack/cinder-scheduler-0" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.649084 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.652420 4580 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b1540ce-a351-4090-bf54-e253994d9020-config-data\") pod \"cinder-scheduler-0\" (UID: \"9b1540ce-a351-4090-bf54-e253994d9020\") " pod="openstack/cinder-scheduler-0" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.652695 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b1540ce-a351-4090-bf54-e253994d9020-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"9b1540ce-a351-4090-bf54-e253994d9020\") " pod="openstack/cinder-scheduler-0" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.661244 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b1540ce-a351-4090-bf54-e253994d9020-scripts\") pod \"cinder-scheduler-0\" (UID: \"9b1540ce-a351-4090-bf54-e253994d9020\") " pod="openstack/cinder-scheduler-0" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.668392 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2qhw\" (UniqueName: \"kubernetes.io/projected/9b1540ce-a351-4090-bf54-e253994d9020-kube-api-access-t2qhw\") pod \"cinder-scheduler-0\" (UID: \"9b1540ce-a351-4090-bf54-e253994d9020\") " pod="openstack/cinder-scheduler-0" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.672376 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9b1540ce-a351-4090-bf54-e253994d9020-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"9b1540ce-a351-4090-bf54-e253994d9020\") " pod="openstack/cinder-scheduler-0" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.745904 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/44f5eb50-8e4a-4ed3-959c-ba730be9863f-config-data\") pod \"cinder-api-0\" (UID: \"44f5eb50-8e4a-4ed3-959c-ba730be9863f\") " pod="openstack/cinder-api-0" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.745962 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44f5eb50-8e4a-4ed3-959c-ba730be9863f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"44f5eb50-8e4a-4ed3-959c-ba730be9863f\") " pod="openstack/cinder-api-0" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.745997 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1-ovsdbserver-nb\") pod \"dnsmasq-dns-75dbb546bf-qwbpt\" (UID: \"4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1\") " pod="openstack/dnsmasq-dns-75dbb546bf-qwbpt" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.746036 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1-dns-swift-storage-0\") pod \"dnsmasq-dns-75dbb546bf-qwbpt\" (UID: \"4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1\") " pod="openstack/dnsmasq-dns-75dbb546bf-qwbpt" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.746057 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/44f5eb50-8e4a-4ed3-959c-ba730be9863f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"44f5eb50-8e4a-4ed3-959c-ba730be9863f\") " pod="openstack/cinder-api-0" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.746078 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1-ovsdbserver-sb\") pod 
\"dnsmasq-dns-75dbb546bf-qwbpt\" (UID: \"4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1\") " pod="openstack/dnsmasq-dns-75dbb546bf-qwbpt" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.746144 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/44f5eb50-8e4a-4ed3-959c-ba730be9863f-config-data-custom\") pod \"cinder-api-0\" (UID: \"44f5eb50-8e4a-4ed3-959c-ba730be9863f\") " pod="openstack/cinder-api-0" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.746162 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44f5eb50-8e4a-4ed3-959c-ba730be9863f-logs\") pod \"cinder-api-0\" (UID: \"44f5eb50-8e4a-4ed3-959c-ba730be9863f\") " pod="openstack/cinder-api-0" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.746197 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1-dns-svc\") pod \"dnsmasq-dns-75dbb546bf-qwbpt\" (UID: \"4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1\") " pod="openstack/dnsmasq-dns-75dbb546bf-qwbpt" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.746213 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkvjt\" (UniqueName: \"kubernetes.io/projected/44f5eb50-8e4a-4ed3-959c-ba730be9863f-kube-api-access-nkvjt\") pod \"cinder-api-0\" (UID: \"44f5eb50-8e4a-4ed3-959c-ba730be9863f\") " pod="openstack/cinder-api-0" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.746228 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1-config\") pod \"dnsmasq-dns-75dbb546bf-qwbpt\" (UID: \"4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1\") " 
pod="openstack/dnsmasq-dns-75dbb546bf-qwbpt" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.746247 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44f5eb50-8e4a-4ed3-959c-ba730be9863f-scripts\") pod \"cinder-api-0\" (UID: \"44f5eb50-8e4a-4ed3-959c-ba730be9863f\") " pod="openstack/cinder-api-0" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.746279 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnz7f\" (UniqueName: \"kubernetes.io/projected/4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1-kube-api-access-wnz7f\") pod \"dnsmasq-dns-75dbb546bf-qwbpt\" (UID: \"4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1\") " pod="openstack/dnsmasq-dns-75dbb546bf-qwbpt" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.747390 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1-ovsdbserver-nb\") pod \"dnsmasq-dns-75dbb546bf-qwbpt\" (UID: \"4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1\") " pod="openstack/dnsmasq-dns-75dbb546bf-qwbpt" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.747894 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1-dns-swift-storage-0\") pod \"dnsmasq-dns-75dbb546bf-qwbpt\" (UID: \"4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1\") " pod="openstack/dnsmasq-dns-75dbb546bf-qwbpt" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.748438 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1-ovsdbserver-sb\") pod \"dnsmasq-dns-75dbb546bf-qwbpt\" (UID: \"4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1\") " pod="openstack/dnsmasq-dns-75dbb546bf-qwbpt" Jan 12 13:22:09 crc 
kubenswrapper[4580]: I0112 13:22:09.748973 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1-dns-svc\") pod \"dnsmasq-dns-75dbb546bf-qwbpt\" (UID: \"4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1\") " pod="openstack/dnsmasq-dns-75dbb546bf-qwbpt" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.749507 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1-config\") pod \"dnsmasq-dns-75dbb546bf-qwbpt\" (UID: \"4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1\") " pod="openstack/dnsmasq-dns-75dbb546bf-qwbpt" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.763486 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnz7f\" (UniqueName: \"kubernetes.io/projected/4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1-kube-api-access-wnz7f\") pod \"dnsmasq-dns-75dbb546bf-qwbpt\" (UID: \"4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1\") " pod="openstack/dnsmasq-dns-75dbb546bf-qwbpt" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.848404 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44f5eb50-8e4a-4ed3-959c-ba730be9863f-config-data\") pod \"cinder-api-0\" (UID: \"44f5eb50-8e4a-4ed3-959c-ba730be9863f\") " pod="openstack/cinder-api-0" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.848495 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44f5eb50-8e4a-4ed3-959c-ba730be9863f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"44f5eb50-8e4a-4ed3-959c-ba730be9863f\") " pod="openstack/cinder-api-0" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.848566 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/44f5eb50-8e4a-4ed3-959c-ba730be9863f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"44f5eb50-8e4a-4ed3-959c-ba730be9863f\") " pod="openstack/cinder-api-0" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.848687 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/44f5eb50-8e4a-4ed3-959c-ba730be9863f-config-data-custom\") pod \"cinder-api-0\" (UID: \"44f5eb50-8e4a-4ed3-959c-ba730be9863f\") " pod="openstack/cinder-api-0" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.848711 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44f5eb50-8e4a-4ed3-959c-ba730be9863f-logs\") pod \"cinder-api-0\" (UID: \"44f5eb50-8e4a-4ed3-959c-ba730be9863f\") " pod="openstack/cinder-api-0" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.848716 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/44f5eb50-8e4a-4ed3-959c-ba730be9863f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"44f5eb50-8e4a-4ed3-959c-ba730be9863f\") " pod="openstack/cinder-api-0" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.849010 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkvjt\" (UniqueName: \"kubernetes.io/projected/44f5eb50-8e4a-4ed3-959c-ba730be9863f-kube-api-access-nkvjt\") pod \"cinder-api-0\" (UID: \"44f5eb50-8e4a-4ed3-959c-ba730be9863f\") " pod="openstack/cinder-api-0" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.849467 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44f5eb50-8e4a-4ed3-959c-ba730be9863f-scripts\") pod \"cinder-api-0\" (UID: \"44f5eb50-8e4a-4ed3-959c-ba730be9863f\") " pod="openstack/cinder-api-0" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 
13:22:09.849343 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44f5eb50-8e4a-4ed3-959c-ba730be9863f-logs\") pod \"cinder-api-0\" (UID: \"44f5eb50-8e4a-4ed3-959c-ba730be9863f\") " pod="openstack/cinder-api-0" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.853589 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/44f5eb50-8e4a-4ed3-959c-ba730be9863f-config-data-custom\") pod \"cinder-api-0\" (UID: \"44f5eb50-8e4a-4ed3-959c-ba730be9863f\") " pod="openstack/cinder-api-0" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.854310 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44f5eb50-8e4a-4ed3-959c-ba730be9863f-config-data\") pod \"cinder-api-0\" (UID: \"44f5eb50-8e4a-4ed3-959c-ba730be9863f\") " pod="openstack/cinder-api-0" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.856571 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44f5eb50-8e4a-4ed3-959c-ba730be9863f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"44f5eb50-8e4a-4ed3-959c-ba730be9863f\") " pod="openstack/cinder-api-0" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.860461 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44f5eb50-8e4a-4ed3-959c-ba730be9863f-scripts\") pod \"cinder-api-0\" (UID: \"44f5eb50-8e4a-4ed3-959c-ba730be9863f\") " pod="openstack/cinder-api-0" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.865492 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkvjt\" (UniqueName: \"kubernetes.io/projected/44f5eb50-8e4a-4ed3-959c-ba730be9863f-kube-api-access-nkvjt\") pod \"cinder-api-0\" (UID: \"44f5eb50-8e4a-4ed3-959c-ba730be9863f\") " 
pod="openstack/cinder-api-0" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.877185 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 12 13:22:09 crc kubenswrapper[4580]: I0112 13:22:09.936192 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75dbb546bf-qwbpt" Jan 12 13:22:10 crc kubenswrapper[4580]: I0112 13:22:10.018545 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 12 13:22:11 crc kubenswrapper[4580]: I0112 13:22:11.553656 4580 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-54b765ff94-66rkz" podUID="11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Jan 12 13:22:11 crc kubenswrapper[4580]: I0112 13:22:11.640557 4580 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-8699b457dd-z2fkt" podUID="92d059e4-ff2b-4ecc-ae14-6367d54e720f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Jan 12 13:22:12 crc kubenswrapper[4580]: I0112 13:22:12.073249 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zfsd9" Jan 12 13:22:12 crc kubenswrapper[4580]: I0112 13:22:12.073627 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zfsd9" Jan 12 13:22:12 crc kubenswrapper[4580]: I0112 13:22:12.160926 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 12 13:22:12 crc kubenswrapper[4580]: W0112 13:22:12.878274 4580 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd79305a3_d758_40cb_826b_a6c7aa65dbea.slice/crio-ccf7a1fa026fdd19b2eeb48f25d35354765d5ecd0e07f0ecb15691cd6ecdf7a8 WatchSource:0}: Error finding container ccf7a1fa026fdd19b2eeb48f25d35354765d5ecd0e07f0ecb15691cd6ecdf7a8: Status 404 returned error can't find the container with id ccf7a1fa026fdd19b2eeb48f25d35354765d5ecd0e07f0ecb15691cd6ecdf7a8 Jan 12 13:22:13 crc kubenswrapper[4580]: I0112 13:22:13.119013 4580 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zfsd9" podUID="70ecad68-8604-45ba-84e5-4a0aa1d7464a" containerName="registry-server" probeResult="failure" output=< Jan 12 13:22:13 crc kubenswrapper[4580]: timeout: failed to connect service ":50051" within 1s Jan 12 13:22:13 crc kubenswrapper[4580]: > Jan 12 13:22:13 crc kubenswrapper[4580]: I0112 13:22:13.261464 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66cdd4b5b5-q7ttn" event={"ID":"d79305a3-d758-40cb-826b-a6c7aa65dbea","Type":"ContainerStarted","Data":"ccf7a1fa026fdd19b2eeb48f25d35354765d5ecd0e07f0ecb15691cd6ecdf7a8"} Jan 12 13:22:13 crc kubenswrapper[4580]: W0112 13:22:13.380264 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeed9373c_ecc9_4510_bb6b_9171b70a9088.slice/crio-1ef11b43f7edabde8b3f81f6384ca2061cf8865a08c11be99a6a23ea490b29e1 WatchSource:0}: Error finding container 1ef11b43f7edabde8b3f81f6384ca2061cf8865a08c11be99a6a23ea490b29e1: Status 404 returned error can't find the container with id 1ef11b43f7edabde8b3f81f6384ca2061cf8865a08c11be99a6a23ea490b29e1 Jan 12 13:22:13 crc kubenswrapper[4580]: I0112 13:22:13.744890 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-579c8f556d-z7gld"] Jan 12 13:22:14 crc kubenswrapper[4580]: I0112 13:22:14.282325 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-api-579c8f556d-z7gld" event={"ID":"fcddf9d2-2130-4f76-9318-373ba59d2f70","Type":"ContainerStarted","Data":"4dccd14653cc6bef47059eb9b6fed2cbb00668c8bb4a73c415e3e325b843ff96"} Jan 12 13:22:14 crc kubenswrapper[4580]: I0112 13:22:14.290794 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5bfdbc7dc6-bk5g5" event={"ID":"1b072324-9c35-458a-8d4a-1759b9ed2883","Type":"ContainerStarted","Data":"0c075f2a3b8512ee85fd578695da608fa0d0cb000a92d468138907b96e737ecc"} Jan 12 13:22:14 crc kubenswrapper[4580]: I0112 13:22:14.295625 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-655fc5cf45-jcmxp" event={"ID":"eed9373c-ecc9-4510-bb6b-9171b70a9088","Type":"ContainerStarted","Data":"1ef11b43f7edabde8b3f81f6384ca2061cf8865a08c11be99a6a23ea490b29e1"} Jan 12 13:22:14 crc kubenswrapper[4580]: E0112 13:22:14.506295 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="84c1ab4b-8921-4f4a-88dd-adf6e224d62c" Jan 12 13:22:14 crc kubenswrapper[4580]: I0112 13:22:14.670818 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75dbb546bf-qwbpt"] Jan 12 13:22:14 crc kubenswrapper[4580]: W0112 13:22:14.731614 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b8c0c88_3d61_4cd1_9b9e_df2ca83717f1.slice/crio-2dd23b7b45f1fe5070621c7d21c1c1aacd97ba09856cfe301dba35ce465b9f39 WatchSource:0}: Error finding container 2dd23b7b45f1fe5070621c7d21c1c1aacd97ba09856cfe301dba35ce465b9f39: Status 404 returned error can't find the container with id 2dd23b7b45f1fe5070621c7d21c1c1aacd97ba09856cfe301dba35ce465b9f39 Jan 12 13:22:14 crc kubenswrapper[4580]: I0112 13:22:14.790419 4580 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 12 13:22:14 crc kubenswrapper[4580]: W0112 13:22:14.817714 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b1540ce_a351_4090_bf54_e253994d9020.slice/crio-8ee8e30dad2538326d486a689e38ca38aea5847a0a8277a2500719f8c0b4f4c4 WatchSource:0}: Error finding container 8ee8e30dad2538326d486a689e38ca38aea5847a0a8277a2500719f8c0b4f4c4: Status 404 returned error can't find the container with id 8ee8e30dad2538326d486a689e38ca38aea5847a0a8277a2500719f8c0b4f4c4 Jan 12 13:22:14 crc kubenswrapper[4580]: I0112 13:22:14.949993 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 12 13:22:15 crc kubenswrapper[4580]: I0112 13:22:15.294925 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-75699d8f8b-jqxcw"] Jan 12 13:22:15 crc kubenswrapper[4580]: I0112 13:22:15.297122 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-75699d8f8b-jqxcw" Jan 12 13:22:15 crc kubenswrapper[4580]: I0112 13:22:15.299348 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 12 13:22:15 crc kubenswrapper[4580]: I0112 13:22:15.299364 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 12 13:22:15 crc kubenswrapper[4580]: I0112 13:22:15.305175 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-75699d8f8b-jqxcw"] Jan 12 13:22:15 crc kubenswrapper[4580]: I0112 13:22:15.316994 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"84c1ab4b-8921-4f4a-88dd-adf6e224d62c","Type":"ContainerStarted","Data":"b441a4248482fe1801f8c251177a3528890dd213b5160d4cf3dae59215a6dea7"} Jan 12 13:22:15 crc kubenswrapper[4580]: I0112 13:22:15.317139 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 12 13:22:15 crc kubenswrapper[4580]: I0112 13:22:15.317117 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="84c1ab4b-8921-4f4a-88dd-adf6e224d62c" containerName="ceilometer-notification-agent" containerID="cri-o://4a997235d30915c6cf33338ecfa907be77d15bc4231f867f5cbef7736c3cfdc0" gracePeriod=30 Jan 12 13:22:15 crc kubenswrapper[4580]: I0112 13:22:15.317184 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="84c1ab4b-8921-4f4a-88dd-adf6e224d62c" containerName="proxy-httpd" containerID="cri-o://b441a4248482fe1801f8c251177a3528890dd213b5160d4cf3dae59215a6dea7" gracePeriod=30 Jan 12 13:22:15 crc kubenswrapper[4580]: I0112 13:22:15.317213 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="84c1ab4b-8921-4f4a-88dd-adf6e224d62c" containerName="sg-core" 
containerID="cri-o://b47105d020da2252db78b2cf8eb43b648d5bf62f668867b101610e700c64b9aa" gracePeriod=30 Jan 12 13:22:15 crc kubenswrapper[4580]: I0112 13:22:15.339748 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-579c8f556d-z7gld" event={"ID":"fcddf9d2-2130-4f76-9318-373ba59d2f70","Type":"ContainerStarted","Data":"97c3d4a59048007a403d6af079e18033186e21ff95599f9dac4bded884dc5607"} Jan 12 13:22:15 crc kubenswrapper[4580]: I0112 13:22:15.339794 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-579c8f556d-z7gld" event={"ID":"fcddf9d2-2130-4f76-9318-373ba59d2f70","Type":"ContainerStarted","Data":"4a87d24a66760824c54dc0053a19af6dea792ec98f2b12e041aa64973bff2099"} Jan 12 13:22:15 crc kubenswrapper[4580]: I0112 13:22:15.340731 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-579c8f556d-z7gld" Jan 12 13:22:15 crc kubenswrapper[4580]: I0112 13:22:15.340763 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-579c8f556d-z7gld" Jan 12 13:22:15 crc kubenswrapper[4580]: I0112 13:22:15.345398 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75dbb546bf-qwbpt" event={"ID":"4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1","Type":"ContainerStarted","Data":"2dd23b7b45f1fe5070621c7d21c1c1aacd97ba09856cfe301dba35ce465b9f39"} Jan 12 13:22:15 crc kubenswrapper[4580]: I0112 13:22:15.346626 4580 generic.go:334] "Generic (PLEG): container finished" podID="d79305a3-d758-40cb-826b-a6c7aa65dbea" containerID="1564e52a1ab21329873e5401abea0c18609f755bdad7b31b1321383776eddbe8" exitCode=0 Jan 12 13:22:15 crc kubenswrapper[4580]: I0112 13:22:15.346681 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66cdd4b5b5-q7ttn" event={"ID":"d79305a3-d758-40cb-826b-a6c7aa65dbea","Type":"ContainerDied","Data":"1564e52a1ab21329873e5401abea0c18609f755bdad7b31b1321383776eddbe8"} Jan 12 13:22:15 crc 
kubenswrapper[4580]: I0112 13:22:15.351295 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9b1540ce-a351-4090-bf54-e253994d9020","Type":"ContainerStarted","Data":"8ee8e30dad2538326d486a689e38ca38aea5847a0a8277a2500719f8c0b4f4c4"} Jan 12 13:22:15 crc kubenswrapper[4580]: I0112 13:22:15.360580 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-579c8f556d-z7gld" podStartSLOduration=7.360565681 podStartE2EDuration="7.360565681s" podCreationTimestamp="2026-01-12 13:22:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:22:15.355613807 +0000 UTC m=+934.399832497" watchObservedRunningTime="2026-01-12 13:22:15.360565681 +0000 UTC m=+934.404784371" Jan 12 13:22:15 crc kubenswrapper[4580]: I0112 13:22:15.385883 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/722ce4c4-5517-412c-b3c4-aafc83db85dc-public-tls-certs\") pod \"barbican-api-75699d8f8b-jqxcw\" (UID: \"722ce4c4-5517-412c-b3c4-aafc83db85dc\") " pod="openstack/barbican-api-75699d8f8b-jqxcw" Jan 12 13:22:15 crc kubenswrapper[4580]: I0112 13:22:15.385955 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/722ce4c4-5517-412c-b3c4-aafc83db85dc-config-data-custom\") pod \"barbican-api-75699d8f8b-jqxcw\" (UID: \"722ce4c4-5517-412c-b3c4-aafc83db85dc\") " pod="openstack/barbican-api-75699d8f8b-jqxcw" Jan 12 13:22:15 crc kubenswrapper[4580]: I0112 13:22:15.386002 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/722ce4c4-5517-412c-b3c4-aafc83db85dc-logs\") pod \"barbican-api-75699d8f8b-jqxcw\" (UID: 
\"722ce4c4-5517-412c-b3c4-aafc83db85dc\") " pod="openstack/barbican-api-75699d8f8b-jqxcw" Jan 12 13:22:15 crc kubenswrapper[4580]: I0112 13:22:15.386025 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/722ce4c4-5517-412c-b3c4-aafc83db85dc-internal-tls-certs\") pod \"barbican-api-75699d8f8b-jqxcw\" (UID: \"722ce4c4-5517-412c-b3c4-aafc83db85dc\") " pod="openstack/barbican-api-75699d8f8b-jqxcw" Jan 12 13:22:15 crc kubenswrapper[4580]: I0112 13:22:15.386060 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/722ce4c4-5517-412c-b3c4-aafc83db85dc-config-data\") pod \"barbican-api-75699d8f8b-jqxcw\" (UID: \"722ce4c4-5517-412c-b3c4-aafc83db85dc\") " pod="openstack/barbican-api-75699d8f8b-jqxcw" Jan 12 13:22:15 crc kubenswrapper[4580]: I0112 13:22:15.386453 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5znfn\" (UniqueName: \"kubernetes.io/projected/722ce4c4-5517-412c-b3c4-aafc83db85dc-kube-api-access-5znfn\") pod \"barbican-api-75699d8f8b-jqxcw\" (UID: \"722ce4c4-5517-412c-b3c4-aafc83db85dc\") " pod="openstack/barbican-api-75699d8f8b-jqxcw" Jan 12 13:22:15 crc kubenswrapper[4580]: I0112 13:22:15.386486 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/722ce4c4-5517-412c-b3c4-aafc83db85dc-combined-ca-bundle\") pod \"barbican-api-75699d8f8b-jqxcw\" (UID: \"722ce4c4-5517-412c-b3c4-aafc83db85dc\") " pod="openstack/barbican-api-75699d8f8b-jqxcw" Jan 12 13:22:15 crc kubenswrapper[4580]: I0112 13:22:15.487230 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5znfn\" (UniqueName: 
\"kubernetes.io/projected/722ce4c4-5517-412c-b3c4-aafc83db85dc-kube-api-access-5znfn\") pod \"barbican-api-75699d8f8b-jqxcw\" (UID: \"722ce4c4-5517-412c-b3c4-aafc83db85dc\") " pod="openstack/barbican-api-75699d8f8b-jqxcw" Jan 12 13:22:15 crc kubenswrapper[4580]: I0112 13:22:15.487276 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/722ce4c4-5517-412c-b3c4-aafc83db85dc-combined-ca-bundle\") pod \"barbican-api-75699d8f8b-jqxcw\" (UID: \"722ce4c4-5517-412c-b3c4-aafc83db85dc\") " pod="openstack/barbican-api-75699d8f8b-jqxcw" Jan 12 13:22:15 crc kubenswrapper[4580]: I0112 13:22:15.487299 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/722ce4c4-5517-412c-b3c4-aafc83db85dc-public-tls-certs\") pod \"barbican-api-75699d8f8b-jqxcw\" (UID: \"722ce4c4-5517-412c-b3c4-aafc83db85dc\") " pod="openstack/barbican-api-75699d8f8b-jqxcw" Jan 12 13:22:15 crc kubenswrapper[4580]: I0112 13:22:15.487332 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/722ce4c4-5517-412c-b3c4-aafc83db85dc-config-data-custom\") pod \"barbican-api-75699d8f8b-jqxcw\" (UID: \"722ce4c4-5517-412c-b3c4-aafc83db85dc\") " pod="openstack/barbican-api-75699d8f8b-jqxcw" Jan 12 13:22:15 crc kubenswrapper[4580]: I0112 13:22:15.487350 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/722ce4c4-5517-412c-b3c4-aafc83db85dc-logs\") pod \"barbican-api-75699d8f8b-jqxcw\" (UID: \"722ce4c4-5517-412c-b3c4-aafc83db85dc\") " pod="openstack/barbican-api-75699d8f8b-jqxcw" Jan 12 13:22:15 crc kubenswrapper[4580]: I0112 13:22:15.487369 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/722ce4c4-5517-412c-b3c4-aafc83db85dc-internal-tls-certs\") pod \"barbican-api-75699d8f8b-jqxcw\" (UID: \"722ce4c4-5517-412c-b3c4-aafc83db85dc\") " pod="openstack/barbican-api-75699d8f8b-jqxcw" Jan 12 13:22:15 crc kubenswrapper[4580]: I0112 13:22:15.487397 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/722ce4c4-5517-412c-b3c4-aafc83db85dc-config-data\") pod \"barbican-api-75699d8f8b-jqxcw\" (UID: \"722ce4c4-5517-412c-b3c4-aafc83db85dc\") " pod="openstack/barbican-api-75699d8f8b-jqxcw" Jan 12 13:22:15 crc kubenswrapper[4580]: I0112 13:22:15.488471 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/722ce4c4-5517-412c-b3c4-aafc83db85dc-logs\") pod \"barbican-api-75699d8f8b-jqxcw\" (UID: \"722ce4c4-5517-412c-b3c4-aafc83db85dc\") " pod="openstack/barbican-api-75699d8f8b-jqxcw" Jan 12 13:22:15 crc kubenswrapper[4580]: I0112 13:22:15.493464 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/722ce4c4-5517-412c-b3c4-aafc83db85dc-public-tls-certs\") pod \"barbican-api-75699d8f8b-jqxcw\" (UID: \"722ce4c4-5517-412c-b3c4-aafc83db85dc\") " pod="openstack/barbican-api-75699d8f8b-jqxcw" Jan 12 13:22:15 crc kubenswrapper[4580]: I0112 13:22:15.504062 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/722ce4c4-5517-412c-b3c4-aafc83db85dc-config-data-custom\") pod \"barbican-api-75699d8f8b-jqxcw\" (UID: \"722ce4c4-5517-412c-b3c4-aafc83db85dc\") " pod="openstack/barbican-api-75699d8f8b-jqxcw" Jan 12 13:22:15 crc kubenswrapper[4580]: I0112 13:22:15.504498 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/722ce4c4-5517-412c-b3c4-aafc83db85dc-config-data\") pod 
\"barbican-api-75699d8f8b-jqxcw\" (UID: \"722ce4c4-5517-412c-b3c4-aafc83db85dc\") " pod="openstack/barbican-api-75699d8f8b-jqxcw" Jan 12 13:22:15 crc kubenswrapper[4580]: I0112 13:22:15.504668 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/722ce4c4-5517-412c-b3c4-aafc83db85dc-combined-ca-bundle\") pod \"barbican-api-75699d8f8b-jqxcw\" (UID: \"722ce4c4-5517-412c-b3c4-aafc83db85dc\") " pod="openstack/barbican-api-75699d8f8b-jqxcw" Jan 12 13:22:15 crc kubenswrapper[4580]: I0112 13:22:15.505864 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/722ce4c4-5517-412c-b3c4-aafc83db85dc-internal-tls-certs\") pod \"barbican-api-75699d8f8b-jqxcw\" (UID: \"722ce4c4-5517-412c-b3c4-aafc83db85dc\") " pod="openstack/barbican-api-75699d8f8b-jqxcw" Jan 12 13:22:15 crc kubenswrapper[4580]: I0112 13:22:15.506177 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5znfn\" (UniqueName: \"kubernetes.io/projected/722ce4c4-5517-412c-b3c4-aafc83db85dc-kube-api-access-5znfn\") pod \"barbican-api-75699d8f8b-jqxcw\" (UID: \"722ce4c4-5517-412c-b3c4-aafc83db85dc\") " pod="openstack/barbican-api-75699d8f8b-jqxcw" Jan 12 13:22:15 crc kubenswrapper[4580]: I0112 13:22:15.622955 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-75699d8f8b-jqxcw" Jan 12 13:22:15 crc kubenswrapper[4580]: I0112 13:22:15.727021 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66cdd4b5b5-q7ttn" Jan 12 13:22:15 crc kubenswrapper[4580]: I0112 13:22:15.792312 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d79305a3-d758-40cb-826b-a6c7aa65dbea-ovsdbserver-sb\") pod \"d79305a3-d758-40cb-826b-a6c7aa65dbea\" (UID: \"d79305a3-d758-40cb-826b-a6c7aa65dbea\") " Jan 12 13:22:15 crc kubenswrapper[4580]: I0112 13:22:15.792400 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d79305a3-d758-40cb-826b-a6c7aa65dbea-dns-swift-storage-0\") pod \"d79305a3-d758-40cb-826b-a6c7aa65dbea\" (UID: \"d79305a3-d758-40cb-826b-a6c7aa65dbea\") " Jan 12 13:22:15 crc kubenswrapper[4580]: I0112 13:22:15.792434 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xxhr\" (UniqueName: \"kubernetes.io/projected/d79305a3-d758-40cb-826b-a6c7aa65dbea-kube-api-access-8xxhr\") pod \"d79305a3-d758-40cb-826b-a6c7aa65dbea\" (UID: \"d79305a3-d758-40cb-826b-a6c7aa65dbea\") " Jan 12 13:22:15 crc kubenswrapper[4580]: I0112 13:22:15.792463 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d79305a3-d758-40cb-826b-a6c7aa65dbea-config\") pod \"d79305a3-d758-40cb-826b-a6c7aa65dbea\" (UID: \"d79305a3-d758-40cb-826b-a6c7aa65dbea\") " Jan 12 13:22:15 crc kubenswrapper[4580]: I0112 13:22:15.792496 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d79305a3-d758-40cb-826b-a6c7aa65dbea-ovsdbserver-nb\") pod \"d79305a3-d758-40cb-826b-a6c7aa65dbea\" (UID: \"d79305a3-d758-40cb-826b-a6c7aa65dbea\") " Jan 12 13:22:15 crc kubenswrapper[4580]: I0112 13:22:15.792651 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d79305a3-d758-40cb-826b-a6c7aa65dbea-dns-svc\") pod \"d79305a3-d758-40cb-826b-a6c7aa65dbea\" (UID: \"d79305a3-d758-40cb-826b-a6c7aa65dbea\") " Jan 12 13:22:15 crc kubenswrapper[4580]: I0112 13:22:15.800821 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d79305a3-d758-40cb-826b-a6c7aa65dbea-kube-api-access-8xxhr" (OuterVolumeSpecName: "kube-api-access-8xxhr") pod "d79305a3-d758-40cb-826b-a6c7aa65dbea" (UID: "d79305a3-d758-40cb-826b-a6c7aa65dbea"). InnerVolumeSpecName "kube-api-access-8xxhr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:22:15 crc kubenswrapper[4580]: I0112 13:22:15.821522 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d79305a3-d758-40cb-826b-a6c7aa65dbea-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d79305a3-d758-40cb-826b-a6c7aa65dbea" (UID: "d79305a3-d758-40cb-826b-a6c7aa65dbea"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:22:15 crc kubenswrapper[4580]: I0112 13:22:15.831214 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d79305a3-d758-40cb-826b-a6c7aa65dbea-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d79305a3-d758-40cb-826b-a6c7aa65dbea" (UID: "d79305a3-d758-40cb-826b-a6c7aa65dbea"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:22:15 crc kubenswrapper[4580]: I0112 13:22:15.844083 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d79305a3-d758-40cb-826b-a6c7aa65dbea-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d79305a3-d758-40cb-826b-a6c7aa65dbea" (UID: "d79305a3-d758-40cb-826b-a6c7aa65dbea"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:22:15 crc kubenswrapper[4580]: I0112 13:22:15.845609 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d79305a3-d758-40cb-826b-a6c7aa65dbea-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d79305a3-d758-40cb-826b-a6c7aa65dbea" (UID: "d79305a3-d758-40cb-826b-a6c7aa65dbea"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:22:15 crc kubenswrapper[4580]: I0112 13:22:15.855696 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d79305a3-d758-40cb-826b-a6c7aa65dbea-config" (OuterVolumeSpecName: "config") pod "d79305a3-d758-40cb-826b-a6c7aa65dbea" (UID: "d79305a3-d758-40cb-826b-a6c7aa65dbea"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:22:15 crc kubenswrapper[4580]: I0112 13:22:15.894645 4580 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d79305a3-d758-40cb-826b-a6c7aa65dbea-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:15 crc kubenswrapper[4580]: I0112 13:22:15.894675 4580 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d79305a3-d758-40cb-826b-a6c7aa65dbea-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:15 crc kubenswrapper[4580]: I0112 13:22:15.894686 4580 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d79305a3-d758-40cb-826b-a6c7aa65dbea-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:15 crc kubenswrapper[4580]: I0112 13:22:15.894695 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xxhr\" (UniqueName: \"kubernetes.io/projected/d79305a3-d758-40cb-826b-a6c7aa65dbea-kube-api-access-8xxhr\") on node \"crc\" 
DevicePath \"\"" Jan 12 13:22:15 crc kubenswrapper[4580]: I0112 13:22:15.894704 4580 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d79305a3-d758-40cb-826b-a6c7aa65dbea-config\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:15 crc kubenswrapper[4580]: I0112 13:22:15.894714 4580 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d79305a3-d758-40cb-826b-a6c7aa65dbea-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:16 crc kubenswrapper[4580]: I0112 13:22:16.164252 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-75699d8f8b-jqxcw"] Jan 12 13:22:16 crc kubenswrapper[4580]: W0112 13:22:16.179558 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod722ce4c4_5517_412c_b3c4_aafc83db85dc.slice/crio-88358b480d7ef23ade48d035982c6fc99220d2f7e08dc40bc51ed5f11b45a862 WatchSource:0}: Error finding container 88358b480d7ef23ade48d035982c6fc99220d2f7e08dc40bc51ed5f11b45a862: Status 404 returned error can't find the container with id 88358b480d7ef23ade48d035982c6fc99220d2f7e08dc40bc51ed5f11b45a862 Jan 12 13:22:16 crc kubenswrapper[4580]: I0112 13:22:16.363245 4580 generic.go:334] "Generic (PLEG): container finished" podID="84c1ab4b-8921-4f4a-88dd-adf6e224d62c" containerID="b441a4248482fe1801f8c251177a3528890dd213b5160d4cf3dae59215a6dea7" exitCode=0 Jan 12 13:22:16 crc kubenswrapper[4580]: I0112 13:22:16.363280 4580 generic.go:334] "Generic (PLEG): container finished" podID="84c1ab4b-8921-4f4a-88dd-adf6e224d62c" containerID="b47105d020da2252db78b2cf8eb43b648d5bf62f668867b101610e700c64b9aa" exitCode=2 Jan 12 13:22:16 crc kubenswrapper[4580]: I0112 13:22:16.363311 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"84c1ab4b-8921-4f4a-88dd-adf6e224d62c","Type":"ContainerDied","Data":"b441a4248482fe1801f8c251177a3528890dd213b5160d4cf3dae59215a6dea7"} Jan 12 13:22:16 crc kubenswrapper[4580]: I0112 13:22:16.363353 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"84c1ab4b-8921-4f4a-88dd-adf6e224d62c","Type":"ContainerDied","Data":"b47105d020da2252db78b2cf8eb43b648d5bf62f668867b101610e700c64b9aa"} Jan 12 13:22:16 crc kubenswrapper[4580]: I0112 13:22:16.364605 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"44f5eb50-8e4a-4ed3-959c-ba730be9863f","Type":"ContainerStarted","Data":"78dac73694649c7a22199d93c9cd253b8b626b142048e15730babb3f7b8e43ac"} Jan 12 13:22:16 crc kubenswrapper[4580]: I0112 13:22:16.365768 4580 generic.go:334] "Generic (PLEG): container finished" podID="4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1" containerID="c472ba78858aa0446fb625f553fbcb36f6a8d4c73c32a7f7ba4b966880e70d04" exitCode=0 Jan 12 13:22:16 crc kubenswrapper[4580]: I0112 13:22:16.365813 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75dbb546bf-qwbpt" event={"ID":"4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1","Type":"ContainerDied","Data":"c472ba78858aa0446fb625f553fbcb36f6a8d4c73c32a7f7ba4b966880e70d04"} Jan 12 13:22:16 crc kubenswrapper[4580]: I0112 13:22:16.367660 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66cdd4b5b5-q7ttn" event={"ID":"d79305a3-d758-40cb-826b-a6c7aa65dbea","Type":"ContainerDied","Data":"ccf7a1fa026fdd19b2eeb48f25d35354765d5ecd0e07f0ecb15691cd6ecdf7a8"} Jan 12 13:22:16 crc kubenswrapper[4580]: I0112 13:22:16.367687 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66cdd4b5b5-q7ttn" Jan 12 13:22:16 crc kubenswrapper[4580]: I0112 13:22:16.367709 4580 scope.go:117] "RemoveContainer" containerID="1564e52a1ab21329873e5401abea0c18609f755bdad7b31b1321383776eddbe8" Jan 12 13:22:16 crc kubenswrapper[4580]: I0112 13:22:16.372276 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5bfdbc7dc6-bk5g5" event={"ID":"1b072324-9c35-458a-8d4a-1759b9ed2883","Type":"ContainerStarted","Data":"dee1b197c7835d4ff92ab52aef29f45e48f3c0142f73393afbe64bf57f479144"} Jan 12 13:22:16 crc kubenswrapper[4580]: I0112 13:22:16.373263 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-75699d8f8b-jqxcw" event={"ID":"722ce4c4-5517-412c-b3c4-aafc83db85dc","Type":"ContainerStarted","Data":"88358b480d7ef23ade48d035982c6fc99220d2f7e08dc40bc51ed5f11b45a862"} Jan 12 13:22:16 crc kubenswrapper[4580]: I0112 13:22:16.374655 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-655fc5cf45-jcmxp" event={"ID":"eed9373c-ecc9-4510-bb6b-9171b70a9088","Type":"ContainerStarted","Data":"82c7777ce89fc2213d97aa1261da95461f0fa6c9c2de1a957c32760d309083ff"} Jan 12 13:22:16 crc kubenswrapper[4580]: I0112 13:22:16.439135 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66cdd4b5b5-q7ttn"] Jan 12 13:22:16 crc kubenswrapper[4580]: I0112 13:22:16.444331 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-66cdd4b5b5-q7ttn"] Jan 12 13:22:17 crc kubenswrapper[4580]: I0112 13:22:17.293616 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d79305a3-d758-40cb-826b-a6c7aa65dbea" path="/var/lib/kubelet/pods/d79305a3-d758-40cb-826b-a6c7aa65dbea/volumes" Jan 12 13:22:17 crc kubenswrapper[4580]: I0112 13:22:17.422090 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"9b1540ce-a351-4090-bf54-e253994d9020","Type":"ContainerStarted","Data":"6780bf1d321e39a915fa17628f8a453551d58330c6fcb0842453becea9dd981e"} Jan 12 13:22:17 crc kubenswrapper[4580]: I0112 13:22:17.424586 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"44f5eb50-8e4a-4ed3-959c-ba730be9863f","Type":"ContainerStarted","Data":"f7edd79807bb939287c182e2aa39fa138e6ee888c3422182fecd101eeba4dd94"} Jan 12 13:22:17 crc kubenswrapper[4580]: I0112 13:22:17.433672 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-5bfdbc7dc6-bk5g5" podStartSLOduration=7.077143102 podStartE2EDuration="9.433654351s" podCreationTimestamp="2026-01-12 13:22:08 +0000 UTC" firstStartedPulling="2026-01-12 13:22:13.392938277 +0000 UTC m=+932.437156967" lastFinishedPulling="2026-01-12 13:22:15.749449526 +0000 UTC m=+934.793668216" observedRunningTime="2026-01-12 13:22:17.431957198 +0000 UTC m=+936.476175889" watchObservedRunningTime="2026-01-12 13:22:17.433654351 +0000 UTC m=+936.477873041" Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.259462 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-596bbb8b6-5jfvl" Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.269498 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.435517 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-655fc5cf45-jcmxp" event={"ID":"eed9373c-ecc9-4510-bb6b-9171b70a9088","Type":"ContainerStarted","Data":"edd76b239ad785486f5a944589022541b2a4c0b991b1e9097c6bf583bbe7e614"} Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.439092 4580 generic.go:334] "Generic (PLEG): container finished" podID="84c1ab4b-8921-4f4a-88dd-adf6e224d62c" containerID="4a997235d30915c6cf33338ecfa907be77d15bc4231f867f5cbef7736c3cfdc0" exitCode=0 Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.439230 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.439535 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"84c1ab4b-8921-4f4a-88dd-adf6e224d62c","Type":"ContainerDied","Data":"4a997235d30915c6cf33338ecfa907be77d15bc4231f867f5cbef7736c3cfdc0"} Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.439563 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"84c1ab4b-8921-4f4a-88dd-adf6e224d62c","Type":"ContainerDied","Data":"ea4fd92280dbf60298bec5450dfed75dd984bde99b64d228e1e725d11a02e6fa"} Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.439603 4580 scope.go:117] "RemoveContainer" containerID="b441a4248482fe1801f8c251177a3528890dd213b5160d4cf3dae59215a6dea7" Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.449164 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"44f5eb50-8e4a-4ed3-959c-ba730be9863f","Type":"ContainerStarted","Data":"7cd55da63c7bf98957145b7bc3dc8045539570cde204c28fa6931a5f6f3f298e"} Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.449579 4580 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openstack/cinder-api-0" podUID="44f5eb50-8e4a-4ed3-959c-ba730be9863f" containerName="cinder-api-log" containerID="cri-o://f7edd79807bb939287c182e2aa39fa138e6ee888c3422182fecd101eeba4dd94" gracePeriod=30 Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.450071 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.450157 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="44f5eb50-8e4a-4ed3-959c-ba730be9863f" containerName="cinder-api" containerID="cri-o://7cd55da63c7bf98957145b7bc3dc8045539570cde204c28fa6931a5f6f3f298e" gracePeriod=30 Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.461163 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75dbb546bf-qwbpt" event={"ID":"4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1","Type":"ContainerStarted","Data":"2924035f06f9bee19fa248ac0f0423e712986ab23693fe3eef84880b0b431603"} Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.462061 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-75dbb546bf-qwbpt" Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.462414 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/84c1ab4b-8921-4f4a-88dd-adf6e224d62c-log-httpd\") pod \"84c1ab4b-8921-4f4a-88dd-adf6e224d62c\" (UID: \"84c1ab4b-8921-4f4a-88dd-adf6e224d62c\") " Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.462448 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84c1ab4b-8921-4f4a-88dd-adf6e224d62c-scripts\") pod \"84c1ab4b-8921-4f4a-88dd-adf6e224d62c\" (UID: \"84c1ab4b-8921-4f4a-88dd-adf6e224d62c\") " Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.462514 4580 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/84c1ab4b-8921-4f4a-88dd-adf6e224d62c-sg-core-conf-yaml\") pod \"84c1ab4b-8921-4f4a-88dd-adf6e224d62c\" (UID: \"84c1ab4b-8921-4f4a-88dd-adf6e224d62c\") " Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.462603 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmcxn\" (UniqueName: \"kubernetes.io/projected/84c1ab4b-8921-4f4a-88dd-adf6e224d62c-kube-api-access-jmcxn\") pod \"84c1ab4b-8921-4f4a-88dd-adf6e224d62c\" (UID: \"84c1ab4b-8921-4f4a-88dd-adf6e224d62c\") " Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.462703 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84c1ab4b-8921-4f4a-88dd-adf6e224d62c-combined-ca-bundle\") pod \"84c1ab4b-8921-4f4a-88dd-adf6e224d62c\" (UID: \"84c1ab4b-8921-4f4a-88dd-adf6e224d62c\") " Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.462739 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/84c1ab4b-8921-4f4a-88dd-adf6e224d62c-run-httpd\") pod \"84c1ab4b-8921-4f4a-88dd-adf6e224d62c\" (UID: \"84c1ab4b-8921-4f4a-88dd-adf6e224d62c\") " Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.462764 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84c1ab4b-8921-4f4a-88dd-adf6e224d62c-config-data\") pod \"84c1ab4b-8921-4f4a-88dd-adf6e224d62c\" (UID: \"84c1ab4b-8921-4f4a-88dd-adf6e224d62c\") " Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.464795 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84c1ab4b-8921-4f4a-88dd-adf6e224d62c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "84c1ab4b-8921-4f4a-88dd-adf6e224d62c" (UID: 
"84c1ab4b-8921-4f4a-88dd-adf6e224d62c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.465290 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84c1ab4b-8921-4f4a-88dd-adf6e224d62c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "84c1ab4b-8921-4f4a-88dd-adf6e224d62c" (UID: "84c1ab4b-8921-4f4a-88dd-adf6e224d62c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.471357 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5bfdbc7dc6-bk5g5" event={"ID":"1b072324-9c35-458a-8d4a-1759b9ed2883","Type":"ContainerStarted","Data":"ed04b22a29186e760a2d85a657f94003c8894bdb5ac5f2ff2437f2e47cdd607d"} Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.480228 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-655fc5cf45-jcmxp" podStartSLOduration=8.120672083 podStartE2EDuration="10.480215235s" podCreationTimestamp="2026-01-12 13:22:08 +0000 UTC" firstStartedPulling="2026-01-12 13:22:13.389688955 +0000 UTC m=+932.433907645" lastFinishedPulling="2026-01-12 13:22:15.749232108 +0000 UTC m=+934.793450797" observedRunningTime="2026-01-12 13:22:18.450806171 +0000 UTC m=+937.495024861" watchObservedRunningTime="2026-01-12 13:22:18.480215235 +0000 UTC m=+937.524433925" Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.480297 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84c1ab4b-8921-4f4a-88dd-adf6e224d62c-kube-api-access-jmcxn" (OuterVolumeSpecName: "kube-api-access-jmcxn") pod "84c1ab4b-8921-4f4a-88dd-adf6e224d62c" (UID: "84c1ab4b-8921-4f4a-88dd-adf6e224d62c"). InnerVolumeSpecName "kube-api-access-jmcxn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.481450 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84c1ab4b-8921-4f4a-88dd-adf6e224d62c-scripts" (OuterVolumeSpecName: "scripts") pod "84c1ab4b-8921-4f4a-88dd-adf6e224d62c" (UID: "84c1ab4b-8921-4f4a-88dd-adf6e224d62c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.482641 4580 scope.go:117] "RemoveContainer" containerID="b47105d020da2252db78b2cf8eb43b648d5bf62f668867b101610e700c64b9aa" Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.484152 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9b1540ce-a351-4090-bf54-e253994d9020","Type":"ContainerStarted","Data":"a0212f020ebb7d2d0237a9098b998381b52b962bca231875713d695248e4c7cd"} Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.494737 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=9.494714535 podStartE2EDuration="9.494714535s" podCreationTimestamp="2026-01-12 13:22:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:22:18.471473473 +0000 UTC m=+937.515692163" watchObservedRunningTime="2026-01-12 13:22:18.494714535 +0000 UTC m=+937.538933225" Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.505466 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-75dbb546bf-qwbpt" podStartSLOduration=9.505451971 podStartE2EDuration="9.505451971s" podCreationTimestamp="2026-01-12 13:22:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:22:18.4972687 +0000 UTC m=+937.541487389" 
watchObservedRunningTime="2026-01-12 13:22:18.505451971 +0000 UTC m=+937.549670660"
Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.511596 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-75699d8f8b-jqxcw" event={"ID":"722ce4c4-5517-412c-b3c4-aafc83db85dc","Type":"ContainerStarted","Data":"8e3cd4729babb2aefce0e064e3b26ee4ef48505984b3cfb1e9bcfc379f128e59"}
Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.511631 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-75699d8f8b-jqxcw" event={"ID":"722ce4c4-5517-412c-b3c4-aafc83db85dc","Type":"ContainerStarted","Data":"0fd6d69149decf878d95ad23e4c0978e7ec81155b4be8062606ebfaa82eae0ad"}
Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.514353 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-75699d8f8b-jqxcw"
Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.514427 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-75699d8f8b-jqxcw"
Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.520941 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84c1ab4b-8921-4f4a-88dd-adf6e224d62c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "84c1ab4b-8921-4f4a-88dd-adf6e224d62c" (UID: "84c1ab4b-8921-4f4a-88dd-adf6e224d62c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.522771 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=8.385341983 podStartE2EDuration="9.522753592s" podCreationTimestamp="2026-01-12 13:22:09 +0000 UTC" firstStartedPulling="2026-01-12 13:22:14.827760564 +0000 UTC m=+933.871979254" lastFinishedPulling="2026-01-12 13:22:15.965172174 +0000 UTC m=+935.009390863" observedRunningTime="2026-01-12 13:22:18.522493473 +0000 UTC m=+937.566712183" watchObservedRunningTime="2026-01-12 13:22:18.522753592 +0000 UTC m=+937.566972281"
Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.554565 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84c1ab4b-8921-4f4a-88dd-adf6e224d62c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "84c1ab4b-8921-4f4a-88dd-adf6e224d62c" (UID: "84c1ab4b-8921-4f4a-88dd-adf6e224d62c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.566170 4580 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84c1ab4b-8921-4f4a-88dd-adf6e224d62c-scripts\") on node \"crc\" DevicePath \"\""
Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.566199 4580 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/84c1ab4b-8921-4f4a-88dd-adf6e224d62c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.566212 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmcxn\" (UniqueName: \"kubernetes.io/projected/84c1ab4b-8921-4f4a-88dd-adf6e224d62c-kube-api-access-jmcxn\") on node \"crc\" DevicePath \"\""
Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.566223 4580 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84c1ab4b-8921-4f4a-88dd-adf6e224d62c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.566235 4580 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/84c1ab4b-8921-4f4a-88dd-adf6e224d62c-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.566243 4580 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/84c1ab4b-8921-4f4a-88dd-adf6e224d62c-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.599652 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84c1ab4b-8921-4f4a-88dd-adf6e224d62c-config-data" (OuterVolumeSpecName: "config-data") pod "84c1ab4b-8921-4f4a-88dd-adf6e224d62c" (UID: "84c1ab4b-8921-4f4a-88dd-adf6e224d62c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.669581 4580 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84c1ab4b-8921-4f4a-88dd-adf6e224d62c-config-data\") on node \"crc\" DevicePath \"\""
Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.688361 4580 scope.go:117] "RemoveContainer" containerID="4a997235d30915c6cf33338ecfa907be77d15bc4231f867f5cbef7736c3cfdc0"
Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.708946 4580 scope.go:117] "RemoveContainer" containerID="b441a4248482fe1801f8c251177a3528890dd213b5160d4cf3dae59215a6dea7"
Jan 12 13:22:18 crc kubenswrapper[4580]: E0112 13:22:18.709467 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b441a4248482fe1801f8c251177a3528890dd213b5160d4cf3dae59215a6dea7\": container with ID starting with b441a4248482fe1801f8c251177a3528890dd213b5160d4cf3dae59215a6dea7 not found: ID does not exist" containerID="b441a4248482fe1801f8c251177a3528890dd213b5160d4cf3dae59215a6dea7"
Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.709505 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b441a4248482fe1801f8c251177a3528890dd213b5160d4cf3dae59215a6dea7"} err="failed to get container status \"b441a4248482fe1801f8c251177a3528890dd213b5160d4cf3dae59215a6dea7\": rpc error: code = NotFound desc = could not find container \"b441a4248482fe1801f8c251177a3528890dd213b5160d4cf3dae59215a6dea7\": container with ID starting with b441a4248482fe1801f8c251177a3528890dd213b5160d4cf3dae59215a6dea7 not found: ID does not exist"
Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.709534 4580 scope.go:117] "RemoveContainer" containerID="b47105d020da2252db78b2cf8eb43b648d5bf62f668867b101610e700c64b9aa"
Jan 12 13:22:18 crc kubenswrapper[4580]: E0112 13:22:18.709856 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b47105d020da2252db78b2cf8eb43b648d5bf62f668867b101610e700c64b9aa\": container with ID starting with b47105d020da2252db78b2cf8eb43b648d5bf62f668867b101610e700c64b9aa not found: ID does not exist" containerID="b47105d020da2252db78b2cf8eb43b648d5bf62f668867b101610e700c64b9aa"
Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.709883 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b47105d020da2252db78b2cf8eb43b648d5bf62f668867b101610e700c64b9aa"} err="failed to get container status \"b47105d020da2252db78b2cf8eb43b648d5bf62f668867b101610e700c64b9aa\": rpc error: code = NotFound desc = could not find container \"b47105d020da2252db78b2cf8eb43b648d5bf62f668867b101610e700c64b9aa\": container with ID starting with b47105d020da2252db78b2cf8eb43b648d5bf62f668867b101610e700c64b9aa not found: ID does not exist"
Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.709899 4580 scope.go:117] "RemoveContainer" containerID="4a997235d30915c6cf33338ecfa907be77d15bc4231f867f5cbef7736c3cfdc0"
Jan 12 13:22:18 crc kubenswrapper[4580]: E0112 13:22:18.710194 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a997235d30915c6cf33338ecfa907be77d15bc4231f867f5cbef7736c3cfdc0\": container with ID starting with 4a997235d30915c6cf33338ecfa907be77d15bc4231f867f5cbef7736c3cfdc0 not found: ID does not exist" containerID="4a997235d30915c6cf33338ecfa907be77d15bc4231f867f5cbef7736c3cfdc0"
Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.710219 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a997235d30915c6cf33338ecfa907be77d15bc4231f867f5cbef7736c3cfdc0"} err="failed to get container status \"4a997235d30915c6cf33338ecfa907be77d15bc4231f867f5cbef7736c3cfdc0\": rpc error: code = NotFound desc = could not find container \"4a997235d30915c6cf33338ecfa907be77d15bc4231f867f5cbef7736c3cfdc0\": container with ID starting with 4a997235d30915c6cf33338ecfa907be77d15bc4231f867f5cbef7736c3cfdc0 not found: ID does not exist"
Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.811776 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-75699d8f8b-jqxcw" podStartSLOduration=3.811756678 podStartE2EDuration="3.811756678s" podCreationTimestamp="2026-01-12 13:22:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:22:18.564312838 +0000 UTC m=+937.608531548" watchObservedRunningTime="2026-01-12 13:22:18.811756678 +0000 UTC m=+937.855975368"
Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.824026 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.836621 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.842024 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 12 13:22:18 crc kubenswrapper[4580]: E0112 13:22:18.842412 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d79305a3-d758-40cb-826b-a6c7aa65dbea" containerName="init"
Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.842425 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="d79305a3-d758-40cb-826b-a6c7aa65dbea" containerName="init"
Jan 12 13:22:18 crc kubenswrapper[4580]: E0112 13:22:18.842445 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84c1ab4b-8921-4f4a-88dd-adf6e224d62c" containerName="ceilometer-notification-agent"
Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.842450 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="84c1ab4b-8921-4f4a-88dd-adf6e224d62c" containerName="ceilometer-notification-agent"
Jan 12 13:22:18 crc kubenswrapper[4580]: E0112 13:22:18.842462 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84c1ab4b-8921-4f4a-88dd-adf6e224d62c" containerName="proxy-httpd"
Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.842467 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="84c1ab4b-8921-4f4a-88dd-adf6e224d62c" containerName="proxy-httpd"
Jan 12 13:22:18 crc kubenswrapper[4580]: E0112 13:22:18.842483 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84c1ab4b-8921-4f4a-88dd-adf6e224d62c" containerName="sg-core"
Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.842488 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="84c1ab4b-8921-4f4a-88dd-adf6e224d62c" containerName="sg-core"
Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.842633 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="84c1ab4b-8921-4f4a-88dd-adf6e224d62c" containerName="sg-core"
Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.842643 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="84c1ab4b-8921-4f4a-88dd-adf6e224d62c" containerName="proxy-httpd"
Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.842653 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="84c1ab4b-8921-4f4a-88dd-adf6e224d62c" containerName="ceilometer-notification-agent"
Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.842665 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="d79305a3-d758-40cb-826b-a6c7aa65dbea" containerName="init"
Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.844313 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.864305 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.864985 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.883494 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.885324 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/063b2f8d-9ef8-4977-9071-ebb135ebc819-scripts\") pod \"ceilometer-0\" (UID: \"063b2f8d-9ef8-4977-9071-ebb135ebc819\") " pod="openstack/ceilometer-0"
Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.885351 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/063b2f8d-9ef8-4977-9071-ebb135ebc819-config-data\") pod \"ceilometer-0\" (UID: \"063b2f8d-9ef8-4977-9071-ebb135ebc819\") " pod="openstack/ceilometer-0"
Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.885375 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/063b2f8d-9ef8-4977-9071-ebb135ebc819-run-httpd\") pod \"ceilometer-0\" (UID: \"063b2f8d-9ef8-4977-9071-ebb135ebc819\") " pod="openstack/ceilometer-0"
Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.885390 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/063b2f8d-9ef8-4977-9071-ebb135ebc819-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"063b2f8d-9ef8-4977-9071-ebb135ebc819\") " pod="openstack/ceilometer-0"
Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.885416 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5rn2\" (UniqueName: \"kubernetes.io/projected/063b2f8d-9ef8-4977-9071-ebb135ebc819-kube-api-access-q5rn2\") pod \"ceilometer-0\" (UID: \"063b2f8d-9ef8-4977-9071-ebb135ebc819\") " pod="openstack/ceilometer-0"
Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.885510 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/063b2f8d-9ef8-4977-9071-ebb135ebc819-log-httpd\") pod \"ceilometer-0\" (UID: \"063b2f8d-9ef8-4977-9071-ebb135ebc819\") " pod="openstack/ceilometer-0"
Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.885528 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/063b2f8d-9ef8-4977-9071-ebb135ebc819-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"063b2f8d-9ef8-4977-9071-ebb135ebc819\") " pod="openstack/ceilometer-0"
Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.963432 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.987865 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44f5eb50-8e4a-4ed3-959c-ba730be9863f-logs\") pod \"44f5eb50-8e4a-4ed3-959c-ba730be9863f\" (UID: \"44f5eb50-8e4a-4ed3-959c-ba730be9863f\") "
Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.988020 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44f5eb50-8e4a-4ed3-959c-ba730be9863f-scripts\") pod \"44f5eb50-8e4a-4ed3-959c-ba730be9863f\" (UID: \"44f5eb50-8e4a-4ed3-959c-ba730be9863f\") "
Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.988066 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/44f5eb50-8e4a-4ed3-959c-ba730be9863f-config-data-custom\") pod \"44f5eb50-8e4a-4ed3-959c-ba730be9863f\" (UID: \"44f5eb50-8e4a-4ed3-959c-ba730be9863f\") "
Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.988125 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44f5eb50-8e4a-4ed3-959c-ba730be9863f-combined-ca-bundle\") pod \"44f5eb50-8e4a-4ed3-959c-ba730be9863f\" (UID: \"44f5eb50-8e4a-4ed3-959c-ba730be9863f\") "
Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.988155 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nkvjt\" (UniqueName: \"kubernetes.io/projected/44f5eb50-8e4a-4ed3-959c-ba730be9863f-kube-api-access-nkvjt\") pod \"44f5eb50-8e4a-4ed3-959c-ba730be9863f\" (UID: \"44f5eb50-8e4a-4ed3-959c-ba730be9863f\") "
Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.988185 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44f5eb50-8e4a-4ed3-959c-ba730be9863f-config-data\") pod \"44f5eb50-8e4a-4ed3-959c-ba730be9863f\" (UID: \"44f5eb50-8e4a-4ed3-959c-ba730be9863f\") "
Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.988220 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/44f5eb50-8e4a-4ed3-959c-ba730be9863f-etc-machine-id\") pod \"44f5eb50-8e4a-4ed3-959c-ba730be9863f\" (UID: \"44f5eb50-8e4a-4ed3-959c-ba730be9863f\") "
Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.988333 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/063b2f8d-9ef8-4977-9071-ebb135ebc819-log-httpd\") pod \"ceilometer-0\" (UID: \"063b2f8d-9ef8-4977-9071-ebb135ebc819\") " pod="openstack/ceilometer-0"
Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.988367 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/063b2f8d-9ef8-4977-9071-ebb135ebc819-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"063b2f8d-9ef8-4977-9071-ebb135ebc819\") " pod="openstack/ceilometer-0"
Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.988385 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/063b2f8d-9ef8-4977-9071-ebb135ebc819-scripts\") pod \"ceilometer-0\" (UID: \"063b2f8d-9ef8-4977-9071-ebb135ebc819\") " pod="openstack/ceilometer-0"
Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.988411 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/063b2f8d-9ef8-4977-9071-ebb135ebc819-config-data\") pod \"ceilometer-0\" (UID: \"063b2f8d-9ef8-4977-9071-ebb135ebc819\") " pod="openstack/ceilometer-0"
Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.988434 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/063b2f8d-9ef8-4977-9071-ebb135ebc819-run-httpd\") pod \"ceilometer-0\" (UID: \"063b2f8d-9ef8-4977-9071-ebb135ebc819\") " pod="openstack/ceilometer-0"
Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.988451 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/063b2f8d-9ef8-4977-9071-ebb135ebc819-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"063b2f8d-9ef8-4977-9071-ebb135ebc819\") " pod="openstack/ceilometer-0"
Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.988484 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5rn2\" (UniqueName: \"kubernetes.io/projected/063b2f8d-9ef8-4977-9071-ebb135ebc819-kube-api-access-q5rn2\") pod \"ceilometer-0\" (UID: \"063b2f8d-9ef8-4977-9071-ebb135ebc819\") " pod="openstack/ceilometer-0"
Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.988555 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44f5eb50-8e4a-4ed3-959c-ba730be9863f-logs" (OuterVolumeSpecName: "logs") pod "44f5eb50-8e4a-4ed3-959c-ba730be9863f" (UID: "44f5eb50-8e4a-4ed3-959c-ba730be9863f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.988882 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44f5eb50-8e4a-4ed3-959c-ba730be9863f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "44f5eb50-8e4a-4ed3-959c-ba730be9863f" (UID: "44f5eb50-8e4a-4ed3-959c-ba730be9863f"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.989329 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/063b2f8d-9ef8-4977-9071-ebb135ebc819-run-httpd\") pod \"ceilometer-0\" (UID: \"063b2f8d-9ef8-4977-9071-ebb135ebc819\") " pod="openstack/ceilometer-0"
Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.989357 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/063b2f8d-9ef8-4977-9071-ebb135ebc819-log-httpd\") pod \"ceilometer-0\" (UID: \"063b2f8d-9ef8-4977-9071-ebb135ebc819\") " pod="openstack/ceilometer-0"
Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.993326 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/063b2f8d-9ef8-4977-9071-ebb135ebc819-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"063b2f8d-9ef8-4977-9071-ebb135ebc819\") " pod="openstack/ceilometer-0"
Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.994394 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44f5eb50-8e4a-4ed3-959c-ba730be9863f-kube-api-access-nkvjt" (OuterVolumeSpecName: "kube-api-access-nkvjt") pod "44f5eb50-8e4a-4ed3-959c-ba730be9863f" (UID: "44f5eb50-8e4a-4ed3-959c-ba730be9863f"). InnerVolumeSpecName "kube-api-access-nkvjt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.994440 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/063b2f8d-9ef8-4977-9071-ebb135ebc819-scripts\") pod \"ceilometer-0\" (UID: \"063b2f8d-9ef8-4977-9071-ebb135ebc819\") " pod="openstack/ceilometer-0"
Jan 12 13:22:18 crc kubenswrapper[4580]: I0112 13:22:18.996349 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44f5eb50-8e4a-4ed3-959c-ba730be9863f-scripts" (OuterVolumeSpecName: "scripts") pod "44f5eb50-8e4a-4ed3-959c-ba730be9863f" (UID: "44f5eb50-8e4a-4ed3-959c-ba730be9863f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:22:19 crc kubenswrapper[4580]: I0112 13:22:19.002407 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/063b2f8d-9ef8-4977-9071-ebb135ebc819-config-data\") pod \"ceilometer-0\" (UID: \"063b2f8d-9ef8-4977-9071-ebb135ebc819\") " pod="openstack/ceilometer-0"
Jan 12 13:22:19 crc kubenswrapper[4580]: I0112 13:22:19.004144 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44f5eb50-8e4a-4ed3-959c-ba730be9863f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "44f5eb50-8e4a-4ed3-959c-ba730be9863f" (UID: "44f5eb50-8e4a-4ed3-959c-ba730be9863f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:22:19 crc kubenswrapper[4580]: I0112 13:22:19.004768 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/063b2f8d-9ef8-4977-9071-ebb135ebc819-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"063b2f8d-9ef8-4977-9071-ebb135ebc819\") " pod="openstack/ceilometer-0"
Jan 12 13:22:19 crc kubenswrapper[4580]: I0112 13:22:19.008097 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5rn2\" (UniqueName: \"kubernetes.io/projected/063b2f8d-9ef8-4977-9071-ebb135ebc819-kube-api-access-q5rn2\") pod \"ceilometer-0\" (UID: \"063b2f8d-9ef8-4977-9071-ebb135ebc819\") " pod="openstack/ceilometer-0"
Jan 12 13:22:19 crc kubenswrapper[4580]: I0112 13:22:19.019683 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44f5eb50-8e4a-4ed3-959c-ba730be9863f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "44f5eb50-8e4a-4ed3-959c-ba730be9863f" (UID: "44f5eb50-8e4a-4ed3-959c-ba730be9863f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:22:19 crc kubenswrapper[4580]: I0112 13:22:19.039674 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44f5eb50-8e4a-4ed3-959c-ba730be9863f-config-data" (OuterVolumeSpecName: "config-data") pod "44f5eb50-8e4a-4ed3-959c-ba730be9863f" (UID: "44f5eb50-8e4a-4ed3-959c-ba730be9863f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:22:19 crc kubenswrapper[4580]: I0112 13:22:19.090924 4580 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44f5eb50-8e4a-4ed3-959c-ba730be9863f-logs\") on node \"crc\" DevicePath \"\""
Jan 12 13:22:19 crc kubenswrapper[4580]: I0112 13:22:19.090948 4580 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44f5eb50-8e4a-4ed3-959c-ba730be9863f-scripts\") on node \"crc\" DevicePath \"\""
Jan 12 13:22:19 crc kubenswrapper[4580]: I0112 13:22:19.090958 4580 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/44f5eb50-8e4a-4ed3-959c-ba730be9863f-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 12 13:22:19 crc kubenswrapper[4580]: I0112 13:22:19.090982 4580 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44f5eb50-8e4a-4ed3-959c-ba730be9863f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 12 13:22:19 crc kubenswrapper[4580]: I0112 13:22:19.090991 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nkvjt\" (UniqueName: \"kubernetes.io/projected/44f5eb50-8e4a-4ed3-959c-ba730be9863f-kube-api-access-nkvjt\") on node \"crc\" DevicePath \"\""
Jan 12 13:22:19 crc kubenswrapper[4580]: I0112 13:22:19.091002 4580 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44f5eb50-8e4a-4ed3-959c-ba730be9863f-config-data\") on node \"crc\" DevicePath \"\""
Jan 12 13:22:19 crc kubenswrapper[4580]: I0112 13:22:19.091010 4580 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/44f5eb50-8e4a-4ed3-959c-ba730be9863f-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 12 13:22:19 crc kubenswrapper[4580]: I0112 13:22:19.174292 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 12 13:22:19 crc kubenswrapper[4580]: I0112 13:22:19.292195 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84c1ab4b-8921-4f4a-88dd-adf6e224d62c" path="/var/lib/kubelet/pods/84c1ab4b-8921-4f4a-88dd-adf6e224d62c/volumes"
Jan 12 13:22:19 crc kubenswrapper[4580]: I0112 13:22:19.521992 4580 generic.go:334] "Generic (PLEG): container finished" podID="44f5eb50-8e4a-4ed3-959c-ba730be9863f" containerID="7cd55da63c7bf98957145b7bc3dc8045539570cde204c28fa6931a5f6f3f298e" exitCode=0
Jan 12 13:22:19 crc kubenswrapper[4580]: I0112 13:22:19.522028 4580 generic.go:334] "Generic (PLEG): container finished" podID="44f5eb50-8e4a-4ed3-959c-ba730be9863f" containerID="f7edd79807bb939287c182e2aa39fa138e6ee888c3422182fecd101eeba4dd94" exitCode=143
Jan 12 13:22:19 crc kubenswrapper[4580]: I0112 13:22:19.522911 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 12 13:22:19 crc kubenswrapper[4580]: I0112 13:22:19.523381 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"44f5eb50-8e4a-4ed3-959c-ba730be9863f","Type":"ContainerDied","Data":"7cd55da63c7bf98957145b7bc3dc8045539570cde204c28fa6931a5f6f3f298e"}
Jan 12 13:22:19 crc kubenswrapper[4580]: I0112 13:22:19.523411 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"44f5eb50-8e4a-4ed3-959c-ba730be9863f","Type":"ContainerDied","Data":"f7edd79807bb939287c182e2aa39fa138e6ee888c3422182fecd101eeba4dd94"}
Jan 12 13:22:19 crc kubenswrapper[4580]: I0112 13:22:19.523424 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"44f5eb50-8e4a-4ed3-959c-ba730be9863f","Type":"ContainerDied","Data":"78dac73694649c7a22199d93c9cd253b8b626b142048e15730babb3f7b8e43ac"}
Jan 12 13:22:19 crc kubenswrapper[4580]: I0112 13:22:19.523444 4580 scope.go:117] "RemoveContainer" containerID="7cd55da63c7bf98957145b7bc3dc8045539570cde204c28fa6931a5f6f3f298e"
Jan 12 13:22:19 crc kubenswrapper[4580]: I0112 13:22:19.547056 4580 scope.go:117] "RemoveContainer" containerID="f7edd79807bb939287c182e2aa39fa138e6ee888c3422182fecd101eeba4dd94"
Jan 12 13:22:19 crc kubenswrapper[4580]: I0112 13:22:19.551597 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Jan 12 13:22:19 crc kubenswrapper[4580]: I0112 13:22:19.558536 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"]
Jan 12 13:22:19 crc kubenswrapper[4580]: I0112 13:22:19.571087 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Jan 12 13:22:19 crc kubenswrapper[4580]: E0112 13:22:19.571434 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44f5eb50-8e4a-4ed3-959c-ba730be9863f" containerName="cinder-api-log"
Jan 12 13:22:19 crc kubenswrapper[4580]: I0112 13:22:19.571453 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="44f5eb50-8e4a-4ed3-959c-ba730be9863f" containerName="cinder-api-log"
Jan 12 13:22:19 crc kubenswrapper[4580]: E0112 13:22:19.571466 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44f5eb50-8e4a-4ed3-959c-ba730be9863f" containerName="cinder-api"
Jan 12 13:22:19 crc kubenswrapper[4580]: I0112 13:22:19.571473 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="44f5eb50-8e4a-4ed3-959c-ba730be9863f" containerName="cinder-api"
Jan 12 13:22:19 crc kubenswrapper[4580]: I0112 13:22:19.571634 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="44f5eb50-8e4a-4ed3-959c-ba730be9863f" containerName="cinder-api"
Jan 12 13:22:19 crc kubenswrapper[4580]: I0112 13:22:19.571664 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="44f5eb50-8e4a-4ed3-959c-ba730be9863f" containerName="cinder-api-log"
Jan 12 13:22:19 crc kubenswrapper[4580]: I0112 13:22:19.572702 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 12 13:22:19 crc kubenswrapper[4580]: I0112 13:22:19.578739 4580 scope.go:117] "RemoveContainer" containerID="7cd55da63c7bf98957145b7bc3dc8045539570cde204c28fa6931a5f6f3f298e"
Jan 12 13:22:19 crc kubenswrapper[4580]: I0112 13:22:19.579288 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Jan 12 13:22:19 crc kubenswrapper[4580]: I0112 13:22:19.579633 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc"
Jan 12 13:22:19 crc kubenswrapper[4580]: I0112 13:22:19.579750 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc"
Jan 12 13:22:19 crc kubenswrapper[4580]: E0112 13:22:19.581221 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7cd55da63c7bf98957145b7bc3dc8045539570cde204c28fa6931a5f6f3f298e\": container with ID starting with 7cd55da63c7bf98957145b7bc3dc8045539570cde204c28fa6931a5f6f3f298e not found: ID does not exist" containerID="7cd55da63c7bf98957145b7bc3dc8045539570cde204c28fa6931a5f6f3f298e"
Jan 12 13:22:19 crc kubenswrapper[4580]: I0112 13:22:19.581254 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cd55da63c7bf98957145b7bc3dc8045539570cde204c28fa6931a5f6f3f298e"} err="failed to get container status \"7cd55da63c7bf98957145b7bc3dc8045539570cde204c28fa6931a5f6f3f298e\": rpc error: code = NotFound desc = could not find container \"7cd55da63c7bf98957145b7bc3dc8045539570cde204c28fa6931a5f6f3f298e\": container with ID starting with 7cd55da63c7bf98957145b7bc3dc8045539570cde204c28fa6931a5f6f3f298e not found: ID does not exist"
Jan 12 13:22:19 crc kubenswrapper[4580]: I0112 13:22:19.581277 4580 scope.go:117] "RemoveContainer" containerID="f7edd79807bb939287c182e2aa39fa138e6ee888c3422182fecd101eeba4dd94"
Jan 12 13:22:19 crc kubenswrapper[4580]: E0112 13:22:19.583501 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7edd79807bb939287c182e2aa39fa138e6ee888c3422182fecd101eeba4dd94\": container with ID starting with f7edd79807bb939287c182e2aa39fa138e6ee888c3422182fecd101eeba4dd94 not found: ID does not exist" containerID="f7edd79807bb939287c182e2aa39fa138e6ee888c3422182fecd101eeba4dd94"
Jan 12 13:22:19 crc kubenswrapper[4580]: I0112 13:22:19.583525 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7edd79807bb939287c182e2aa39fa138e6ee888c3422182fecd101eeba4dd94"} err="failed to get container status \"f7edd79807bb939287c182e2aa39fa138e6ee888c3422182fecd101eeba4dd94\": rpc error: code = NotFound desc = could not find container \"f7edd79807bb939287c182e2aa39fa138e6ee888c3422182fecd101eeba4dd94\": container with ID starting with f7edd79807bb939287c182e2aa39fa138e6ee888c3422182fecd101eeba4dd94 not found: ID does not exist"
Jan 12 13:22:19 crc kubenswrapper[4580]: I0112 13:22:19.583545 4580 scope.go:117] "RemoveContainer" containerID="7cd55da63c7bf98957145b7bc3dc8045539570cde204c28fa6931a5f6f3f298e"
Jan 12 13:22:19 crc kubenswrapper[4580]: I0112 13:22:19.584080 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cd55da63c7bf98957145b7bc3dc8045539570cde204c28fa6931a5f6f3f298e"} err="failed to get container status \"7cd55da63c7bf98957145b7bc3dc8045539570cde204c28fa6931a5f6f3f298e\": rpc error: code = NotFound desc = could not find container \"7cd55da63c7bf98957145b7bc3dc8045539570cde204c28fa6931a5f6f3f298e\": container with ID starting with 7cd55da63c7bf98957145b7bc3dc8045539570cde204c28fa6931a5f6f3f298e not found: ID does not exist"
Jan 12 13:22:19 crc kubenswrapper[4580]: I0112 13:22:19.584098 4580 scope.go:117] "RemoveContainer" containerID="f7edd79807bb939287c182e2aa39fa138e6ee888c3422182fecd101eeba4dd94"
Jan 12 13:22:19 crc kubenswrapper[4580]: I0112 13:22:19.585510 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7edd79807bb939287c182e2aa39fa138e6ee888c3422182fecd101eeba4dd94"} err="failed to get container status \"f7edd79807bb939287c182e2aa39fa138e6ee888c3422182fecd101eeba4dd94\": rpc error: code = NotFound desc = could not find container \"f7edd79807bb939287c182e2aa39fa138e6ee888c3422182fecd101eeba4dd94\": container with ID starting with f7edd79807bb939287c182e2aa39fa138e6ee888c3422182fecd101eeba4dd94 not found: ID does not exist"
Jan 12 13:22:19 crc kubenswrapper[4580]: I0112 13:22:19.585930 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Jan 12 13:22:19 crc kubenswrapper[4580]: I0112 13:22:19.597650 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 12 13:22:19 crc kubenswrapper[4580]: I0112 13:22:19.707513 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vhdg\" (UniqueName: \"kubernetes.io/projected/977708c4-8759-44d1-8d90-6226077e8044-kube-api-access-8vhdg\") pod \"cinder-api-0\" (UID: \"977708c4-8759-44d1-8d90-6226077e8044\") " pod="openstack/cinder-api-0"
Jan 12 13:22:19 crc kubenswrapper[4580]: I0112 13:22:19.707688 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/977708c4-8759-44d1-8d90-6226077e8044-logs\") pod \"cinder-api-0\" (UID: \"977708c4-8759-44d1-8d90-6226077e8044\") " pod="openstack/cinder-api-0"
Jan 12 13:22:19 crc kubenswrapper[4580]: I0112 13:22:19.707801 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/977708c4-8759-44d1-8d90-6226077e8044-public-tls-certs\") pod \"cinder-api-0\" (UID: \"977708c4-8759-44d1-8d90-6226077e8044\") " pod="openstack/cinder-api-0"
Jan 12 13:22:19 crc kubenswrapper[4580]: I0112 13:22:19.707910 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/977708c4-8759-44d1-8d90-6226077e8044-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"977708c4-8759-44d1-8d90-6226077e8044\") " pod="openstack/cinder-api-0"
Jan 12 13:22:19 crc kubenswrapper[4580]: I0112 13:22:19.707994 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/977708c4-8759-44d1-8d90-6226077e8044-etc-machine-id\") pod \"cinder-api-0\" (UID: \"977708c4-8759-44d1-8d90-6226077e8044\") " pod="openstack/cinder-api-0"
Jan 12 13:22:19 crc kubenswrapper[4580]: I0112 13:22:19.709670 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/977708c4-8759-44d1-8d90-6226077e8044-scripts\") pod \"cinder-api-0\" (UID: \"977708c4-8759-44d1-8d90-6226077e8044\") " pod="openstack/cinder-api-0"
Jan 12 13:22:19 crc kubenswrapper[4580]: I0112 13:22:19.709739 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/977708c4-8759-44d1-8d90-6226077e8044-config-data\") pod \"cinder-api-0\" (UID: \"977708c4-8759-44d1-8d90-6226077e8044\") " pod="openstack/cinder-api-0"
Jan 12 13:22:19 crc kubenswrapper[4580]: I0112 13:22:19.709776 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/977708c4-8759-44d1-8d90-6226077e8044-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"977708c4-8759-44d1-8d90-6226077e8044\") " pod="openstack/cinder-api-0"
Jan 12 13:22:19 crc
kubenswrapper[4580]: I0112 13:22:19.710150 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/977708c4-8759-44d1-8d90-6226077e8044-config-data-custom\") pod \"cinder-api-0\" (UID: \"977708c4-8759-44d1-8d90-6226077e8044\") " pod="openstack/cinder-api-0" Jan 12 13:22:19 crc kubenswrapper[4580]: I0112 13:22:19.812209 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vhdg\" (UniqueName: \"kubernetes.io/projected/977708c4-8759-44d1-8d90-6226077e8044-kube-api-access-8vhdg\") pod \"cinder-api-0\" (UID: \"977708c4-8759-44d1-8d90-6226077e8044\") " pod="openstack/cinder-api-0" Jan 12 13:22:19 crc kubenswrapper[4580]: I0112 13:22:19.812260 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/977708c4-8759-44d1-8d90-6226077e8044-logs\") pod \"cinder-api-0\" (UID: \"977708c4-8759-44d1-8d90-6226077e8044\") " pod="openstack/cinder-api-0" Jan 12 13:22:19 crc kubenswrapper[4580]: I0112 13:22:19.812286 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/977708c4-8759-44d1-8d90-6226077e8044-public-tls-certs\") pod \"cinder-api-0\" (UID: \"977708c4-8759-44d1-8d90-6226077e8044\") " pod="openstack/cinder-api-0" Jan 12 13:22:19 crc kubenswrapper[4580]: I0112 13:22:19.812321 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/977708c4-8759-44d1-8d90-6226077e8044-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"977708c4-8759-44d1-8d90-6226077e8044\") " pod="openstack/cinder-api-0" Jan 12 13:22:19 crc kubenswrapper[4580]: I0112 13:22:19.812340 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/977708c4-8759-44d1-8d90-6226077e8044-etc-machine-id\") pod \"cinder-api-0\" (UID: \"977708c4-8759-44d1-8d90-6226077e8044\") " pod="openstack/cinder-api-0" Jan 12 13:22:19 crc kubenswrapper[4580]: I0112 13:22:19.812401 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/977708c4-8759-44d1-8d90-6226077e8044-scripts\") pod \"cinder-api-0\" (UID: \"977708c4-8759-44d1-8d90-6226077e8044\") " pod="openstack/cinder-api-0" Jan 12 13:22:19 crc kubenswrapper[4580]: I0112 13:22:19.812422 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/977708c4-8759-44d1-8d90-6226077e8044-config-data\") pod \"cinder-api-0\" (UID: \"977708c4-8759-44d1-8d90-6226077e8044\") " pod="openstack/cinder-api-0" Jan 12 13:22:19 crc kubenswrapper[4580]: I0112 13:22:19.812444 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/977708c4-8759-44d1-8d90-6226077e8044-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"977708c4-8759-44d1-8d90-6226077e8044\") " pod="openstack/cinder-api-0" Jan 12 13:22:19 crc kubenswrapper[4580]: I0112 13:22:19.812474 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/977708c4-8759-44d1-8d90-6226077e8044-config-data-custom\") pod \"cinder-api-0\" (UID: \"977708c4-8759-44d1-8d90-6226077e8044\") " pod="openstack/cinder-api-0" Jan 12 13:22:19 crc kubenswrapper[4580]: I0112 13:22:19.812752 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/977708c4-8759-44d1-8d90-6226077e8044-logs\") pod \"cinder-api-0\" (UID: \"977708c4-8759-44d1-8d90-6226077e8044\") " pod="openstack/cinder-api-0" Jan 12 13:22:19 crc kubenswrapper[4580]: I0112 13:22:19.813663 4580 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/977708c4-8759-44d1-8d90-6226077e8044-etc-machine-id\") pod \"cinder-api-0\" (UID: \"977708c4-8759-44d1-8d90-6226077e8044\") " pod="openstack/cinder-api-0" Jan 12 13:22:19 crc kubenswrapper[4580]: I0112 13:22:19.819365 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/977708c4-8759-44d1-8d90-6226077e8044-config-data\") pod \"cinder-api-0\" (UID: \"977708c4-8759-44d1-8d90-6226077e8044\") " pod="openstack/cinder-api-0" Jan 12 13:22:19 crc kubenswrapper[4580]: I0112 13:22:19.819475 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/977708c4-8759-44d1-8d90-6226077e8044-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"977708c4-8759-44d1-8d90-6226077e8044\") " pod="openstack/cinder-api-0" Jan 12 13:22:19 crc kubenswrapper[4580]: I0112 13:22:19.821188 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/977708c4-8759-44d1-8d90-6226077e8044-scripts\") pod \"cinder-api-0\" (UID: \"977708c4-8759-44d1-8d90-6226077e8044\") " pod="openstack/cinder-api-0" Jan 12 13:22:19 crc kubenswrapper[4580]: I0112 13:22:19.821239 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/977708c4-8759-44d1-8d90-6226077e8044-public-tls-certs\") pod \"cinder-api-0\" (UID: \"977708c4-8759-44d1-8d90-6226077e8044\") " pod="openstack/cinder-api-0" Jan 12 13:22:19 crc kubenswrapper[4580]: I0112 13:22:19.821571 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/977708c4-8759-44d1-8d90-6226077e8044-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"977708c4-8759-44d1-8d90-6226077e8044\") " 
pod="openstack/cinder-api-0" Jan 12 13:22:19 crc kubenswrapper[4580]: I0112 13:22:19.821809 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/977708c4-8759-44d1-8d90-6226077e8044-config-data-custom\") pod \"cinder-api-0\" (UID: \"977708c4-8759-44d1-8d90-6226077e8044\") " pod="openstack/cinder-api-0" Jan 12 13:22:19 crc kubenswrapper[4580]: I0112 13:22:19.829885 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vhdg\" (UniqueName: \"kubernetes.io/projected/977708c4-8759-44d1-8d90-6226077e8044-kube-api-access-8vhdg\") pod \"cinder-api-0\" (UID: \"977708c4-8759-44d1-8d90-6226077e8044\") " pod="openstack/cinder-api-0" Jan 12 13:22:19 crc kubenswrapper[4580]: I0112 13:22:19.878726 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 12 13:22:19 crc kubenswrapper[4580]: I0112 13:22:19.892371 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 12 13:22:20 crc kubenswrapper[4580]: I0112 13:22:20.426244 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 12 13:22:20 crc kubenswrapper[4580]: W0112 13:22:20.475250 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod977708c4_8759_44d1_8d90_6226077e8044.slice/crio-dda1b4c57fcb43799a1773189288556b5ab88adf3df5a94fa17df0b96ca10ecb WatchSource:0}: Error finding container dda1b4c57fcb43799a1773189288556b5ab88adf3df5a94fa17df0b96ca10ecb: Status 404 returned error can't find the container with id dda1b4c57fcb43799a1773189288556b5ab88adf3df5a94fa17df0b96ca10ecb Jan 12 13:22:20 crc kubenswrapper[4580]: I0112 13:22:20.539479 4580 generic.go:334] "Generic (PLEG): container finished" podID="db7a06f9-1a77-4a21-ac05-0c73655fa8d0" containerID="d229c75be6283d85033aa466caeb340478b38a876d8c032f1e88e609bb362a9a" exitCode=137 Jan 12 13:22:20 crc kubenswrapper[4580]: I0112 13:22:20.539692 4580 generic.go:334] "Generic (PLEG): container finished" podID="db7a06f9-1a77-4a21-ac05-0c73655fa8d0" containerID="97ec417f47abd1751148fc4f57c93a9cb3f0ceca6a44d52a0efd7ffb9cca693a" exitCode=137 Jan 12 13:22:20 crc kubenswrapper[4580]: I0112 13:22:20.539735 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6868cd5fd5-ct7dn" event={"ID":"db7a06f9-1a77-4a21-ac05-0c73655fa8d0","Type":"ContainerDied","Data":"d229c75be6283d85033aa466caeb340478b38a876d8c032f1e88e609bb362a9a"} Jan 12 13:22:20 crc kubenswrapper[4580]: I0112 13:22:20.539763 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6868cd5fd5-ct7dn" event={"ID":"db7a06f9-1a77-4a21-ac05-0c73655fa8d0","Type":"ContainerDied","Data":"97ec417f47abd1751148fc4f57c93a9cb3f0ceca6a44d52a0efd7ffb9cca693a"} Jan 12 13:22:20 crc kubenswrapper[4580]: I0112 13:22:20.550816 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-api-0" event={"ID":"977708c4-8759-44d1-8d90-6226077e8044","Type":"ContainerStarted","Data":"dda1b4c57fcb43799a1773189288556b5ab88adf3df5a94fa17df0b96ca10ecb"} Jan 12 13:22:20 crc kubenswrapper[4580]: I0112 13:22:20.552332 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"063b2f8d-9ef8-4977-9071-ebb135ebc819","Type":"ContainerStarted","Data":"4c1d58120e1f3d3d4baf4906f7a26cd4683eeb4a261b7a9c8cf9987efb6b7006"} Jan 12 13:22:20 crc kubenswrapper[4580]: I0112 13:22:20.552356 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"063b2f8d-9ef8-4977-9071-ebb135ebc819","Type":"ContainerStarted","Data":"9e721ccd675b2b96743868e560ef5696509e8af9effd3aad609d70109e771730"} Jan 12 13:22:20 crc kubenswrapper[4580]: I0112 13:22:20.555686 4580 generic.go:334] "Generic (PLEG): container finished" podID="fca18973-3724-49c5-b8c4-cf6beb66c288" containerID="236053ae734a01a9f2ac22c12a153508cd784597dbf97b70fdc6e89668916550" exitCode=137 Jan 12 13:22:20 crc kubenswrapper[4580]: I0112 13:22:20.555720 4580 generic.go:334] "Generic (PLEG): container finished" podID="fca18973-3724-49c5-b8c4-cf6beb66c288" containerID="9b2735e034dc026b388dcd0a7bf7297e976eda33e67baeccb45d4c849b4c2ee0" exitCode=137 Jan 12 13:22:20 crc kubenswrapper[4580]: I0112 13:22:20.555753 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74f4fd9547-lhpct" event={"ID":"fca18973-3724-49c5-b8c4-cf6beb66c288","Type":"ContainerDied","Data":"236053ae734a01a9f2ac22c12a153508cd784597dbf97b70fdc6e89668916550"} Jan 12 13:22:20 crc kubenswrapper[4580]: I0112 13:22:20.555793 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74f4fd9547-lhpct" event={"ID":"fca18973-3724-49c5-b8c4-cf6beb66c288","Type":"ContainerDied","Data":"9b2735e034dc026b388dcd0a7bf7297e976eda33e67baeccb45d4c849b4c2ee0"} Jan 12 13:22:20 crc kubenswrapper[4580]: I0112 13:22:20.564422 4580 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5c66d9fb7c-tgbgc" Jan 12 13:22:20 crc kubenswrapper[4580]: I0112 13:22:20.636956 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-596bbb8b6-5jfvl"] Jan 12 13:22:20 crc kubenswrapper[4580]: I0112 13:22:20.637281 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-596bbb8b6-5jfvl" podUID="5b622df8-141e-468d-8f8d-86622f286566" containerName="neutron-api" containerID="cri-o://9d8b606903131cb31075845e04f2a766ad2985affb6777d5c929dc3513c2d8bc" gracePeriod=30 Jan 12 13:22:20 crc kubenswrapper[4580]: I0112 13:22:20.637674 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-596bbb8b6-5jfvl" podUID="5b622df8-141e-468d-8f8d-86622f286566" containerName="neutron-httpd" containerID="cri-o://cfd0f8fb73bc0f8bfbda09f6ff39be45a13c3eee5aa8ecf09832d7a2b96cdaa7" gracePeriod=30 Jan 12 13:22:20 crc kubenswrapper[4580]: I0112 13:22:20.807588 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-74f4fd9547-lhpct" Jan 12 13:22:20 crc kubenswrapper[4580]: I0112 13:22:20.856318 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkj25\" (UniqueName: \"kubernetes.io/projected/fca18973-3724-49c5-b8c4-cf6beb66c288-kube-api-access-kkj25\") pod \"fca18973-3724-49c5-b8c4-cf6beb66c288\" (UID: \"fca18973-3724-49c5-b8c4-cf6beb66c288\") " Jan 12 13:22:20 crc kubenswrapper[4580]: I0112 13:22:20.856441 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fca18973-3724-49c5-b8c4-cf6beb66c288-horizon-secret-key\") pod \"fca18973-3724-49c5-b8c4-cf6beb66c288\" (UID: \"fca18973-3724-49c5-b8c4-cf6beb66c288\") " Jan 12 13:22:20 crc kubenswrapper[4580]: I0112 13:22:20.856511 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fca18973-3724-49c5-b8c4-cf6beb66c288-config-data\") pod \"fca18973-3724-49c5-b8c4-cf6beb66c288\" (UID: \"fca18973-3724-49c5-b8c4-cf6beb66c288\") " Jan 12 13:22:20 crc kubenswrapper[4580]: I0112 13:22:20.856606 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fca18973-3724-49c5-b8c4-cf6beb66c288-scripts\") pod \"fca18973-3724-49c5-b8c4-cf6beb66c288\" (UID: \"fca18973-3724-49c5-b8c4-cf6beb66c288\") " Jan 12 13:22:20 crc kubenswrapper[4580]: I0112 13:22:20.856728 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fca18973-3724-49c5-b8c4-cf6beb66c288-logs\") pod \"fca18973-3724-49c5-b8c4-cf6beb66c288\" (UID: \"fca18973-3724-49c5-b8c4-cf6beb66c288\") " Jan 12 13:22:20 crc kubenswrapper[4580]: I0112 13:22:20.859182 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/fca18973-3724-49c5-b8c4-cf6beb66c288-logs" (OuterVolumeSpecName: "logs") pod "fca18973-3724-49c5-b8c4-cf6beb66c288" (UID: "fca18973-3724-49c5-b8c4-cf6beb66c288"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 12 13:22:20 crc kubenswrapper[4580]: I0112 13:22:20.863653 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fca18973-3724-49c5-b8c4-cf6beb66c288-kube-api-access-kkj25" (OuterVolumeSpecName: "kube-api-access-kkj25") pod "fca18973-3724-49c5-b8c4-cf6beb66c288" (UID: "fca18973-3724-49c5-b8c4-cf6beb66c288"). InnerVolumeSpecName "kube-api-access-kkj25". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:22:20 crc kubenswrapper[4580]: I0112 13:22:20.864865 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fca18973-3724-49c5-b8c4-cf6beb66c288-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "fca18973-3724-49c5-b8c4-cf6beb66c288" (UID: "fca18973-3724-49c5-b8c4-cf6beb66c288"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:22:20 crc kubenswrapper[4580]: I0112 13:22:20.886825 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fca18973-3724-49c5-b8c4-cf6beb66c288-config-data" (OuterVolumeSpecName: "config-data") pod "fca18973-3724-49c5-b8c4-cf6beb66c288" (UID: "fca18973-3724-49c5-b8c4-cf6beb66c288"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:22:20 crc kubenswrapper[4580]: I0112 13:22:20.890621 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fca18973-3724-49c5-b8c4-cf6beb66c288-scripts" (OuterVolumeSpecName: "scripts") pod "fca18973-3724-49c5-b8c4-cf6beb66c288" (UID: "fca18973-3724-49c5-b8c4-cf6beb66c288"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:22:20 crc kubenswrapper[4580]: I0112 13:22:20.891075 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6868cd5fd5-ct7dn" Jan 12 13:22:20 crc kubenswrapper[4580]: I0112 13:22:20.960229 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6bcr\" (UniqueName: \"kubernetes.io/projected/db7a06f9-1a77-4a21-ac05-0c73655fa8d0-kube-api-access-n6bcr\") pod \"db7a06f9-1a77-4a21-ac05-0c73655fa8d0\" (UID: \"db7a06f9-1a77-4a21-ac05-0c73655fa8d0\") " Jan 12 13:22:20 crc kubenswrapper[4580]: I0112 13:22:20.960561 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/db7a06f9-1a77-4a21-ac05-0c73655fa8d0-config-data\") pod \"db7a06f9-1a77-4a21-ac05-0c73655fa8d0\" (UID: \"db7a06f9-1a77-4a21-ac05-0c73655fa8d0\") " Jan 12 13:22:20 crc kubenswrapper[4580]: I0112 13:22:20.960644 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/db7a06f9-1a77-4a21-ac05-0c73655fa8d0-scripts\") pod \"db7a06f9-1a77-4a21-ac05-0c73655fa8d0\" (UID: \"db7a06f9-1a77-4a21-ac05-0c73655fa8d0\") " Jan 12 13:22:20 crc kubenswrapper[4580]: I0112 13:22:20.960717 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db7a06f9-1a77-4a21-ac05-0c73655fa8d0-logs\") pod \"db7a06f9-1a77-4a21-ac05-0c73655fa8d0\" (UID: \"db7a06f9-1a77-4a21-ac05-0c73655fa8d0\") " Jan 12 13:22:20 crc kubenswrapper[4580]: I0112 13:22:20.960750 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/db7a06f9-1a77-4a21-ac05-0c73655fa8d0-horizon-secret-key\") pod \"db7a06f9-1a77-4a21-ac05-0c73655fa8d0\" (UID: \"db7a06f9-1a77-4a21-ac05-0c73655fa8d0\") " Jan 12 
13:22:20 crc kubenswrapper[4580]: I0112 13:22:20.961002 4580 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fca18973-3724-49c5-b8c4-cf6beb66c288-scripts\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:20 crc kubenswrapper[4580]: I0112 13:22:20.961013 4580 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fca18973-3724-49c5-b8c4-cf6beb66c288-logs\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:20 crc kubenswrapper[4580]: I0112 13:22:20.961022 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkj25\" (UniqueName: \"kubernetes.io/projected/fca18973-3724-49c5-b8c4-cf6beb66c288-kube-api-access-kkj25\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:20 crc kubenswrapper[4580]: I0112 13:22:20.961031 4580 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fca18973-3724-49c5-b8c4-cf6beb66c288-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:20 crc kubenswrapper[4580]: I0112 13:22:20.961040 4580 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fca18973-3724-49c5-b8c4-cf6beb66c288-config-data\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:20 crc kubenswrapper[4580]: I0112 13:22:20.961790 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db7a06f9-1a77-4a21-ac05-0c73655fa8d0-logs" (OuterVolumeSpecName: "logs") pod "db7a06f9-1a77-4a21-ac05-0c73655fa8d0" (UID: "db7a06f9-1a77-4a21-ac05-0c73655fa8d0"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 12 13:22:20 crc kubenswrapper[4580]: I0112 13:22:20.964526 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db7a06f9-1a77-4a21-ac05-0c73655fa8d0-kube-api-access-n6bcr" (OuterVolumeSpecName: "kube-api-access-n6bcr") pod "db7a06f9-1a77-4a21-ac05-0c73655fa8d0" (UID: "db7a06f9-1a77-4a21-ac05-0c73655fa8d0"). InnerVolumeSpecName "kube-api-access-n6bcr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:22:20 crc kubenswrapper[4580]: I0112 13:22:20.972277 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db7a06f9-1a77-4a21-ac05-0c73655fa8d0-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "db7a06f9-1a77-4a21-ac05-0c73655fa8d0" (UID: "db7a06f9-1a77-4a21-ac05-0c73655fa8d0"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:22:20 crc kubenswrapper[4580]: I0112 13:22:20.980954 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db7a06f9-1a77-4a21-ac05-0c73655fa8d0-config-data" (OuterVolumeSpecName: "config-data") pod "db7a06f9-1a77-4a21-ac05-0c73655fa8d0" (UID: "db7a06f9-1a77-4a21-ac05-0c73655fa8d0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:22:20 crc kubenswrapper[4580]: I0112 13:22:20.987930 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db7a06f9-1a77-4a21-ac05-0c73655fa8d0-scripts" (OuterVolumeSpecName: "scripts") pod "db7a06f9-1a77-4a21-ac05-0c73655fa8d0" (UID: "db7a06f9-1a77-4a21-ac05-0c73655fa8d0"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:22:21 crc kubenswrapper[4580]: I0112 13:22:21.062400 4580 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/db7a06f9-1a77-4a21-ac05-0c73655fa8d0-scripts\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:21 crc kubenswrapper[4580]: I0112 13:22:21.062428 4580 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db7a06f9-1a77-4a21-ac05-0c73655fa8d0-logs\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:21 crc kubenswrapper[4580]: I0112 13:22:21.062441 4580 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/db7a06f9-1a77-4a21-ac05-0c73655fa8d0-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:21 crc kubenswrapper[4580]: I0112 13:22:21.062454 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6bcr\" (UniqueName: \"kubernetes.io/projected/db7a06f9-1a77-4a21-ac05-0c73655fa8d0-kube-api-access-n6bcr\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:21 crc kubenswrapper[4580]: I0112 13:22:21.062464 4580 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/db7a06f9-1a77-4a21-ac05-0c73655fa8d0-config-data\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:21 crc kubenswrapper[4580]: I0112 13:22:21.291825 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44f5eb50-8e4a-4ed3-959c-ba730be9863f" path="/var/lib/kubelet/pods/44f5eb50-8e4a-4ed3-959c-ba730be9863f/volumes" Jan 12 13:22:21 crc kubenswrapper[4580]: I0112 13:22:21.578583 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"977708c4-8759-44d1-8d90-6226077e8044","Type":"ContainerStarted","Data":"de844c909d45f7206a006bc2fb946e409a382ec1ce704a4bd80ce06e659ba5f9"} Jan 12 13:22:21 crc kubenswrapper[4580]: I0112 13:22:21.582480 4580 
generic.go:334] "Generic (PLEG): container finished" podID="82d9d66d-ff92-4164-96a9-c82a919cce00" containerID="4bf51608ffbb6382f8a1657a6350aa5eb00895dc10e9f095ae18ca64dc498fdf" exitCode=137 Jan 12 13:22:21 crc kubenswrapper[4580]: I0112 13:22:21.582566 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-bb57d5f45-mb7xb" event={"ID":"82d9d66d-ff92-4164-96a9-c82a919cce00","Type":"ContainerDied","Data":"4bf51608ffbb6382f8a1657a6350aa5eb00895dc10e9f095ae18ca64dc498fdf"} Jan 12 13:22:21 crc kubenswrapper[4580]: I0112 13:22:21.586290 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"063b2f8d-9ef8-4977-9071-ebb135ebc819","Type":"ContainerStarted","Data":"eecdc9c1445138491de857060fd761d45cbda00f3da822ae991700301a1fc4fa"} Jan 12 13:22:21 crc kubenswrapper[4580]: I0112 13:22:21.589150 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74f4fd9547-lhpct" event={"ID":"fca18973-3724-49c5-b8c4-cf6beb66c288","Type":"ContainerDied","Data":"223135d4ddfb62c157896dd77e0ec342c29da5eeb2ccf991089e63a95fc86227"} Jan 12 13:22:21 crc kubenswrapper[4580]: I0112 13:22:21.589244 4580 scope.go:117] "RemoveContainer" containerID="236053ae734a01a9f2ac22c12a153508cd784597dbf97b70fdc6e89668916550" Jan 12 13:22:21 crc kubenswrapper[4580]: I0112 13:22:21.589236 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-74f4fd9547-lhpct" Jan 12 13:22:21 crc kubenswrapper[4580]: I0112 13:22:21.595467 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6868cd5fd5-ct7dn" Jan 12 13:22:21 crc kubenswrapper[4580]: I0112 13:22:21.595469 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6868cd5fd5-ct7dn" event={"ID":"db7a06f9-1a77-4a21-ac05-0c73655fa8d0","Type":"ContainerDied","Data":"5775d1544fd0d7d266e9ba8c60d832e0e10fa84af4c896d39519e2bc458ca743"} Jan 12 13:22:21 crc kubenswrapper[4580]: I0112 13:22:21.602254 4580 generic.go:334] "Generic (PLEG): container finished" podID="5b622df8-141e-468d-8f8d-86622f286566" containerID="cfd0f8fb73bc0f8bfbda09f6ff39be45a13c3eee5aa8ecf09832d7a2b96cdaa7" exitCode=0 Jan 12 13:22:21 crc kubenswrapper[4580]: I0112 13:22:21.602297 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-596bbb8b6-5jfvl" event={"ID":"5b622df8-141e-468d-8f8d-86622f286566","Type":"ContainerDied","Data":"cfd0f8fb73bc0f8bfbda09f6ff39be45a13c3eee5aa8ecf09832d7a2b96cdaa7"} Jan 12 13:22:21 crc kubenswrapper[4580]: I0112 13:22:21.723707 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-74f4fd9547-lhpct"] Jan 12 13:22:21 crc kubenswrapper[4580]: I0112 13:22:21.731373 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-74f4fd9547-lhpct"] Jan 12 13:22:21 crc kubenswrapper[4580]: I0112 13:22:21.737431 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6868cd5fd5-ct7dn"] Jan 12 13:22:21 crc kubenswrapper[4580]: I0112 13:22:21.743391 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6868cd5fd5-ct7dn"] Jan 12 13:22:21 crc kubenswrapper[4580]: I0112 13:22:21.892836 4580 scope.go:117] "RemoveContainer" containerID="9b2735e034dc026b388dcd0a7bf7297e976eda33e67baeccb45d4c849b4c2ee0" Jan 12 13:22:21 crc kubenswrapper[4580]: I0112 13:22:21.927091 4580 scope.go:117] "RemoveContainer" containerID="d229c75be6283d85033aa466caeb340478b38a876d8c032f1e88e609bb362a9a" Jan 12 13:22:22 crc kubenswrapper[4580]: I0112 13:22:22.062784 
4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-bb57d5f45-mb7xb"
Jan 12 13:22:22 crc kubenswrapper[4580]: I0112 13:22:22.085054 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/82d9d66d-ff92-4164-96a9-c82a919cce00-config-data\") pod \"82d9d66d-ff92-4164-96a9-c82a919cce00\" (UID: \"82d9d66d-ff92-4164-96a9-c82a919cce00\") "
Jan 12 13:22:22 crc kubenswrapper[4580]: I0112 13:22:22.085137 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82d9d66d-ff92-4164-96a9-c82a919cce00-logs\") pod \"82d9d66d-ff92-4164-96a9-c82a919cce00\" (UID: \"82d9d66d-ff92-4164-96a9-c82a919cce00\") "
Jan 12 13:22:22 crc kubenswrapper[4580]: I0112 13:22:22.085216 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/82d9d66d-ff92-4164-96a9-c82a919cce00-scripts\") pod \"82d9d66d-ff92-4164-96a9-c82a919cce00\" (UID: \"82d9d66d-ff92-4164-96a9-c82a919cce00\") "
Jan 12 13:22:22 crc kubenswrapper[4580]: I0112 13:22:22.085240 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/82d9d66d-ff92-4164-96a9-c82a919cce00-horizon-secret-key\") pod \"82d9d66d-ff92-4164-96a9-c82a919cce00\" (UID: \"82d9d66d-ff92-4164-96a9-c82a919cce00\") "
Jan 12 13:22:22 crc kubenswrapper[4580]: I0112 13:22:22.085411 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrxld\" (UniqueName: \"kubernetes.io/projected/82d9d66d-ff92-4164-96a9-c82a919cce00-kube-api-access-qrxld\") pod \"82d9d66d-ff92-4164-96a9-c82a919cce00\" (UID: \"82d9d66d-ff92-4164-96a9-c82a919cce00\") "
Jan 12 13:22:22 crc kubenswrapper[4580]: I0112 13:22:22.085822 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82d9d66d-ff92-4164-96a9-c82a919cce00-logs" (OuterVolumeSpecName: "logs") pod "82d9d66d-ff92-4164-96a9-c82a919cce00" (UID: "82d9d66d-ff92-4164-96a9-c82a919cce00"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 12 13:22:22 crc kubenswrapper[4580]: I0112 13:22:22.086208 4580 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82d9d66d-ff92-4164-96a9-c82a919cce00-logs\") on node \"crc\" DevicePath \"\""
Jan 12 13:22:22 crc kubenswrapper[4580]: I0112 13:22:22.098145 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82d9d66d-ff92-4164-96a9-c82a919cce00-kube-api-access-qrxld" (OuterVolumeSpecName: "kube-api-access-qrxld") pod "82d9d66d-ff92-4164-96a9-c82a919cce00" (UID: "82d9d66d-ff92-4164-96a9-c82a919cce00"). InnerVolumeSpecName "kube-api-access-qrxld". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 13:22:22 crc kubenswrapper[4580]: I0112 13:22:22.098183 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82d9d66d-ff92-4164-96a9-c82a919cce00-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "82d9d66d-ff92-4164-96a9-c82a919cce00" (UID: "82d9d66d-ff92-4164-96a9-c82a919cce00"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:22:22 crc kubenswrapper[4580]: I0112 13:22:22.135369 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zfsd9"
Jan 12 13:22:22 crc kubenswrapper[4580]: I0112 13:22:22.135591 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82d9d66d-ff92-4164-96a9-c82a919cce00-scripts" (OuterVolumeSpecName: "scripts") pod "82d9d66d-ff92-4164-96a9-c82a919cce00" (UID: "82d9d66d-ff92-4164-96a9-c82a919cce00"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:22:22 crc kubenswrapper[4580]: I0112 13:22:22.135815 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82d9d66d-ff92-4164-96a9-c82a919cce00-config-data" (OuterVolumeSpecName: "config-data") pod "82d9d66d-ff92-4164-96a9-c82a919cce00" (UID: "82d9d66d-ff92-4164-96a9-c82a919cce00"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:22:22 crc kubenswrapper[4580]: I0112 13:22:22.185615 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zfsd9"
Jan 12 13:22:22 crc kubenswrapper[4580]: I0112 13:22:22.193027 4580 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/82d9d66d-ff92-4164-96a9-c82a919cce00-scripts\") on node \"crc\" DevicePath \"\""
Jan 12 13:22:22 crc kubenswrapper[4580]: I0112 13:22:22.193057 4580 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/82d9d66d-ff92-4164-96a9-c82a919cce00-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Jan 12 13:22:22 crc kubenswrapper[4580]: I0112 13:22:22.193069 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrxld\" (UniqueName: \"kubernetes.io/projected/82d9d66d-ff92-4164-96a9-c82a919cce00-kube-api-access-qrxld\") on node \"crc\" DevicePath \"\""
Jan 12 13:22:22 crc kubenswrapper[4580]: I0112 13:22:22.193081 4580 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/82d9d66d-ff92-4164-96a9-c82a919cce00-config-data\") on node \"crc\" DevicePath \"\""
Jan 12 13:22:22 crc kubenswrapper[4580]: I0112 13:22:22.260228 4580 scope.go:117] "RemoveContainer" containerID="97ec417f47abd1751148fc4f57c93a9cb3f0ceca6a44d52a0efd7ffb9cca693a"
Jan 12 13:22:22 crc kubenswrapper[4580]: I0112 13:22:22.615592 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"977708c4-8759-44d1-8d90-6226077e8044","Type":"ContainerStarted","Data":"fa61cadab5ac6e4fc0518e6ce96e02ebbeefd23f2992e7e0970ee9129cb54f99"}
Jan 12 13:22:22 crc kubenswrapper[4580]: I0112 13:22:22.616663 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Jan 12 13:22:22 crc kubenswrapper[4580]: I0112 13:22:22.618125 4580 generic.go:334] "Generic (PLEG): container finished" podID="82d9d66d-ff92-4164-96a9-c82a919cce00" containerID="9e39f1866a90ff5e533ca3990331bc115d46656225c4f17c740b6e3f46bf2f96" exitCode=137
Jan 12 13:22:22 crc kubenswrapper[4580]: I0112 13:22:22.618168 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-bb57d5f45-mb7xb" event={"ID":"82d9d66d-ff92-4164-96a9-c82a919cce00","Type":"ContainerDied","Data":"9e39f1866a90ff5e533ca3990331bc115d46656225c4f17c740b6e3f46bf2f96"}
Jan 12 13:22:22 crc kubenswrapper[4580]: I0112 13:22:22.618186 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-bb57d5f45-mb7xb" event={"ID":"82d9d66d-ff92-4164-96a9-c82a919cce00","Type":"ContainerDied","Data":"bf259687b6c05e8e9b9feb7e58f94d294196f27dc3be954572d04e9733b83001"}
Jan 12 13:22:22 crc kubenswrapper[4580]: I0112 13:22:22.618203 4580 scope.go:117] "RemoveContainer" containerID="9e39f1866a90ff5e533ca3990331bc115d46656225c4f17c740b6e3f46bf2f96"
Jan 12 13:22:22 crc kubenswrapper[4580]: I0112 13:22:22.618282 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-bb57d5f45-mb7xb"
Jan 12 13:22:22 crc kubenswrapper[4580]: I0112 13:22:22.629866 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"063b2f8d-9ef8-4977-9071-ebb135ebc819","Type":"ContainerStarted","Data":"70aacb67e208b2728685ad433fa21a479aa6120d1249d45c19a4b7919e46d7ec"}
Jan 12 13:22:22 crc kubenswrapper[4580]: I0112 13:22:22.657363 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.657345239 podStartE2EDuration="3.657345239s" podCreationTimestamp="2026-01-12 13:22:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:22:22.641179495 +0000 UTC m=+941.685398184" watchObservedRunningTime="2026-01-12 13:22:22.657345239 +0000 UTC m=+941.701563930"
Jan 12 13:22:22 crc kubenswrapper[4580]: I0112 13:22:22.676533 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-bb57d5f45-mb7xb"]
Jan 12 13:22:22 crc kubenswrapper[4580]: I0112 13:22:22.680755 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-bb57d5f45-mb7xb"]
Jan 12 13:22:22 crc kubenswrapper[4580]: I0112 13:22:22.776392 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zfsd9"]
Jan 12 13:22:22 crc kubenswrapper[4580]: I0112 13:22:22.792182 4580 scope.go:117] "RemoveContainer" containerID="4bf51608ffbb6382f8a1657a6350aa5eb00895dc10e9f095ae18ca64dc498fdf"
Jan 12 13:22:22 crc kubenswrapper[4580]: I0112 13:22:22.815385 4580 scope.go:117] "RemoveContainer" containerID="9e39f1866a90ff5e533ca3990331bc115d46656225c4f17c740b6e3f46bf2f96"
Jan 12 13:22:22 crc kubenswrapper[4580]: E0112 13:22:22.815776 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e39f1866a90ff5e533ca3990331bc115d46656225c4f17c740b6e3f46bf2f96\": container with ID starting with 9e39f1866a90ff5e533ca3990331bc115d46656225c4f17c740b6e3f46bf2f96 not found: ID does not exist" containerID="9e39f1866a90ff5e533ca3990331bc115d46656225c4f17c740b6e3f46bf2f96"
Jan 12 13:22:22 crc kubenswrapper[4580]: I0112 13:22:22.815837 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e39f1866a90ff5e533ca3990331bc115d46656225c4f17c740b6e3f46bf2f96"} err="failed to get container status \"9e39f1866a90ff5e533ca3990331bc115d46656225c4f17c740b6e3f46bf2f96\": rpc error: code = NotFound desc = could not find container \"9e39f1866a90ff5e533ca3990331bc115d46656225c4f17c740b6e3f46bf2f96\": container with ID starting with 9e39f1866a90ff5e533ca3990331bc115d46656225c4f17c740b6e3f46bf2f96 not found: ID does not exist"
Jan 12 13:22:22 crc kubenswrapper[4580]: I0112 13:22:22.815859 4580 scope.go:117] "RemoveContainer" containerID="4bf51608ffbb6382f8a1657a6350aa5eb00895dc10e9f095ae18ca64dc498fdf"
Jan 12 13:22:22 crc kubenswrapper[4580]: E0112 13:22:22.816127 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bf51608ffbb6382f8a1657a6350aa5eb00895dc10e9f095ae18ca64dc498fdf\": container with ID starting with 4bf51608ffbb6382f8a1657a6350aa5eb00895dc10e9f095ae18ca64dc498fdf not found: ID does not exist" containerID="4bf51608ffbb6382f8a1657a6350aa5eb00895dc10e9f095ae18ca64dc498fdf"
Jan 12 13:22:22 crc kubenswrapper[4580]: I0112 13:22:22.816162 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bf51608ffbb6382f8a1657a6350aa5eb00895dc10e9f095ae18ca64dc498fdf"} err="failed to get container status \"4bf51608ffbb6382f8a1657a6350aa5eb00895dc10e9f095ae18ca64dc498fdf\": rpc error: code = NotFound desc = could not find container \"4bf51608ffbb6382f8a1657a6350aa5eb00895dc10e9f095ae18ca64dc498fdf\": container with ID starting with 4bf51608ffbb6382f8a1657a6350aa5eb00895dc10e9f095ae18ca64dc498fdf not found: ID does not exist"
Jan 12 13:22:23 crc kubenswrapper[4580]: I0112 13:22:23.290360 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82d9d66d-ff92-4164-96a9-c82a919cce00" path="/var/lib/kubelet/pods/82d9d66d-ff92-4164-96a9-c82a919cce00/volumes"
Jan 12 13:22:23 crc kubenswrapper[4580]: I0112 13:22:23.291173 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db7a06f9-1a77-4a21-ac05-0c73655fa8d0" path="/var/lib/kubelet/pods/db7a06f9-1a77-4a21-ac05-0c73655fa8d0/volumes"
Jan 12 13:22:23 crc kubenswrapper[4580]: I0112 13:22:23.291752 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fca18973-3724-49c5-b8c4-cf6beb66c288" path="/var/lib/kubelet/pods/fca18973-3724-49c5-b8c4-cf6beb66c288/volumes"
Jan 12 13:22:23 crc kubenswrapper[4580]: I0112 13:22:23.470677 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-8699b457dd-z2fkt"
Jan 12 13:22:23 crc kubenswrapper[4580]: I0112 13:22:23.509023 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-54b765ff94-66rkz"
Jan 12 13:22:23 crc kubenswrapper[4580]: I0112 13:22:23.640612 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zfsd9" podUID="70ecad68-8604-45ba-84e5-4a0aa1d7464a" containerName="registry-server" containerID="cri-o://bad5d21f59d5cb1f71d1e9c9ee58ac0efa7bba0c049695d32566317c4f9140e3" gracePeriod=2
Jan 12 13:22:24 crc kubenswrapper[4580]: I0112 13:22:24.068474 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zfsd9"
Jan 12 13:22:24 crc kubenswrapper[4580]: I0112 13:22:24.128931 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70ecad68-8604-45ba-84e5-4a0aa1d7464a-catalog-content\") pod \"70ecad68-8604-45ba-84e5-4a0aa1d7464a\" (UID: \"70ecad68-8604-45ba-84e5-4a0aa1d7464a\") "
Jan 12 13:22:24 crc kubenswrapper[4580]: I0112 13:22:24.153293 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70ecad68-8604-45ba-84e5-4a0aa1d7464a-utilities\") pod \"70ecad68-8604-45ba-84e5-4a0aa1d7464a\" (UID: \"70ecad68-8604-45ba-84e5-4a0aa1d7464a\") "
Jan 12 13:22:24 crc kubenswrapper[4580]: I0112 13:22:24.153338 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssp42\" (UniqueName: \"kubernetes.io/projected/70ecad68-8604-45ba-84e5-4a0aa1d7464a-kube-api-access-ssp42\") pod \"70ecad68-8604-45ba-84e5-4a0aa1d7464a\" (UID: \"70ecad68-8604-45ba-84e5-4a0aa1d7464a\") "
Jan 12 13:22:24 crc kubenswrapper[4580]: I0112 13:22:24.154988 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70ecad68-8604-45ba-84e5-4a0aa1d7464a-utilities" (OuterVolumeSpecName: "utilities") pod "70ecad68-8604-45ba-84e5-4a0aa1d7464a" (UID: "70ecad68-8604-45ba-84e5-4a0aa1d7464a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 12 13:22:24 crc kubenswrapper[4580]: I0112 13:22:24.160578 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70ecad68-8604-45ba-84e5-4a0aa1d7464a-kube-api-access-ssp42" (OuterVolumeSpecName: "kube-api-access-ssp42") pod "70ecad68-8604-45ba-84e5-4a0aa1d7464a" (UID: "70ecad68-8604-45ba-84e5-4a0aa1d7464a"). InnerVolumeSpecName "kube-api-access-ssp42". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 13:22:24 crc kubenswrapper[4580]: I0112 13:22:24.257260 4580 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70ecad68-8604-45ba-84e5-4a0aa1d7464a-utilities\") on node \"crc\" DevicePath \"\""
Jan 12 13:22:24 crc kubenswrapper[4580]: I0112 13:22:24.257303 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ssp42\" (UniqueName: \"kubernetes.io/projected/70ecad68-8604-45ba-84e5-4a0aa1d7464a-kube-api-access-ssp42\") on node \"crc\" DevicePath \"\""
Jan 12 13:22:24 crc kubenswrapper[4580]: I0112 13:22:24.274876 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70ecad68-8604-45ba-84e5-4a0aa1d7464a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "70ecad68-8604-45ba-84e5-4a0aa1d7464a" (UID: "70ecad68-8604-45ba-84e5-4a0aa1d7464a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 12 13:22:24 crc kubenswrapper[4580]: I0112 13:22:24.359005 4580 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70ecad68-8604-45ba-84e5-4a0aa1d7464a-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 12 13:22:24 crc kubenswrapper[4580]: I0112 13:22:24.653122 4580 generic.go:334] "Generic (PLEG): container finished" podID="70ecad68-8604-45ba-84e5-4a0aa1d7464a" containerID="bad5d21f59d5cb1f71d1e9c9ee58ac0efa7bba0c049695d32566317c4f9140e3" exitCode=0
Jan 12 13:22:24 crc kubenswrapper[4580]: I0112 13:22:24.653187 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zfsd9" event={"ID":"70ecad68-8604-45ba-84e5-4a0aa1d7464a","Type":"ContainerDied","Data":"bad5d21f59d5cb1f71d1e9c9ee58ac0efa7bba0c049695d32566317c4f9140e3"}
Jan 12 13:22:24 crc kubenswrapper[4580]: I0112 13:22:24.653225 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zfsd9" event={"ID":"70ecad68-8604-45ba-84e5-4a0aa1d7464a","Type":"ContainerDied","Data":"75257d67a7c6550c96fb50afd4366407695134d61e9eb3089dc23d617cd9a06d"}
Jan 12 13:22:24 crc kubenswrapper[4580]: I0112 13:22:24.653242 4580 scope.go:117] "RemoveContainer" containerID="bad5d21f59d5cb1f71d1e9c9ee58ac0efa7bba0c049695d32566317c4f9140e3"
Jan 12 13:22:24 crc kubenswrapper[4580]: I0112 13:22:24.653338 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zfsd9"
Jan 12 13:22:24 crc kubenswrapper[4580]: I0112 13:22:24.671288 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"063b2f8d-9ef8-4977-9071-ebb135ebc819","Type":"ContainerStarted","Data":"f9260730ead159ba2e111b6438b922e355e0d082b854dbd8ab202f8cd763fe3f"}
Jan 12 13:22:24 crc kubenswrapper[4580]: I0112 13:22:24.671757 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 12 13:22:24 crc kubenswrapper[4580]: I0112 13:22:24.683487 4580 generic.go:334] "Generic (PLEG): container finished" podID="5b622df8-141e-468d-8f8d-86622f286566" containerID="9d8b606903131cb31075845e04f2a766ad2985affb6777d5c929dc3513c2d8bc" exitCode=0
Jan 12 13:22:24 crc kubenswrapper[4580]: I0112 13:22:24.691580 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-596bbb8b6-5jfvl" event={"ID":"5b622df8-141e-468d-8f8d-86622f286566","Type":"ContainerDied","Data":"9d8b606903131cb31075845e04f2a766ad2985affb6777d5c929dc3513c2d8bc"}
Jan 12 13:22:24 crc kubenswrapper[4580]: I0112 13:22:24.691681 4580 scope.go:117] "RemoveContainer" containerID="cc1781e95c85dc8b31d65e5ceb8a81a9e5546e7507206e0159c60e8031b528d9"
Jan 12 13:22:24 crc kubenswrapper[4580]: I0112 13:22:24.698056 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zfsd9"]
Jan 12 13:22:24 crc kubenswrapper[4580]: I0112 13:22:24.742474 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zfsd9"]
Jan 12 13:22:24 crc kubenswrapper[4580]: I0112 13:22:24.743195 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.442985462 podStartE2EDuration="6.743175222s" podCreationTimestamp="2026-01-12 13:22:18 +0000 UTC" firstStartedPulling="2026-01-12 13:22:19.611381638 +0000 UTC m=+938.655600327" lastFinishedPulling="2026-01-12 13:22:23.911571397 +0000 UTC m=+942.955790087" observedRunningTime="2026-01-12 13:22:24.704768166 +0000 UTC m=+943.748986856" watchObservedRunningTime="2026-01-12 13:22:24.743175222 +0000 UTC m=+943.787393913"
Jan 12 13:22:24 crc kubenswrapper[4580]: I0112 13:22:24.768237 4580 scope.go:117] "RemoveContainer" containerID="3acdbcfc506183ce7e9ef04411db283a84406437dd2448998e1046f4bdaa9803"
Jan 12 13:22:24 crc kubenswrapper[4580]: I0112 13:22:24.876619 4580 scope.go:117] "RemoveContainer" containerID="bad5d21f59d5cb1f71d1e9c9ee58ac0efa7bba0c049695d32566317c4f9140e3"
Jan 12 13:22:24 crc kubenswrapper[4580]: E0112 13:22:24.877519 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bad5d21f59d5cb1f71d1e9c9ee58ac0efa7bba0c049695d32566317c4f9140e3\": container with ID starting with bad5d21f59d5cb1f71d1e9c9ee58ac0efa7bba0c049695d32566317c4f9140e3 not found: ID does not exist" containerID="bad5d21f59d5cb1f71d1e9c9ee58ac0efa7bba0c049695d32566317c4f9140e3"
Jan 12 13:22:24 crc kubenswrapper[4580]: I0112 13:22:24.877558 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bad5d21f59d5cb1f71d1e9c9ee58ac0efa7bba0c049695d32566317c4f9140e3"} err="failed to get container status \"bad5d21f59d5cb1f71d1e9c9ee58ac0efa7bba0c049695d32566317c4f9140e3\": rpc error: code = NotFound desc = could not find container \"bad5d21f59d5cb1f71d1e9c9ee58ac0efa7bba0c049695d32566317c4f9140e3\": container with ID starting with bad5d21f59d5cb1f71d1e9c9ee58ac0efa7bba0c049695d32566317c4f9140e3 not found: ID does not exist"
Jan 12 13:22:24 crc kubenswrapper[4580]: I0112 13:22:24.877610 4580 scope.go:117] "RemoveContainer" containerID="cc1781e95c85dc8b31d65e5ceb8a81a9e5546e7507206e0159c60e8031b528d9"
Jan 12 13:22:24 crc kubenswrapper[4580]: E0112 13:22:24.877892 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc1781e95c85dc8b31d65e5ceb8a81a9e5546e7507206e0159c60e8031b528d9\": container with ID starting with cc1781e95c85dc8b31d65e5ceb8a81a9e5546e7507206e0159c60e8031b528d9 not found: ID does not exist" containerID="cc1781e95c85dc8b31d65e5ceb8a81a9e5546e7507206e0159c60e8031b528d9"
Jan 12 13:22:24 crc kubenswrapper[4580]: I0112 13:22:24.877940 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc1781e95c85dc8b31d65e5ceb8a81a9e5546e7507206e0159c60e8031b528d9"} err="failed to get container status \"cc1781e95c85dc8b31d65e5ceb8a81a9e5546e7507206e0159c60e8031b528d9\": rpc error: code = NotFound desc = could not find container \"cc1781e95c85dc8b31d65e5ceb8a81a9e5546e7507206e0159c60e8031b528d9\": container with ID starting with cc1781e95c85dc8b31d65e5ceb8a81a9e5546e7507206e0159c60e8031b528d9 not found: ID does not exist"
Jan 12 13:22:24 crc kubenswrapper[4580]: I0112 13:22:24.877955 4580 scope.go:117] "RemoveContainer" containerID="3acdbcfc506183ce7e9ef04411db283a84406437dd2448998e1046f4bdaa9803"
Jan 12 13:22:24 crc kubenswrapper[4580]: E0112 13:22:24.878185 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3acdbcfc506183ce7e9ef04411db283a84406437dd2448998e1046f4bdaa9803\": container with ID starting with 3acdbcfc506183ce7e9ef04411db283a84406437dd2448998e1046f4bdaa9803 not found: ID does not exist" containerID="3acdbcfc506183ce7e9ef04411db283a84406437dd2448998e1046f4bdaa9803"
Jan 12 13:22:24 crc kubenswrapper[4580]: I0112 13:22:24.878235 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3acdbcfc506183ce7e9ef04411db283a84406437dd2448998e1046f4bdaa9803"} err="failed to get container status \"3acdbcfc506183ce7e9ef04411db283a84406437dd2448998e1046f4bdaa9803\": rpc error: code = NotFound desc = could not find container \"3acdbcfc506183ce7e9ef04411db283a84406437dd2448998e1046f4bdaa9803\": container with ID starting with 3acdbcfc506183ce7e9ef04411db283a84406437dd2448998e1046f4bdaa9803 not found: ID does not exist"
Jan 12 13:22:24 crc kubenswrapper[4580]: I0112 13:22:24.901387 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-596bbb8b6-5jfvl"
Jan 12 13:22:24 crc kubenswrapper[4580]: I0112 13:22:24.938846 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-75dbb546bf-qwbpt"
Jan 12 13:22:24 crc kubenswrapper[4580]: I0112 13:22:24.974862 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvb7w\" (UniqueName: \"kubernetes.io/projected/5b622df8-141e-468d-8f8d-86622f286566-kube-api-access-qvb7w\") pod \"5b622df8-141e-468d-8f8d-86622f286566\" (UID: \"5b622df8-141e-468d-8f8d-86622f286566\") "
Jan 12 13:22:24 crc kubenswrapper[4580]: I0112 13:22:24.975016 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b622df8-141e-468d-8f8d-86622f286566-combined-ca-bundle\") pod \"5b622df8-141e-468d-8f8d-86622f286566\" (UID: \"5b622df8-141e-468d-8f8d-86622f286566\") "
Jan 12 13:22:24 crc kubenswrapper[4580]: I0112 13:22:24.975130 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b622df8-141e-468d-8f8d-86622f286566-ovndb-tls-certs\") pod \"5b622df8-141e-468d-8f8d-86622f286566\" (UID: \"5b622df8-141e-468d-8f8d-86622f286566\") "
Jan 12 13:22:24 crc kubenswrapper[4580]: I0112 13:22:24.975190 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5b622df8-141e-468d-8f8d-86622f286566-httpd-config\") pod \"5b622df8-141e-468d-8f8d-86622f286566\" (UID: \"5b622df8-141e-468d-8f8d-86622f286566\") "
Jan 12 13:22:24 crc kubenswrapper[4580]: I0112 13:22:24.975319 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5b622df8-141e-468d-8f8d-86622f286566-config\") pod \"5b622df8-141e-468d-8f8d-86622f286566\" (UID: \"5b622df8-141e-468d-8f8d-86622f286566\") "
Jan 12 13:22:24 crc kubenswrapper[4580]: I0112 13:22:24.984410 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b622df8-141e-468d-8f8d-86622f286566-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "5b622df8-141e-468d-8f8d-86622f286566" (UID: "5b622df8-141e-468d-8f8d-86622f286566"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:22:24 crc kubenswrapper[4580]: I0112 13:22:24.985531 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-685444497c-q8tzg"]
Jan 12 13:22:24 crc kubenswrapper[4580]: I0112 13:22:24.985719 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-685444497c-q8tzg" podUID="0ab93f8e-1504-47d9-af38-197cbcc54feb" containerName="dnsmasq-dns" containerID="cri-o://5d14b56a9d104cd5211c67e006b8f05f173d06485ec4e2993d8927872d806396" gracePeriod=10
Jan 12 13:22:24 crc kubenswrapper[4580]: I0112 13:22:24.991004 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b622df8-141e-468d-8f8d-86622f286566-kube-api-access-qvb7w" (OuterVolumeSpecName: "kube-api-access-qvb7w") pod "5b622df8-141e-468d-8f8d-86622f286566" (UID: "5b622df8-141e-468d-8f8d-86622f286566"). InnerVolumeSpecName "kube-api-access-qvb7w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 13:22:25 crc kubenswrapper[4580]: I0112 13:22:25.064443 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7ffb74c678-h5ddl"
Jan 12 13:22:25 crc kubenswrapper[4580]: I0112 13:22:25.079438 4580 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5b622df8-141e-468d-8f8d-86622f286566-httpd-config\") on node \"crc\" DevicePath \"\""
Jan 12 13:22:25 crc kubenswrapper[4580]: I0112 13:22:25.079477 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvb7w\" (UniqueName: \"kubernetes.io/projected/5b622df8-141e-468d-8f8d-86622f286566-kube-api-access-qvb7w\") on node \"crc\" DevicePath \"\""
Jan 12 13:22:25 crc kubenswrapper[4580]: I0112 13:22:25.085486 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b622df8-141e-468d-8f8d-86622f286566-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5b622df8-141e-468d-8f8d-86622f286566" (UID: "5b622df8-141e-468d-8f8d-86622f286566"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:22:25 crc kubenswrapper[4580]: I0112 13:22:25.129185 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b622df8-141e-468d-8f8d-86622f286566-config" (OuterVolumeSpecName: "config") pod "5b622df8-141e-468d-8f8d-86622f286566" (UID: "5b622df8-141e-468d-8f8d-86622f286566"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:22:25 crc kubenswrapper[4580]: I0112 13:22:25.130198 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b622df8-141e-468d-8f8d-86622f286566-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "5b622df8-141e-468d-8f8d-86622f286566" (UID: "5b622df8-141e-468d-8f8d-86622f286566"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:22:25 crc kubenswrapper[4580]: I0112 13:22:25.182392 4580 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/5b622df8-141e-468d-8f8d-86622f286566-config\") on node \"crc\" DevicePath \"\""
Jan 12 13:22:25 crc kubenswrapper[4580]: I0112 13:22:25.182433 4580 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b622df8-141e-468d-8f8d-86622f286566-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 12 13:22:25 crc kubenswrapper[4580]: I0112 13:22:25.182446 4580 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b622df8-141e-468d-8f8d-86622f286566-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 12 13:22:25 crc kubenswrapper[4580]: I0112 13:22:25.314005 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70ecad68-8604-45ba-84e5-4a0aa1d7464a" path="/var/lib/kubelet/pods/70ecad68-8604-45ba-84e5-4a0aa1d7464a/volumes"
Jan 12 13:22:25 crc kubenswrapper[4580]: I0112 13:22:25.314814 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7ffb74c678-h5ddl"
Jan 12 13:22:25 crc kubenswrapper[4580]: I0112 13:22:25.322855 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Jan 12 13:22:25 crc kubenswrapper[4580]: I0112 13:22:25.374602 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-8699b457dd-z2fkt"
Jan 12 13:22:25 crc kubenswrapper[4580]: I0112 13:22:25.416620 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-54b765ff94-66rkz"
Jan 12 13:22:25 crc kubenswrapper[4580]: I0112 13:22:25.423997 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 12 13:22:25 crc kubenswrapper[4580]: I0112 13:22:25.491349 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-54b765ff94-66rkz"]
Jan 12 13:22:25 crc kubenswrapper[4580]: I0112 13:22:25.634916 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-685444497c-q8tzg"
Jan 12 13:22:25 crc kubenswrapper[4580]: I0112 13:22:25.700652 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqmkl\" (UniqueName: \"kubernetes.io/projected/0ab93f8e-1504-47d9-af38-197cbcc54feb-kube-api-access-dqmkl\") pod \"0ab93f8e-1504-47d9-af38-197cbcc54feb\" (UID: \"0ab93f8e-1504-47d9-af38-197cbcc54feb\") "
Jan 12 13:22:25 crc kubenswrapper[4580]: I0112 13:22:25.700726 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ab93f8e-1504-47d9-af38-197cbcc54feb-ovsdbserver-nb\") pod \"0ab93f8e-1504-47d9-af38-197cbcc54feb\" (UID: \"0ab93f8e-1504-47d9-af38-197cbcc54feb\") "
Jan 12 13:22:25 crc kubenswrapper[4580]: I0112 13:22:25.700839 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0ab93f8e-1504-47d9-af38-197cbcc54feb-dns-swift-storage-0\") pod \"0ab93f8e-1504-47d9-af38-197cbcc54feb\" (UID: \"0ab93f8e-1504-47d9-af38-197cbcc54feb\") "
Jan 12 13:22:25 crc kubenswrapper[4580]: I0112 13:22:25.700912 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ab93f8e-1504-47d9-af38-197cbcc54feb-config\") pod \"0ab93f8e-1504-47d9-af38-197cbcc54feb\" (UID: \"0ab93f8e-1504-47d9-af38-197cbcc54feb\") "
Jan 12 13:22:25 crc kubenswrapper[4580]: I0112 13:22:25.701121 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ab93f8e-1504-47d9-af38-197cbcc54feb-ovsdbserver-sb\") pod \"0ab93f8e-1504-47d9-af38-197cbcc54feb\" (UID: \"0ab93f8e-1504-47d9-af38-197cbcc54feb\") "
Jan 12 13:22:25 crc kubenswrapper[4580]: I0112 13:22:25.701252 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ab93f8e-1504-47d9-af38-197cbcc54feb-dns-svc\") pod \"0ab93f8e-1504-47d9-af38-197cbcc54feb\" (UID: \"0ab93f8e-1504-47d9-af38-197cbcc54feb\") "
Jan 12 13:22:25 crc kubenswrapper[4580]: I0112 13:22:25.702251 4580 generic.go:334] "Generic (PLEG): container finished" podID="0ab93f8e-1504-47d9-af38-197cbcc54feb" containerID="5d14b56a9d104cd5211c67e006b8f05f173d06485ec4e2993d8927872d806396" exitCode=0
Jan 12 13:22:25 crc kubenswrapper[4580]: I0112 13:22:25.702322 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685444497c-q8tzg" event={"ID":"0ab93f8e-1504-47d9-af38-197cbcc54feb","Type":"ContainerDied","Data":"5d14b56a9d104cd5211c67e006b8f05f173d06485ec4e2993d8927872d806396"}
Jan 12 13:22:25 crc kubenswrapper[4580]: I0112 13:22:25.702355 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685444497c-q8tzg" event={"ID":"0ab93f8e-1504-47d9-af38-197cbcc54feb","Type":"ContainerDied","Data":"c59dd887a0142c00616d2cc514e1d7393d73aeefaea776d73f2c7a09f5026649"}
Jan 12 13:22:25 crc kubenswrapper[4580]: I0112 13:22:25.702376 4580 scope.go:117] "RemoveContainer" containerID="5d14b56a9d104cd5211c67e006b8f05f173d06485ec4e2993d8927872d806396"
Jan 12 13:22:25 crc kubenswrapper[4580]: I0112 13:22:25.702490 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-685444497c-q8tzg"
Jan 12 13:22:25 crc kubenswrapper[4580]: I0112 13:22:25.707227 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ab93f8e-1504-47d9-af38-197cbcc54feb-kube-api-access-dqmkl" (OuterVolumeSpecName: "kube-api-access-dqmkl") pod "0ab93f8e-1504-47d9-af38-197cbcc54feb" (UID: "0ab93f8e-1504-47d9-af38-197cbcc54feb"). InnerVolumeSpecName "kube-api-access-dqmkl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 13:22:25 crc kubenswrapper[4580]: I0112 13:22:25.722419 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-54b765ff94-66rkz" podUID="11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2" containerName="horizon-log" containerID="cri-o://0617e22c043b8c6ed1bfde2bc79332362cf2b73628cab3f3c05f7003eb945ac7" gracePeriod=30
Jan 12 13:22:25 crc kubenswrapper[4580]: I0112 13:22:25.722529 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-596bbb8b6-5jfvl"
Jan 12 13:22:25 crc kubenswrapper[4580]: I0112 13:22:25.723116 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-596bbb8b6-5jfvl" event={"ID":"5b622df8-141e-468d-8f8d-86622f286566","Type":"ContainerDied","Data":"9446eeab8602add5285541e110e9ad9ad5950a9a181dfdf72a56e834a1cb735a"}
Jan 12 13:22:25 crc kubenswrapper[4580]: I0112 13:22:25.724205 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="9b1540ce-a351-4090-bf54-e253994d9020" containerName="cinder-scheduler" containerID="cri-o://6780bf1d321e39a915fa17628f8a453551d58330c6fcb0842453becea9dd981e" gracePeriod=30
Jan 12 13:22:25 crc kubenswrapper[4580]: I0112 13:22:25.724513 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-54b765ff94-66rkz" podUID="11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2" containerName="horizon" containerID="cri-o://59195d2de454fd2603098fbc1fcc86559032303c06579ca821d3e24b04357260" gracePeriod=30
Jan 12 13:22:25 crc kubenswrapper[4580]: I0112 13:22:25.724679 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="9b1540ce-a351-4090-bf54-e253994d9020" containerName="probe" containerID="cri-o://a0212f020ebb7d2d0237a9098b998381b52b962bca231875713d695248e4c7cd" gracePeriod=30
Jan 12 13:22:25 crc kubenswrapper[4580]: I0112 13:22:25.755666 4580 scope.go:117] "RemoveContainer" containerID="674b20e77d0a2fe8c0f4d18729f80eb7e26b739a83098c781c59d507bbfc4761"
Jan 12 13:22:25 crc kubenswrapper[4580]: I0112 13:22:25.768065 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-596bbb8b6-5jfvl"]
Jan 12 13:22:25 crc kubenswrapper[4580]: I0112 13:22:25.773005 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-596bbb8b6-5jfvl"]
Jan 12 13:22:25 crc kubenswrapper[4580]: I0112 13:22:25.785705 4580 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ab93f8e-1504-47d9-af38-197cbcc54feb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0ab93f8e-1504-47d9-af38-197cbcc54feb" (UID: "0ab93f8e-1504-47d9-af38-197cbcc54feb"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:22:25 crc kubenswrapper[4580]: I0112 13:22:25.785749 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ab93f8e-1504-47d9-af38-197cbcc54feb-config" (OuterVolumeSpecName: "config") pod "0ab93f8e-1504-47d9-af38-197cbcc54feb" (UID: "0ab93f8e-1504-47d9-af38-197cbcc54feb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:22:25 crc kubenswrapper[4580]: I0112 13:22:25.809327 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqmkl\" (UniqueName: \"kubernetes.io/projected/0ab93f8e-1504-47d9-af38-197cbcc54feb-kube-api-access-dqmkl\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:25 crc kubenswrapper[4580]: I0112 13:22:25.809352 4580 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ab93f8e-1504-47d9-af38-197cbcc54feb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:25 crc kubenswrapper[4580]: I0112 13:22:25.809361 4580 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ab93f8e-1504-47d9-af38-197cbcc54feb-config\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:25 crc kubenswrapper[4580]: I0112 13:22:25.819634 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ab93f8e-1504-47d9-af38-197cbcc54feb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0ab93f8e-1504-47d9-af38-197cbcc54feb" (UID: "0ab93f8e-1504-47d9-af38-197cbcc54feb"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:22:25 crc kubenswrapper[4580]: I0112 13:22:25.824466 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ab93f8e-1504-47d9-af38-197cbcc54feb-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0ab93f8e-1504-47d9-af38-197cbcc54feb" (UID: "0ab93f8e-1504-47d9-af38-197cbcc54feb"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:22:25 crc kubenswrapper[4580]: I0112 13:22:25.843995 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ab93f8e-1504-47d9-af38-197cbcc54feb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0ab93f8e-1504-47d9-af38-197cbcc54feb" (UID: "0ab93f8e-1504-47d9-af38-197cbcc54feb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:22:25 crc kubenswrapper[4580]: I0112 13:22:25.891197 4580 scope.go:117] "RemoveContainer" containerID="5d14b56a9d104cd5211c67e006b8f05f173d06485ec4e2993d8927872d806396" Jan 12 13:22:25 crc kubenswrapper[4580]: E0112 13:22:25.891575 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d14b56a9d104cd5211c67e006b8f05f173d06485ec4e2993d8927872d806396\": container with ID starting with 5d14b56a9d104cd5211c67e006b8f05f173d06485ec4e2993d8927872d806396 not found: ID does not exist" containerID="5d14b56a9d104cd5211c67e006b8f05f173d06485ec4e2993d8927872d806396" Jan 12 13:22:25 crc kubenswrapper[4580]: I0112 13:22:25.891605 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d14b56a9d104cd5211c67e006b8f05f173d06485ec4e2993d8927872d806396"} err="failed to get container status \"5d14b56a9d104cd5211c67e006b8f05f173d06485ec4e2993d8927872d806396\": rpc error: code = NotFound desc = could not find container 
\"5d14b56a9d104cd5211c67e006b8f05f173d06485ec4e2993d8927872d806396\": container with ID starting with 5d14b56a9d104cd5211c67e006b8f05f173d06485ec4e2993d8927872d806396 not found: ID does not exist" Jan 12 13:22:25 crc kubenswrapper[4580]: I0112 13:22:25.891627 4580 scope.go:117] "RemoveContainer" containerID="674b20e77d0a2fe8c0f4d18729f80eb7e26b739a83098c781c59d507bbfc4761" Jan 12 13:22:25 crc kubenswrapper[4580]: E0112 13:22:25.891985 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"674b20e77d0a2fe8c0f4d18729f80eb7e26b739a83098c781c59d507bbfc4761\": container with ID starting with 674b20e77d0a2fe8c0f4d18729f80eb7e26b739a83098c781c59d507bbfc4761 not found: ID does not exist" containerID="674b20e77d0a2fe8c0f4d18729f80eb7e26b739a83098c781c59d507bbfc4761" Jan 12 13:22:25 crc kubenswrapper[4580]: I0112 13:22:25.892009 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"674b20e77d0a2fe8c0f4d18729f80eb7e26b739a83098c781c59d507bbfc4761"} err="failed to get container status \"674b20e77d0a2fe8c0f4d18729f80eb7e26b739a83098c781c59d507bbfc4761\": rpc error: code = NotFound desc = could not find container \"674b20e77d0a2fe8c0f4d18729f80eb7e26b739a83098c781c59d507bbfc4761\": container with ID starting with 674b20e77d0a2fe8c0f4d18729f80eb7e26b739a83098c781c59d507bbfc4761 not found: ID does not exist" Jan 12 13:22:25 crc kubenswrapper[4580]: I0112 13:22:25.892023 4580 scope.go:117] "RemoveContainer" containerID="cfd0f8fb73bc0f8bfbda09f6ff39be45a13c3eee5aa8ecf09832d7a2b96cdaa7" Jan 12 13:22:25 crc kubenswrapper[4580]: I0112 13:22:25.917346 4580 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0ab93f8e-1504-47d9-af38-197cbcc54feb-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:25 crc kubenswrapper[4580]: I0112 13:22:25.917369 4580 reconciler_common.go:293] "Volume 
detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ab93f8e-1504-47d9-af38-197cbcc54feb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:25 crc kubenswrapper[4580]: I0112 13:22:25.917379 4580 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ab93f8e-1504-47d9-af38-197cbcc54feb-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:25 crc kubenswrapper[4580]: I0112 13:22:25.924434 4580 scope.go:117] "RemoveContainer" containerID="9d8b606903131cb31075845e04f2a766ad2985affb6777d5c929dc3513c2d8bc" Jan 12 13:22:26 crc kubenswrapper[4580]: I0112 13:22:26.027012 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-685444497c-q8tzg"] Jan 12 13:22:26 crc kubenswrapper[4580]: I0112 13:22:26.036214 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-685444497c-q8tzg"] Jan 12 13:22:26 crc kubenswrapper[4580]: I0112 13:22:26.052134 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-579c8f556d-z7gld" Jan 12 13:22:26 crc kubenswrapper[4580]: I0112 13:22:26.281944 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-579c8f556d-z7gld" Jan 12 13:22:26 crc kubenswrapper[4580]: I0112 13:22:26.734073 4580 generic.go:334] "Generic (PLEG): container finished" podID="9b1540ce-a351-4090-bf54-e253994d9020" containerID="a0212f020ebb7d2d0237a9098b998381b52b962bca231875713d695248e4c7cd" exitCode=0 Jan 12 13:22:26 crc kubenswrapper[4580]: I0112 13:22:26.734165 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9b1540ce-a351-4090-bf54-e253994d9020","Type":"ContainerDied","Data":"a0212f020ebb7d2d0237a9098b998381b52b962bca231875713d695248e4c7cd"} Jan 12 13:22:27 crc kubenswrapper[4580]: I0112 13:22:27.248901 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/barbican-api-75699d8f8b-jqxcw" Jan 12 13:22:27 crc kubenswrapper[4580]: I0112 13:22:27.265071 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-75699d8f8b-jqxcw" Jan 12 13:22:27 crc kubenswrapper[4580]: I0112 13:22:27.297962 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ab93f8e-1504-47d9-af38-197cbcc54feb" path="/var/lib/kubelet/pods/0ab93f8e-1504-47d9-af38-197cbcc54feb/volumes" Jan 12 13:22:27 crc kubenswrapper[4580]: I0112 13:22:27.298589 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b622df8-141e-468d-8f8d-86622f286566" path="/var/lib/kubelet/pods/5b622df8-141e-468d-8f8d-86622f286566/volumes" Jan 12 13:22:27 crc kubenswrapper[4580]: I0112 13:22:27.352501 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-579c8f556d-z7gld"] Jan 12 13:22:27 crc kubenswrapper[4580]: I0112 13:22:27.744504 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-579c8f556d-z7gld" podUID="fcddf9d2-2130-4f76-9318-373ba59d2f70" containerName="barbican-api-log" containerID="cri-o://4a87d24a66760824c54dc0053a19af6dea792ec98f2b12e041aa64973bff2099" gracePeriod=30 Jan 12 13:22:27 crc kubenswrapper[4580]: I0112 13:22:27.744608 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-579c8f556d-z7gld" podUID="fcddf9d2-2130-4f76-9318-373ba59d2f70" containerName="barbican-api" containerID="cri-o://97c3d4a59048007a403d6af079e18033186e21ff95599f9dac4bded884dc5607" gracePeriod=30 Jan 12 13:22:27 crc kubenswrapper[4580]: I0112 13:22:27.752682 4580 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-579c8f556d-z7gld" podUID="fcddf9d2-2130-4f76-9318-373ba59d2f70" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.161:9311/healthcheck\": EOF" Jan 12 13:22:28 crc kubenswrapper[4580]: I0112 
13:22:28.685338 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 12 13:22:28 crc kubenswrapper[4580]: I0112 13:22:28.694455 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9b1540ce-a351-4090-bf54-e253994d9020-config-data-custom\") pod \"9b1540ce-a351-4090-bf54-e253994d9020\" (UID: \"9b1540ce-a351-4090-bf54-e253994d9020\") " Jan 12 13:22:28 crc kubenswrapper[4580]: I0112 13:22:28.694597 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b1540ce-a351-4090-bf54-e253994d9020-config-data\") pod \"9b1540ce-a351-4090-bf54-e253994d9020\" (UID: \"9b1540ce-a351-4090-bf54-e253994d9020\") " Jan 12 13:22:28 crc kubenswrapper[4580]: I0112 13:22:28.694631 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b1540ce-a351-4090-bf54-e253994d9020-combined-ca-bundle\") pod \"9b1540ce-a351-4090-bf54-e253994d9020\" (UID: \"9b1540ce-a351-4090-bf54-e253994d9020\") " Jan 12 13:22:28 crc kubenswrapper[4580]: I0112 13:22:28.694664 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b1540ce-a351-4090-bf54-e253994d9020-scripts\") pod \"9b1540ce-a351-4090-bf54-e253994d9020\" (UID: \"9b1540ce-a351-4090-bf54-e253994d9020\") " Jan 12 13:22:28 crc kubenswrapper[4580]: I0112 13:22:28.694707 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2qhw\" (UniqueName: \"kubernetes.io/projected/9b1540ce-a351-4090-bf54-e253994d9020-kube-api-access-t2qhw\") pod \"9b1540ce-a351-4090-bf54-e253994d9020\" (UID: \"9b1540ce-a351-4090-bf54-e253994d9020\") " Jan 12 13:22:28 crc kubenswrapper[4580]: I0112 13:22:28.694886 4580 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9b1540ce-a351-4090-bf54-e253994d9020-etc-machine-id\") pod \"9b1540ce-a351-4090-bf54-e253994d9020\" (UID: \"9b1540ce-a351-4090-bf54-e253994d9020\") " Jan 12 13:22:28 crc kubenswrapper[4580]: I0112 13:22:28.695969 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b1540ce-a351-4090-bf54-e253994d9020-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "9b1540ce-a351-4090-bf54-e253994d9020" (UID: "9b1540ce-a351-4090-bf54-e253994d9020"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 12 13:22:28 crc kubenswrapper[4580]: I0112 13:22:28.703084 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b1540ce-a351-4090-bf54-e253994d9020-scripts" (OuterVolumeSpecName: "scripts") pod "9b1540ce-a351-4090-bf54-e253994d9020" (UID: "9b1540ce-a351-4090-bf54-e253994d9020"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:22:28 crc kubenswrapper[4580]: I0112 13:22:28.708503 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b1540ce-a351-4090-bf54-e253994d9020-kube-api-access-t2qhw" (OuterVolumeSpecName: "kube-api-access-t2qhw") pod "9b1540ce-a351-4090-bf54-e253994d9020" (UID: "9b1540ce-a351-4090-bf54-e253994d9020"). InnerVolumeSpecName "kube-api-access-t2qhw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:22:28 crc kubenswrapper[4580]: I0112 13:22:28.708701 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b1540ce-a351-4090-bf54-e253994d9020-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9b1540ce-a351-4090-bf54-e253994d9020" (UID: "9b1540ce-a351-4090-bf54-e253994d9020"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:22:28 crc kubenswrapper[4580]: I0112 13:22:28.757743 4580 generic.go:334] "Generic (PLEG): container finished" podID="fcddf9d2-2130-4f76-9318-373ba59d2f70" containerID="4a87d24a66760824c54dc0053a19af6dea792ec98f2b12e041aa64973bff2099" exitCode=143 Jan 12 13:22:28 crc kubenswrapper[4580]: I0112 13:22:28.757824 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-579c8f556d-z7gld" event={"ID":"fcddf9d2-2130-4f76-9318-373ba59d2f70","Type":"ContainerDied","Data":"4a87d24a66760824c54dc0053a19af6dea792ec98f2b12e041aa64973bff2099"} Jan 12 13:22:28 crc kubenswrapper[4580]: I0112 13:22:28.758225 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b1540ce-a351-4090-bf54-e253994d9020-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9b1540ce-a351-4090-bf54-e253994d9020" (UID: "9b1540ce-a351-4090-bf54-e253994d9020"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:22:28 crc kubenswrapper[4580]: I0112 13:22:28.760413 4580 generic.go:334] "Generic (PLEG): container finished" podID="9b1540ce-a351-4090-bf54-e253994d9020" containerID="6780bf1d321e39a915fa17628f8a453551d58330c6fcb0842453becea9dd981e" exitCode=0 Jan 12 13:22:28 crc kubenswrapper[4580]: I0112 13:22:28.760449 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9b1540ce-a351-4090-bf54-e253994d9020","Type":"ContainerDied","Data":"6780bf1d321e39a915fa17628f8a453551d58330c6fcb0842453becea9dd981e"} Jan 12 13:22:28 crc kubenswrapper[4580]: I0112 13:22:28.760488 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9b1540ce-a351-4090-bf54-e253994d9020","Type":"ContainerDied","Data":"8ee8e30dad2538326d486a689e38ca38aea5847a0a8277a2500719f8c0b4f4c4"} Jan 12 13:22:28 crc kubenswrapper[4580]: I0112 13:22:28.760511 4580 scope.go:117] "RemoveContainer" containerID="a0212f020ebb7d2d0237a9098b998381b52b962bca231875713d695248e4c7cd" Jan 12 13:22:28 crc kubenswrapper[4580]: I0112 13:22:28.760506 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 12 13:22:28 crc kubenswrapper[4580]: I0112 13:22:28.787309 4580 scope.go:117] "RemoveContainer" containerID="6780bf1d321e39a915fa17628f8a453551d58330c6fcb0842453becea9dd981e" Jan 12 13:22:28 crc kubenswrapper[4580]: I0112 13:22:28.794855 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b1540ce-a351-4090-bf54-e253994d9020-config-data" (OuterVolumeSpecName: "config-data") pod "9b1540ce-a351-4090-bf54-e253994d9020" (UID: "9b1540ce-a351-4090-bf54-e253994d9020"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:22:28 crc kubenswrapper[4580]: I0112 13:22:28.799468 4580 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9b1540ce-a351-4090-bf54-e253994d9020-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:28 crc kubenswrapper[4580]: I0112 13:22:28.799489 4580 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b1540ce-a351-4090-bf54-e253994d9020-config-data\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:28 crc kubenswrapper[4580]: I0112 13:22:28.799497 4580 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b1540ce-a351-4090-bf54-e253994d9020-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:28 crc kubenswrapper[4580]: I0112 13:22:28.799507 4580 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b1540ce-a351-4090-bf54-e253994d9020-scripts\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:28 crc kubenswrapper[4580]: I0112 13:22:28.799515 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2qhw\" (UniqueName: \"kubernetes.io/projected/9b1540ce-a351-4090-bf54-e253994d9020-kube-api-access-t2qhw\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:28 crc kubenswrapper[4580]: I0112 13:22:28.799527 4580 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9b1540ce-a351-4090-bf54-e253994d9020-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:28 crc kubenswrapper[4580]: I0112 13:22:28.807233 4580 scope.go:117] "RemoveContainer" containerID="a0212f020ebb7d2d0237a9098b998381b52b962bca231875713d695248e4c7cd" Jan 12 13:22:28 crc kubenswrapper[4580]: E0112 13:22:28.808309 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= could not find container \"a0212f020ebb7d2d0237a9098b998381b52b962bca231875713d695248e4c7cd\": container with ID starting with a0212f020ebb7d2d0237a9098b998381b52b962bca231875713d695248e4c7cd not found: ID does not exist" containerID="a0212f020ebb7d2d0237a9098b998381b52b962bca231875713d695248e4c7cd" Jan 12 13:22:28 crc kubenswrapper[4580]: I0112 13:22:28.808335 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0212f020ebb7d2d0237a9098b998381b52b962bca231875713d695248e4c7cd"} err="failed to get container status \"a0212f020ebb7d2d0237a9098b998381b52b962bca231875713d695248e4c7cd\": rpc error: code = NotFound desc = could not find container \"a0212f020ebb7d2d0237a9098b998381b52b962bca231875713d695248e4c7cd\": container with ID starting with a0212f020ebb7d2d0237a9098b998381b52b962bca231875713d695248e4c7cd not found: ID does not exist" Jan 12 13:22:28 crc kubenswrapper[4580]: I0112 13:22:28.808355 4580 scope.go:117] "RemoveContainer" containerID="6780bf1d321e39a915fa17628f8a453551d58330c6fcb0842453becea9dd981e" Jan 12 13:22:28 crc kubenswrapper[4580]: E0112 13:22:28.808795 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6780bf1d321e39a915fa17628f8a453551d58330c6fcb0842453becea9dd981e\": container with ID starting with 6780bf1d321e39a915fa17628f8a453551d58330c6fcb0842453becea9dd981e not found: ID does not exist" containerID="6780bf1d321e39a915fa17628f8a453551d58330c6fcb0842453becea9dd981e" Jan 12 13:22:28 crc kubenswrapper[4580]: I0112 13:22:28.808818 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6780bf1d321e39a915fa17628f8a453551d58330c6fcb0842453becea9dd981e"} err="failed to get container status \"6780bf1d321e39a915fa17628f8a453551d58330c6fcb0842453becea9dd981e\": rpc error: code = NotFound desc = could not find container 
\"6780bf1d321e39a915fa17628f8a453551d58330c6fcb0842453becea9dd981e\": container with ID starting with 6780bf1d321e39a915fa17628f8a453551d58330c6fcb0842453becea9dd981e not found: ID does not exist" Jan 12 13:22:29 crc kubenswrapper[4580]: I0112 13:22:29.088633 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 12 13:22:29 crc kubenswrapper[4580]: I0112 13:22:29.093941 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 12 13:22:29 crc kubenswrapper[4580]: I0112 13:22:29.116617 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 12 13:22:29 crc kubenswrapper[4580]: E0112 13:22:29.116998 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fca18973-3724-49c5-b8c4-cf6beb66c288" containerName="horizon-log" Jan 12 13:22:29 crc kubenswrapper[4580]: I0112 13:22:29.117016 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="fca18973-3724-49c5-b8c4-cf6beb66c288" containerName="horizon-log" Jan 12 13:22:29 crc kubenswrapper[4580]: E0112 13:22:29.117029 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b1540ce-a351-4090-bf54-e253994d9020" containerName="probe" Jan 12 13:22:29 crc kubenswrapper[4580]: I0112 13:22:29.117035 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b1540ce-a351-4090-bf54-e253994d9020" containerName="probe" Jan 12 13:22:29 crc kubenswrapper[4580]: E0112 13:22:29.117044 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b622df8-141e-468d-8f8d-86622f286566" containerName="neutron-api" Jan 12 13:22:29 crc kubenswrapper[4580]: I0112 13:22:29.117050 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b622df8-141e-468d-8f8d-86622f286566" containerName="neutron-api" Jan 12 13:22:29 crc kubenswrapper[4580]: E0112 13:22:29.117057 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70ecad68-8604-45ba-84e5-4a0aa1d7464a" 
containerName="registry-server" Jan 12 13:22:29 crc kubenswrapper[4580]: I0112 13:22:29.117062 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="70ecad68-8604-45ba-84e5-4a0aa1d7464a" containerName="registry-server" Jan 12 13:22:29 crc kubenswrapper[4580]: E0112 13:22:29.117070 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ab93f8e-1504-47d9-af38-197cbcc54feb" containerName="init" Jan 12 13:22:29 crc kubenswrapper[4580]: I0112 13:22:29.117076 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ab93f8e-1504-47d9-af38-197cbcc54feb" containerName="init" Jan 12 13:22:29 crc kubenswrapper[4580]: E0112 13:22:29.117092 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70ecad68-8604-45ba-84e5-4a0aa1d7464a" containerName="extract-utilities" Jan 12 13:22:29 crc kubenswrapper[4580]: I0112 13:22:29.117118 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="70ecad68-8604-45ba-84e5-4a0aa1d7464a" containerName="extract-utilities" Jan 12 13:22:29 crc kubenswrapper[4580]: E0112 13:22:29.117131 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db7a06f9-1a77-4a21-ac05-0c73655fa8d0" containerName="horizon" Jan 12 13:22:29 crc kubenswrapper[4580]: I0112 13:22:29.117139 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="db7a06f9-1a77-4a21-ac05-0c73655fa8d0" containerName="horizon" Jan 12 13:22:29 crc kubenswrapper[4580]: E0112 13:22:29.117152 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82d9d66d-ff92-4164-96a9-c82a919cce00" containerName="horizon-log" Jan 12 13:22:29 crc kubenswrapper[4580]: I0112 13:22:29.117159 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="82d9d66d-ff92-4164-96a9-c82a919cce00" containerName="horizon-log" Jan 12 13:22:29 crc kubenswrapper[4580]: E0112 13:22:29.117168 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b622df8-141e-468d-8f8d-86622f286566" containerName="neutron-httpd" Jan 12 13:22:29 crc 
kubenswrapper[4580]: I0112 13:22:29.117175 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b622df8-141e-468d-8f8d-86622f286566" containerName="neutron-httpd" Jan 12 13:22:29 crc kubenswrapper[4580]: E0112 13:22:29.117190 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b1540ce-a351-4090-bf54-e253994d9020" containerName="cinder-scheduler" Jan 12 13:22:29 crc kubenswrapper[4580]: I0112 13:22:29.117197 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b1540ce-a351-4090-bf54-e253994d9020" containerName="cinder-scheduler" Jan 12 13:22:29 crc kubenswrapper[4580]: E0112 13:22:29.117209 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82d9d66d-ff92-4164-96a9-c82a919cce00" containerName="horizon" Jan 12 13:22:29 crc kubenswrapper[4580]: I0112 13:22:29.117215 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="82d9d66d-ff92-4164-96a9-c82a919cce00" containerName="horizon" Jan 12 13:22:29 crc kubenswrapper[4580]: E0112 13:22:29.117226 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db7a06f9-1a77-4a21-ac05-0c73655fa8d0" containerName="horizon-log" Jan 12 13:22:29 crc kubenswrapper[4580]: I0112 13:22:29.117232 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="db7a06f9-1a77-4a21-ac05-0c73655fa8d0" containerName="horizon-log" Jan 12 13:22:29 crc kubenswrapper[4580]: E0112 13:22:29.117239 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70ecad68-8604-45ba-84e5-4a0aa1d7464a" containerName="extract-content" Jan 12 13:22:29 crc kubenswrapper[4580]: I0112 13:22:29.117246 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="70ecad68-8604-45ba-84e5-4a0aa1d7464a" containerName="extract-content" Jan 12 13:22:29 crc kubenswrapper[4580]: E0112 13:22:29.117252 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ab93f8e-1504-47d9-af38-197cbcc54feb" containerName="dnsmasq-dns" Jan 12 13:22:29 crc kubenswrapper[4580]: I0112 13:22:29.117258 
4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ab93f8e-1504-47d9-af38-197cbcc54feb" containerName="dnsmasq-dns"
Jan 12 13:22:29 crc kubenswrapper[4580]: E0112 13:22:29.117269 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fca18973-3724-49c5-b8c4-cf6beb66c288" containerName="horizon"
Jan 12 13:22:29 crc kubenswrapper[4580]: I0112 13:22:29.117275 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="fca18973-3724-49c5-b8c4-cf6beb66c288" containerName="horizon"
Jan 12 13:22:29 crc kubenswrapper[4580]: I0112 13:22:29.117432 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="70ecad68-8604-45ba-84e5-4a0aa1d7464a" containerName="registry-server"
Jan 12 13:22:29 crc kubenswrapper[4580]: I0112 13:22:29.117443 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b1540ce-a351-4090-bf54-e253994d9020" containerName="cinder-scheduler"
Jan 12 13:22:29 crc kubenswrapper[4580]: I0112 13:22:29.117451 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="db7a06f9-1a77-4a21-ac05-0c73655fa8d0" containerName="horizon-log"
Jan 12 13:22:29 crc kubenswrapper[4580]: I0112 13:22:29.117461 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b622df8-141e-468d-8f8d-86622f286566" containerName="neutron-httpd"
Jan 12 13:22:29 crc kubenswrapper[4580]: I0112 13:22:29.117469 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="fca18973-3724-49c5-b8c4-cf6beb66c288" containerName="horizon-log"
Jan 12 13:22:29 crc kubenswrapper[4580]: I0112 13:22:29.117481 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ab93f8e-1504-47d9-af38-197cbcc54feb" containerName="dnsmasq-dns"
Jan 12 13:22:29 crc kubenswrapper[4580]: I0112 13:22:29.117491 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="82d9d66d-ff92-4164-96a9-c82a919cce00" containerName="horizon-log"
Jan 12 13:22:29 crc kubenswrapper[4580]: I0112 13:22:29.117498 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b1540ce-a351-4090-bf54-e253994d9020" containerName="probe"
Jan 12 13:22:29 crc kubenswrapper[4580]: I0112 13:22:29.117508 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="fca18973-3724-49c5-b8c4-cf6beb66c288" containerName="horizon"
Jan 12 13:22:29 crc kubenswrapper[4580]: I0112 13:22:29.117517 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b622df8-141e-468d-8f8d-86622f286566" containerName="neutron-api"
Jan 12 13:22:29 crc kubenswrapper[4580]: I0112 13:22:29.117529 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="82d9d66d-ff92-4164-96a9-c82a919cce00" containerName="horizon"
Jan 12 13:22:29 crc kubenswrapper[4580]: I0112 13:22:29.117540 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="db7a06f9-1a77-4a21-ac05-0c73655fa8d0" containerName="horizon"
Jan 12 13:22:29 crc kubenswrapper[4580]: I0112 13:22:29.118515 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 12 13:22:29 crc kubenswrapper[4580]: I0112 13:22:29.120686 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Jan 12 13:22:29 crc kubenswrapper[4580]: I0112 13:22:29.133177 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 12 13:22:29 crc kubenswrapper[4580]: I0112 13:22:29.205921 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0f0d4cc9-9655-43d1-b588-ae5326765c36-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0f0d4cc9-9655-43d1-b588-ae5326765c36\") " pod="openstack/cinder-scheduler-0"
Jan 12 13:22:29 crc kubenswrapper[4580]: I0112 13:22:29.205989 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f0d4cc9-9655-43d1-b588-ae5326765c36-config-data\") pod \"cinder-scheduler-0\" (UID: \"0f0d4cc9-9655-43d1-b588-ae5326765c36\") " pod="openstack/cinder-scheduler-0"
Jan 12 13:22:29 crc kubenswrapper[4580]: I0112 13:22:29.206021 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0f0d4cc9-9655-43d1-b588-ae5326765c36-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0f0d4cc9-9655-43d1-b588-ae5326765c36\") " pod="openstack/cinder-scheduler-0"
Jan 12 13:22:29 crc kubenswrapper[4580]: I0112 13:22:29.206035 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f0d4cc9-9655-43d1-b588-ae5326765c36-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0f0d4cc9-9655-43d1-b588-ae5326765c36\") " pod="openstack/cinder-scheduler-0"
Jan 12 13:22:29 crc kubenswrapper[4580]: I0112 13:22:29.206055 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8vwk\" (UniqueName: \"kubernetes.io/projected/0f0d4cc9-9655-43d1-b588-ae5326765c36-kube-api-access-k8vwk\") pod \"cinder-scheduler-0\" (UID: \"0f0d4cc9-9655-43d1-b588-ae5326765c36\") " pod="openstack/cinder-scheduler-0"
Jan 12 13:22:29 crc kubenswrapper[4580]: I0112 13:22:29.206142 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f0d4cc9-9655-43d1-b588-ae5326765c36-scripts\") pod \"cinder-scheduler-0\" (UID: \"0f0d4cc9-9655-43d1-b588-ae5326765c36\") " pod="openstack/cinder-scheduler-0"
Jan 12 13:22:29 crc kubenswrapper[4580]: I0112 13:22:29.290784 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b1540ce-a351-4090-bf54-e253994d9020" path="/var/lib/kubelet/pods/9b1540ce-a351-4090-bf54-e253994d9020/volumes"
Jan 12 13:22:29 crc kubenswrapper[4580]: I0112 13:22:29.307675 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0f0d4cc9-9655-43d1-b588-ae5326765c36-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0f0d4cc9-9655-43d1-b588-ae5326765c36\") " pod="openstack/cinder-scheduler-0"
Jan 12 13:22:29 crc kubenswrapper[4580]: I0112 13:22:29.307722 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f0d4cc9-9655-43d1-b588-ae5326765c36-config-data\") pod \"cinder-scheduler-0\" (UID: \"0f0d4cc9-9655-43d1-b588-ae5326765c36\") " pod="openstack/cinder-scheduler-0"
Jan 12 13:22:29 crc kubenswrapper[4580]: I0112 13:22:29.307752 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0f0d4cc9-9655-43d1-b588-ae5326765c36-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0f0d4cc9-9655-43d1-b588-ae5326765c36\") " pod="openstack/cinder-scheduler-0"
Jan 12 13:22:29 crc kubenswrapper[4580]: I0112 13:22:29.307769 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f0d4cc9-9655-43d1-b588-ae5326765c36-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0f0d4cc9-9655-43d1-b588-ae5326765c36\") " pod="openstack/cinder-scheduler-0"
Jan 12 13:22:29 crc kubenswrapper[4580]: I0112 13:22:29.307787 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8vwk\" (UniqueName: \"kubernetes.io/projected/0f0d4cc9-9655-43d1-b588-ae5326765c36-kube-api-access-k8vwk\") pod \"cinder-scheduler-0\" (UID: \"0f0d4cc9-9655-43d1-b588-ae5326765c36\") " pod="openstack/cinder-scheduler-0"
Jan 12 13:22:29 crc kubenswrapper[4580]: I0112 13:22:29.307893 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f0d4cc9-9655-43d1-b588-ae5326765c36-scripts\") pod \"cinder-scheduler-0\" (UID: \"0f0d4cc9-9655-43d1-b588-ae5326765c36\") " pod="openstack/cinder-scheduler-0"
Jan 12 13:22:29 crc kubenswrapper[4580]: I0112 13:22:29.308307 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0f0d4cc9-9655-43d1-b588-ae5326765c36-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0f0d4cc9-9655-43d1-b588-ae5326765c36\") " pod="openstack/cinder-scheduler-0"
Jan 12 13:22:29 crc kubenswrapper[4580]: I0112 13:22:29.313339 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f0d4cc9-9655-43d1-b588-ae5326765c36-config-data\") pod \"cinder-scheduler-0\" (UID: \"0f0d4cc9-9655-43d1-b588-ae5326765c36\") " pod="openstack/cinder-scheduler-0"
Jan 12 13:22:29 crc kubenswrapper[4580]: I0112 13:22:29.315158 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0f0d4cc9-9655-43d1-b588-ae5326765c36-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0f0d4cc9-9655-43d1-b588-ae5326765c36\") " pod="openstack/cinder-scheduler-0"
Jan 12 13:22:29 crc kubenswrapper[4580]: I0112 13:22:29.315591 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f0d4cc9-9655-43d1-b588-ae5326765c36-scripts\") pod \"cinder-scheduler-0\" (UID: \"0f0d4cc9-9655-43d1-b588-ae5326765c36\") " pod="openstack/cinder-scheduler-0"
Jan 12 13:22:29 crc kubenswrapper[4580]: I0112 13:22:29.322561 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f0d4cc9-9655-43d1-b588-ae5326765c36-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0f0d4cc9-9655-43d1-b588-ae5326765c36\") " pod="openstack/cinder-scheduler-0"
Jan 12 13:22:29 crc kubenswrapper[4580]: I0112 13:22:29.324938 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8vwk\" (UniqueName: \"kubernetes.io/projected/0f0d4cc9-9655-43d1-b588-ae5326765c36-kube-api-access-k8vwk\") pod \"cinder-scheduler-0\" (UID: \"0f0d4cc9-9655-43d1-b588-ae5326765c36\") " pod="openstack/cinder-scheduler-0"
Jan 12 13:22:29 crc kubenswrapper[4580]: I0112 13:22:29.431802 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 12 13:22:29 crc kubenswrapper[4580]: I0112 13:22:29.770052 4580 generic.go:334] "Generic (PLEG): container finished" podID="11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2" containerID="59195d2de454fd2603098fbc1fcc86559032303c06579ca821d3e24b04357260" exitCode=0
Jan 12 13:22:29 crc kubenswrapper[4580]: I0112 13:22:29.770355 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-54b765ff94-66rkz" event={"ID":"11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2","Type":"ContainerDied","Data":"59195d2de454fd2603098fbc1fcc86559032303c06579ca821d3e24b04357260"}
Jan 12 13:22:29 crc kubenswrapper[4580]: I0112 13:22:29.827612 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 12 13:22:29 crc kubenswrapper[4580]: W0112 13:22:29.831588 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f0d4cc9_9655_43d1_b588_ae5326765c36.slice/crio-eb8cd85677cf82f6e9977386707a99949cc28c3bcc52e12093c519c1460b0a60 WatchSource:0}: Error finding container eb8cd85677cf82f6e9977386707a99949cc28c3bcc52e12093c519c1460b0a60: Status 404 returned error can't find the container with id eb8cd85677cf82f6e9977386707a99949cc28c3bcc52e12093c519c1460b0a60
Jan 12 13:22:30 crc kubenswrapper[4580]: I0112 13:22:30.784838 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0f0d4cc9-9655-43d1-b588-ae5326765c36","Type":"ContainerStarted","Data":"e59c60cf012c8dbd72c2bdcc36b56b103b3582f43b477c61d2da9afe7a996a5f"}
Jan 12 13:22:30 crc kubenswrapper[4580]: I0112 13:22:30.785361 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0f0d4cc9-9655-43d1-b588-ae5326765c36","Type":"ContainerStarted","Data":"eb8cd85677cf82f6e9977386707a99949cc28c3bcc52e12093c519c1460b0a60"}
Jan 12 13:22:31 crc kubenswrapper[4580]: I0112 13:22:31.463574 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Jan 12 13:22:31 crc kubenswrapper[4580]: I0112 13:22:31.542732 4580 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-54b765ff94-66rkz" podUID="11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused"
Jan 12 13:22:31 crc kubenswrapper[4580]: I0112 13:22:31.792441 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0f0d4cc9-9655-43d1-b588-ae5326765c36","Type":"ContainerStarted","Data":"193c3086c1a3d4fa24d33af00eafbe79bc99ebce8f96f31815d38180b8567c54"}
Jan 12 13:22:31 crc kubenswrapper[4580]: I0112 13:22:31.809806 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=2.8097945380000002 podStartE2EDuration="2.809794538s" podCreationTimestamp="2026-01-12 13:22:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:22:31.809250045 +0000 UTC m=+950.853468734" watchObservedRunningTime="2026-01-12 13:22:31.809794538 +0000 UTC m=+950.854013228"
Jan 12 13:22:32 crc kubenswrapper[4580]: I0112 13:22:32.170962 4580 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-579c8f556d-z7gld" podUID="fcddf9d2-2130-4f76-9318-373ba59d2f70" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.161:9311/healthcheck\": read tcp 10.217.0.2:34052->10.217.0.161:9311: read: connection reset by peer"
Jan 12 13:22:32 crc kubenswrapper[4580]: I0112 13:22:32.171021 4580 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-579c8f556d-z7gld" podUID="fcddf9d2-2130-4f76-9318-373ba59d2f70" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.161:9311/healthcheck\": read tcp 10.217.0.2:34054->10.217.0.161:9311: read: connection reset by peer"
Jan 12 13:22:32 crc kubenswrapper[4580]: I0112 13:22:32.594023 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-579c8f556d-z7gld"
Jan 12 13:22:32 crc kubenswrapper[4580]: I0112 13:22:32.771037 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcddf9d2-2130-4f76-9318-373ba59d2f70-combined-ca-bundle\") pod \"fcddf9d2-2130-4f76-9318-373ba59d2f70\" (UID: \"fcddf9d2-2130-4f76-9318-373ba59d2f70\") "
Jan 12 13:22:32 crc kubenswrapper[4580]: I0112 13:22:32.771096 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwkhf\" (UniqueName: \"kubernetes.io/projected/fcddf9d2-2130-4f76-9318-373ba59d2f70-kube-api-access-jwkhf\") pod \"fcddf9d2-2130-4f76-9318-373ba59d2f70\" (UID: \"fcddf9d2-2130-4f76-9318-373ba59d2f70\") "
Jan 12 13:22:32 crc kubenswrapper[4580]: I0112 13:22:32.771640 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcddf9d2-2130-4f76-9318-373ba59d2f70-config-data\") pod \"fcddf9d2-2130-4f76-9318-373ba59d2f70\" (UID: \"fcddf9d2-2130-4f76-9318-373ba59d2f70\") "
Jan 12 13:22:32 crc kubenswrapper[4580]: I0112 13:22:32.771831 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fcddf9d2-2130-4f76-9318-373ba59d2f70-config-data-custom\") pod \"fcddf9d2-2130-4f76-9318-373ba59d2f70\" (UID: \"fcddf9d2-2130-4f76-9318-373ba59d2f70\") "
Jan 12 13:22:32 crc kubenswrapper[4580]: I0112 13:22:32.771872 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fcddf9d2-2130-4f76-9318-373ba59d2f70-logs\") pod \"fcddf9d2-2130-4f76-9318-373ba59d2f70\" (UID: \"fcddf9d2-2130-4f76-9318-373ba59d2f70\") "
Jan 12 13:22:32 crc kubenswrapper[4580]: I0112 13:22:32.772329 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fcddf9d2-2130-4f76-9318-373ba59d2f70-logs" (OuterVolumeSpecName: "logs") pod "fcddf9d2-2130-4f76-9318-373ba59d2f70" (UID: "fcddf9d2-2130-4f76-9318-373ba59d2f70"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 12 13:22:32 crc kubenswrapper[4580]: I0112 13:22:32.773094 4580 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fcddf9d2-2130-4f76-9318-373ba59d2f70-logs\") on node \"crc\" DevicePath \"\""
Jan 12 13:22:32 crc kubenswrapper[4580]: I0112 13:22:32.784208 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcddf9d2-2130-4f76-9318-373ba59d2f70-kube-api-access-jwkhf" (OuterVolumeSpecName: "kube-api-access-jwkhf") pod "fcddf9d2-2130-4f76-9318-373ba59d2f70" (UID: "fcddf9d2-2130-4f76-9318-373ba59d2f70"). InnerVolumeSpecName "kube-api-access-jwkhf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 13:22:32 crc kubenswrapper[4580]: I0112 13:22:32.784247 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcddf9d2-2130-4f76-9318-373ba59d2f70-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "fcddf9d2-2130-4f76-9318-373ba59d2f70" (UID: "fcddf9d2-2130-4f76-9318-373ba59d2f70"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:22:32 crc kubenswrapper[4580]: I0112 13:22:32.796462 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcddf9d2-2130-4f76-9318-373ba59d2f70-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fcddf9d2-2130-4f76-9318-373ba59d2f70" (UID: "fcddf9d2-2130-4f76-9318-373ba59d2f70"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:22:32 crc kubenswrapper[4580]: I0112 13:22:32.809027 4580 generic.go:334] "Generic (PLEG): container finished" podID="fcddf9d2-2130-4f76-9318-373ba59d2f70" containerID="97c3d4a59048007a403d6af079e18033186e21ff95599f9dac4bded884dc5607" exitCode=0
Jan 12 13:22:32 crc kubenswrapper[4580]: I0112 13:22:32.809143 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-579c8f556d-z7gld"
Jan 12 13:22:32 crc kubenswrapper[4580]: I0112 13:22:32.809158 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-579c8f556d-z7gld" event={"ID":"fcddf9d2-2130-4f76-9318-373ba59d2f70","Type":"ContainerDied","Data":"97c3d4a59048007a403d6af079e18033186e21ff95599f9dac4bded884dc5607"}
Jan 12 13:22:32 crc kubenswrapper[4580]: I0112 13:22:32.809702 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-579c8f556d-z7gld" event={"ID":"fcddf9d2-2130-4f76-9318-373ba59d2f70","Type":"ContainerDied","Data":"4dccd14653cc6bef47059eb9b6fed2cbb00668c8bb4a73c415e3e325b843ff96"}
Jan 12 13:22:32 crc kubenswrapper[4580]: I0112 13:22:32.809761 4580 scope.go:117] "RemoveContainer" containerID="97c3d4a59048007a403d6af079e18033186e21ff95599f9dac4bded884dc5607"
Jan 12 13:22:32 crc kubenswrapper[4580]: I0112 13:22:32.830580 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcddf9d2-2130-4f76-9318-373ba59d2f70-config-data" (OuterVolumeSpecName: "config-data") pod "fcddf9d2-2130-4f76-9318-373ba59d2f70" (UID: "fcddf9d2-2130-4f76-9318-373ba59d2f70"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:22:32 crc kubenswrapper[4580]: I0112 13:22:32.874687 4580 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fcddf9d2-2130-4f76-9318-373ba59d2f70-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 12 13:22:32 crc kubenswrapper[4580]: I0112 13:22:32.874719 4580 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcddf9d2-2130-4f76-9318-373ba59d2f70-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 12 13:22:32 crc kubenswrapper[4580]: I0112 13:22:32.874731 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwkhf\" (UniqueName: \"kubernetes.io/projected/fcddf9d2-2130-4f76-9318-373ba59d2f70-kube-api-access-jwkhf\") on node \"crc\" DevicePath \"\""
Jan 12 13:22:32 crc kubenswrapper[4580]: I0112 13:22:32.874752 4580 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcddf9d2-2130-4f76-9318-373ba59d2f70-config-data\") on node \"crc\" DevicePath \"\""
Jan 12 13:22:32 crc kubenswrapper[4580]: I0112 13:22:32.882508 4580 scope.go:117] "RemoveContainer" containerID="4a87d24a66760824c54dc0053a19af6dea792ec98f2b12e041aa64973bff2099"
Jan 12 13:22:32 crc kubenswrapper[4580]: I0112 13:22:32.910328 4580 scope.go:117] "RemoveContainer" containerID="97c3d4a59048007a403d6af079e18033186e21ff95599f9dac4bded884dc5607"
Jan 12 13:22:32 crc kubenswrapper[4580]: E0112 13:22:32.910694 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97c3d4a59048007a403d6af079e18033186e21ff95599f9dac4bded884dc5607\": container with ID starting with 97c3d4a59048007a403d6af079e18033186e21ff95599f9dac4bded884dc5607 not found: ID does not exist" containerID="97c3d4a59048007a403d6af079e18033186e21ff95599f9dac4bded884dc5607"
Jan 12 13:22:32 crc kubenswrapper[4580]: I0112 13:22:32.910729 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97c3d4a59048007a403d6af079e18033186e21ff95599f9dac4bded884dc5607"} err="failed to get container status \"97c3d4a59048007a403d6af079e18033186e21ff95599f9dac4bded884dc5607\": rpc error: code = NotFound desc = could not find container \"97c3d4a59048007a403d6af079e18033186e21ff95599f9dac4bded884dc5607\": container with ID starting with 97c3d4a59048007a403d6af079e18033186e21ff95599f9dac4bded884dc5607 not found: ID does not exist"
Jan 12 13:22:32 crc kubenswrapper[4580]: I0112 13:22:32.910754 4580 scope.go:117] "RemoveContainer" containerID="4a87d24a66760824c54dc0053a19af6dea792ec98f2b12e041aa64973bff2099"
Jan 12 13:22:32 crc kubenswrapper[4580]: E0112 13:22:32.911037 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a87d24a66760824c54dc0053a19af6dea792ec98f2b12e041aa64973bff2099\": container with ID starting with 4a87d24a66760824c54dc0053a19af6dea792ec98f2b12e041aa64973bff2099 not found: ID does not exist" containerID="4a87d24a66760824c54dc0053a19af6dea792ec98f2b12e041aa64973bff2099"
Jan 12 13:22:32 crc kubenswrapper[4580]: I0112 13:22:32.911074 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a87d24a66760824c54dc0053a19af6dea792ec98f2b12e041aa64973bff2099"} err="failed to get container status \"4a87d24a66760824c54dc0053a19af6dea792ec98f2b12e041aa64973bff2099\": rpc error: code = NotFound desc = could not find container \"4a87d24a66760824c54dc0053a19af6dea792ec98f2b12e041aa64973bff2099\": container with ID starting with 4a87d24a66760824c54dc0053a19af6dea792ec98f2b12e041aa64973bff2099 not found: ID does not exist"
Jan 12 13:22:33 crc kubenswrapper[4580]: I0112 13:22:33.141710 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-579c8f556d-z7gld"]
Jan 12 13:22:33 crc kubenswrapper[4580]: I0112 13:22:33.151724 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-579c8f556d-z7gld"]
Jan 12 13:22:33 crc kubenswrapper[4580]: I0112 13:22:33.291510 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcddf9d2-2130-4f76-9318-373ba59d2f70" path="/var/lib/kubelet/pods/fcddf9d2-2130-4f76-9318-373ba59d2f70/volumes"
Jan 12 13:22:34 crc kubenswrapper[4580]: I0112 13:22:34.432266 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Jan 12 13:22:35 crc kubenswrapper[4580]: I0112 13:22:35.603047 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-696c64b546-cw888"
Jan 12 13:22:37 crc kubenswrapper[4580]: I0112 13:22:37.858217 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-6f676d8c57-qq454"]
Jan 12 13:22:37 crc kubenswrapper[4580]: E0112 13:22:37.858788 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcddf9d2-2130-4f76-9318-373ba59d2f70" containerName="barbican-api-log"
Jan 12 13:22:37 crc kubenswrapper[4580]: I0112 13:22:37.858801 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcddf9d2-2130-4f76-9318-373ba59d2f70" containerName="barbican-api-log"
Jan 12 13:22:37 crc kubenswrapper[4580]: E0112 13:22:37.858824 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcddf9d2-2130-4f76-9318-373ba59d2f70" containerName="barbican-api"
Jan 12 13:22:37 crc kubenswrapper[4580]: I0112 13:22:37.858830 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcddf9d2-2130-4f76-9318-373ba59d2f70" containerName="barbican-api"
Jan 12 13:22:37 crc kubenswrapper[4580]: I0112 13:22:37.858979 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcddf9d2-2130-4f76-9318-373ba59d2f70" containerName="barbican-api"
Jan 12 13:22:37 crc kubenswrapper[4580]: I0112 13:22:37.859000 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcddf9d2-2130-4f76-9318-373ba59d2f70" containerName="barbican-api-log"
Jan 12 13:22:37 crc kubenswrapper[4580]: I0112 13:22:37.859752 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6f676d8c57-qq454"
Jan 12 13:22:37 crc kubenswrapper[4580]: I0112 13:22:37.861408 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Jan 12 13:22:37 crc kubenswrapper[4580]: I0112 13:22:37.861510 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc"
Jan 12 13:22:37 crc kubenswrapper[4580]: I0112 13:22:37.861600 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc"
Jan 12 13:22:37 crc kubenswrapper[4580]: I0112 13:22:37.863820 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d448ad1-2ef8-48cd-8e3c-3e81e82da286-combined-ca-bundle\") pod \"swift-proxy-6f676d8c57-qq454\" (UID: \"8d448ad1-2ef8-48cd-8e3c-3e81e82da286\") " pod="openstack/swift-proxy-6f676d8c57-qq454"
Jan 12 13:22:37 crc kubenswrapper[4580]: I0112 13:22:37.864131 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d448ad1-2ef8-48cd-8e3c-3e81e82da286-run-httpd\") pod \"swift-proxy-6f676d8c57-qq454\" (UID: \"8d448ad1-2ef8-48cd-8e3c-3e81e82da286\") " pod="openstack/swift-proxy-6f676d8c57-qq454"
Jan 12 13:22:37 crc kubenswrapper[4580]: I0112 13:22:37.864167 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vgwv\" (UniqueName: \"kubernetes.io/projected/8d448ad1-2ef8-48cd-8e3c-3e81e82da286-kube-api-access-2vgwv\") pod \"swift-proxy-6f676d8c57-qq454\" (UID: \"8d448ad1-2ef8-48cd-8e3c-3e81e82da286\") " pod="openstack/swift-proxy-6f676d8c57-qq454"
Jan 12 13:22:37 crc kubenswrapper[4580]: I0112 13:22:37.864249 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d448ad1-2ef8-48cd-8e3c-3e81e82da286-public-tls-certs\") pod \"swift-proxy-6f676d8c57-qq454\" (UID: \"8d448ad1-2ef8-48cd-8e3c-3e81e82da286\") " pod="openstack/swift-proxy-6f676d8c57-qq454"
Jan 12 13:22:37 crc kubenswrapper[4580]: I0112 13:22:37.864338 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d448ad1-2ef8-48cd-8e3c-3e81e82da286-internal-tls-certs\") pod \"swift-proxy-6f676d8c57-qq454\" (UID: \"8d448ad1-2ef8-48cd-8e3c-3e81e82da286\") " pod="openstack/swift-proxy-6f676d8c57-qq454"
Jan 12 13:22:37 crc kubenswrapper[4580]: I0112 13:22:37.864364 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d448ad1-2ef8-48cd-8e3c-3e81e82da286-log-httpd\") pod \"swift-proxy-6f676d8c57-qq454\" (UID: \"8d448ad1-2ef8-48cd-8e3c-3e81e82da286\") " pod="openstack/swift-proxy-6f676d8c57-qq454"
Jan 12 13:22:37 crc kubenswrapper[4580]: I0112 13:22:37.864385 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d448ad1-2ef8-48cd-8e3c-3e81e82da286-config-data\") pod \"swift-proxy-6f676d8c57-qq454\" (UID: \"8d448ad1-2ef8-48cd-8e3c-3e81e82da286\") " pod="openstack/swift-proxy-6f676d8c57-qq454"
Jan 12 13:22:37 crc kubenswrapper[4580]: I0112 13:22:37.864472 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8d448ad1-2ef8-48cd-8e3c-3e81e82da286-etc-swift\") pod \"swift-proxy-6f676d8c57-qq454\" (UID: \"8d448ad1-2ef8-48cd-8e3c-3e81e82da286\") " pod="openstack/swift-proxy-6f676d8c57-qq454"
Jan 12 13:22:37 crc kubenswrapper[4580]: I0112 13:22:37.872173 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6f676d8c57-qq454"]
Jan 12 13:22:37 crc kubenswrapper[4580]: I0112 13:22:37.965585 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d448ad1-2ef8-48cd-8e3c-3e81e82da286-internal-tls-certs\") pod \"swift-proxy-6f676d8c57-qq454\" (UID: \"8d448ad1-2ef8-48cd-8e3c-3e81e82da286\") " pod="openstack/swift-proxy-6f676d8c57-qq454"
Jan 12 13:22:37 crc kubenswrapper[4580]: I0112 13:22:37.965623 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d448ad1-2ef8-48cd-8e3c-3e81e82da286-log-httpd\") pod \"swift-proxy-6f676d8c57-qq454\" (UID: \"8d448ad1-2ef8-48cd-8e3c-3e81e82da286\") " pod="openstack/swift-proxy-6f676d8c57-qq454"
Jan 12 13:22:37 crc kubenswrapper[4580]: I0112 13:22:37.965645 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d448ad1-2ef8-48cd-8e3c-3e81e82da286-config-data\") pod \"swift-proxy-6f676d8c57-qq454\" (UID: \"8d448ad1-2ef8-48cd-8e3c-3e81e82da286\") " pod="openstack/swift-proxy-6f676d8c57-qq454"
Jan 12 13:22:37 crc kubenswrapper[4580]: I0112 13:22:37.965714 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8d448ad1-2ef8-48cd-8e3c-3e81e82da286-etc-swift\") pod \"swift-proxy-6f676d8c57-qq454\" (UID: \"8d448ad1-2ef8-48cd-8e3c-3e81e82da286\") " pod="openstack/swift-proxy-6f676d8c57-qq454"
Jan 12 13:22:37 crc kubenswrapper[4580]: I0112 13:22:37.965743 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d448ad1-2ef8-48cd-8e3c-3e81e82da286-combined-ca-bundle\") pod \"swift-proxy-6f676d8c57-qq454\" (UID: \"8d448ad1-2ef8-48cd-8e3c-3e81e82da286\") " pod="openstack/swift-proxy-6f676d8c57-qq454"
Jan 12 13:22:37 crc kubenswrapper[4580]: I0112 13:22:37.965799 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d448ad1-2ef8-48cd-8e3c-3e81e82da286-run-httpd\") pod \"swift-proxy-6f676d8c57-qq454\" (UID: \"8d448ad1-2ef8-48cd-8e3c-3e81e82da286\") " pod="openstack/swift-proxy-6f676d8c57-qq454"
Jan 12 13:22:37 crc kubenswrapper[4580]: I0112 13:22:37.965823 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vgwv\" (UniqueName: \"kubernetes.io/projected/8d448ad1-2ef8-48cd-8e3c-3e81e82da286-kube-api-access-2vgwv\") pod \"swift-proxy-6f676d8c57-qq454\" (UID: \"8d448ad1-2ef8-48cd-8e3c-3e81e82da286\") " pod="openstack/swift-proxy-6f676d8c57-qq454"
Jan 12 13:22:37 crc kubenswrapper[4580]: I0112 13:22:37.965867 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d448ad1-2ef8-48cd-8e3c-3e81e82da286-public-tls-certs\") pod \"swift-proxy-6f676d8c57-qq454\" (UID: \"8d448ad1-2ef8-48cd-8e3c-3e81e82da286\") " pod="openstack/swift-proxy-6f676d8c57-qq454"
Jan 12 13:22:37 crc kubenswrapper[4580]: I0112 13:22:37.966454 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d448ad1-2ef8-48cd-8e3c-3e81e82da286-run-httpd\") pod \"swift-proxy-6f676d8c57-qq454\" (UID: \"8d448ad1-2ef8-48cd-8e3c-3e81e82da286\") " pod="openstack/swift-proxy-6f676d8c57-qq454"
Jan 12 13:22:37 crc kubenswrapper[4580]: I0112 13:22:37.966697 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d448ad1-2ef8-48cd-8e3c-3e81e82da286-log-httpd\") pod \"swift-proxy-6f676d8c57-qq454\" (UID: \"8d448ad1-2ef8-48cd-8e3c-3e81e82da286\") " pod="openstack/swift-proxy-6f676d8c57-qq454"
Jan 12 13:22:37 crc kubenswrapper[4580]: I0112 13:22:37.971700 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8d448ad1-2ef8-48cd-8e3c-3e81e82da286-etc-swift\") pod \"swift-proxy-6f676d8c57-qq454\" (UID: \"8d448ad1-2ef8-48cd-8e3c-3e81e82da286\") " pod="openstack/swift-proxy-6f676d8c57-qq454"
Jan 12 13:22:37 crc kubenswrapper[4580]: I0112 13:22:37.972159 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d448ad1-2ef8-48cd-8e3c-3e81e82da286-internal-tls-certs\") pod \"swift-proxy-6f676d8c57-qq454\" (UID: \"8d448ad1-2ef8-48cd-8e3c-3e81e82da286\") " pod="openstack/swift-proxy-6f676d8c57-qq454"
Jan 12 13:22:37 crc kubenswrapper[4580]: I0112 13:22:37.981149 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d448ad1-2ef8-48cd-8e3c-3e81e82da286-combined-ca-bundle\") pod \"swift-proxy-6f676d8c57-qq454\" (UID: \"8d448ad1-2ef8-48cd-8e3c-3e81e82da286\") " pod="openstack/swift-proxy-6f676d8c57-qq454"
Jan 12 13:22:37 crc kubenswrapper[4580]: I0112 13:22:37.983562 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d448ad1-2ef8-48cd-8e3c-3e81e82da286-config-data\") pod \"swift-proxy-6f676d8c57-qq454\" (UID: \"8d448ad1-2ef8-48cd-8e3c-3e81e82da286\") " pod="openstack/swift-proxy-6f676d8c57-qq454"
Jan 12 13:22:37 crc kubenswrapper[4580]: I0112 13:22:37.983797 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d448ad1-2ef8-48cd-8e3c-3e81e82da286-public-tls-certs\") pod \"swift-proxy-6f676d8c57-qq454\" (UID: \"8d448ad1-2ef8-48cd-8e3c-3e81e82da286\") " pod="openstack/swift-proxy-6f676d8c57-qq454"
Jan 12 13:22:37 crc kubenswrapper[4580]: I0112 13:22:37.984003 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vgwv\" (UniqueName: \"kubernetes.io/projected/8d448ad1-2ef8-48cd-8e3c-3e81e82da286-kube-api-access-2vgwv\") pod \"swift-proxy-6f676d8c57-qq454\" (UID: \"8d448ad1-2ef8-48cd-8e3c-3e81e82da286\") " pod="openstack/swift-proxy-6f676d8c57-qq454"
Jan 12 13:22:38 crc kubenswrapper[4580]: I0112 13:22:38.173881 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6f676d8c57-qq454"
Jan 12 13:22:38 crc kubenswrapper[4580]: I0112 13:22:38.705302 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6f676d8c57-qq454"]
Jan 12 13:22:38 crc kubenswrapper[4580]: W0112 13:22:38.709375 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d448ad1_2ef8_48cd_8e3c_3e81e82da286.slice/crio-c1f04668ac190b352ea7db76ef92a7147177f692e8588c727c638ecf8d97ea5c WatchSource:0}: Error finding container c1f04668ac190b352ea7db76ef92a7147177f692e8588c727c638ecf8d97ea5c: Status 404 returned error can't find the container with id c1f04668ac190b352ea7db76ef92a7147177f692e8588c727c638ecf8d97ea5c
Jan 12 13:22:38 crc kubenswrapper[4580]: I0112 13:22:38.869586 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6f676d8c57-qq454" event={"ID":"8d448ad1-2ef8-48cd-8e3c-3e81e82da286","Type":"ContainerStarted","Data":"c1f04668ac190b352ea7db76ef92a7147177f692e8588c727c638ecf8d97ea5c"}
Jan 12 13:22:39 crc kubenswrapper[4580]: I0112 13:22:39.067019 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 12 13:22:39 crc kubenswrapper[4580]:
I0112 13:22:39.067318 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="063b2f8d-9ef8-4977-9071-ebb135ebc819" containerName="ceilometer-central-agent" containerID="cri-o://4c1d58120e1f3d3d4baf4906f7a26cd4683eeb4a261b7a9c8cf9987efb6b7006" gracePeriod=30 Jan 12 13:22:39 crc kubenswrapper[4580]: I0112 13:22:39.067370 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="063b2f8d-9ef8-4977-9071-ebb135ebc819" containerName="proxy-httpd" containerID="cri-o://f9260730ead159ba2e111b6438b922e355e0d082b854dbd8ab202f8cd763fe3f" gracePeriod=30 Jan 12 13:22:39 crc kubenswrapper[4580]: I0112 13:22:39.067462 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="063b2f8d-9ef8-4977-9071-ebb135ebc819" containerName="sg-core" containerID="cri-o://70aacb67e208b2728685ad433fa21a479aa6120d1249d45c19a4b7919e46d7ec" gracePeriod=30 Jan 12 13:22:39 crc kubenswrapper[4580]: I0112 13:22:39.067509 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="063b2f8d-9ef8-4977-9071-ebb135ebc819" containerName="ceilometer-notification-agent" containerID="cri-o://eecdc9c1445138491de857060fd761d45cbda00f3da822ae991700301a1fc4fa" gracePeriod=30 Jan 12 13:22:39 crc kubenswrapper[4580]: I0112 13:22:39.171442 4580 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="063b2f8d-9ef8-4977-9071-ebb135ebc819" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.166:3000/\": read tcp 10.217.0.2:43606->10.217.0.166:3000: read: connection reset by peer" Jan 12 13:22:39 crc kubenswrapper[4580]: I0112 13:22:39.612714 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 12 13:22:39 crc kubenswrapper[4580]: I0112 13:22:39.878970 4580 generic.go:334] "Generic (PLEG): 
container finished" podID="063b2f8d-9ef8-4977-9071-ebb135ebc819" containerID="f9260730ead159ba2e111b6438b922e355e0d082b854dbd8ab202f8cd763fe3f" exitCode=0 Jan 12 13:22:39 crc kubenswrapper[4580]: I0112 13:22:39.879014 4580 generic.go:334] "Generic (PLEG): container finished" podID="063b2f8d-9ef8-4977-9071-ebb135ebc819" containerID="70aacb67e208b2728685ad433fa21a479aa6120d1249d45c19a4b7919e46d7ec" exitCode=2 Jan 12 13:22:39 crc kubenswrapper[4580]: I0112 13:22:39.879024 4580 generic.go:334] "Generic (PLEG): container finished" podID="063b2f8d-9ef8-4977-9071-ebb135ebc819" containerID="4c1d58120e1f3d3d4baf4906f7a26cd4683eeb4a261b7a9c8cf9987efb6b7006" exitCode=0 Jan 12 13:22:39 crc kubenswrapper[4580]: I0112 13:22:39.879059 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"063b2f8d-9ef8-4977-9071-ebb135ebc819","Type":"ContainerDied","Data":"f9260730ead159ba2e111b6438b922e355e0d082b854dbd8ab202f8cd763fe3f"} Jan 12 13:22:39 crc kubenswrapper[4580]: I0112 13:22:39.879086 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"063b2f8d-9ef8-4977-9071-ebb135ebc819","Type":"ContainerDied","Data":"70aacb67e208b2728685ad433fa21a479aa6120d1249d45c19a4b7919e46d7ec"} Jan 12 13:22:39 crc kubenswrapper[4580]: I0112 13:22:39.879116 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"063b2f8d-9ef8-4977-9071-ebb135ebc819","Type":"ContainerDied","Data":"4c1d58120e1f3d3d4baf4906f7a26cd4683eeb4a261b7a9c8cf9987efb6b7006"} Jan 12 13:22:39 crc kubenswrapper[4580]: I0112 13:22:39.883311 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6f676d8c57-qq454" event={"ID":"8d448ad1-2ef8-48cd-8e3c-3e81e82da286","Type":"ContainerStarted","Data":"321b907a081b5564b37f0a1dcbbf427b8fc8f0a1de06848686109680e641debd"} Jan 12 13:22:39 crc kubenswrapper[4580]: I0112 13:22:39.883337 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-proxy-6f676d8c57-qq454" event={"ID":"8d448ad1-2ef8-48cd-8e3c-3e81e82da286","Type":"ContainerStarted","Data":"4d948ad3f6e0f975b068daf29e4a922b62323bc41483538e97f85b8b9f496c02"} Jan 12 13:22:39 crc kubenswrapper[4580]: I0112 13:22:39.884584 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6f676d8c57-qq454" Jan 12 13:22:39 crc kubenswrapper[4580]: I0112 13:22:39.884610 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6f676d8c57-qq454" Jan 12 13:22:39 crc kubenswrapper[4580]: I0112 13:22:39.905479 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-6f676d8c57-qq454" podStartSLOduration=2.905463133 podStartE2EDuration="2.905463133s" podCreationTimestamp="2026-01-12 13:22:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:22:39.899157072 +0000 UTC m=+958.943375762" watchObservedRunningTime="2026-01-12 13:22:39.905463133 +0000 UTC m=+958.949681822" Jan 12 13:22:40 crc kubenswrapper[4580]: I0112 13:22:40.302302 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 12 13:22:40 crc kubenswrapper[4580]: I0112 13:22:40.304289 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 12 13:22:40 crc kubenswrapper[4580]: I0112 13:22:40.306168 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 12 13:22:40 crc kubenswrapper[4580]: I0112 13:22:40.306368 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-5wgzq" Jan 12 13:22:40 crc kubenswrapper[4580]: I0112 13:22:40.306625 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 12 13:22:40 crc kubenswrapper[4580]: I0112 13:22:40.315022 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d04e360b-50ce-4cb6-9168-b7592de2d83e-openstack-config\") pod \"openstackclient\" (UID: \"d04e360b-50ce-4cb6-9168-b7592de2d83e\") " pod="openstack/openstackclient" Jan 12 13:22:40 crc kubenswrapper[4580]: I0112 13:22:40.315116 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zvn4\" (UniqueName: \"kubernetes.io/projected/d04e360b-50ce-4cb6-9168-b7592de2d83e-kube-api-access-4zvn4\") pod \"openstackclient\" (UID: \"d04e360b-50ce-4cb6-9168-b7592de2d83e\") " pod="openstack/openstackclient" Jan 12 13:22:40 crc kubenswrapper[4580]: I0112 13:22:40.315371 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d04e360b-50ce-4cb6-9168-b7592de2d83e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"d04e360b-50ce-4cb6-9168-b7592de2d83e\") " pod="openstack/openstackclient" Jan 12 13:22:40 crc kubenswrapper[4580]: I0112 13:22:40.315421 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/d04e360b-50ce-4cb6-9168-b7592de2d83e-openstack-config-secret\") pod \"openstackclient\" (UID: \"d04e360b-50ce-4cb6-9168-b7592de2d83e\") " pod="openstack/openstackclient" Jan 12 13:22:40 crc kubenswrapper[4580]: I0112 13:22:40.316817 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 12 13:22:40 crc kubenswrapper[4580]: I0112 13:22:40.416510 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d04e360b-50ce-4cb6-9168-b7592de2d83e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"d04e360b-50ce-4cb6-9168-b7592de2d83e\") " pod="openstack/openstackclient" Jan 12 13:22:40 crc kubenswrapper[4580]: I0112 13:22:40.416547 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d04e360b-50ce-4cb6-9168-b7592de2d83e-openstack-config-secret\") pod \"openstackclient\" (UID: \"d04e360b-50ce-4cb6-9168-b7592de2d83e\") " pod="openstack/openstackclient" Jan 12 13:22:40 crc kubenswrapper[4580]: I0112 13:22:40.416573 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d04e360b-50ce-4cb6-9168-b7592de2d83e-openstack-config\") pod \"openstackclient\" (UID: \"d04e360b-50ce-4cb6-9168-b7592de2d83e\") " pod="openstack/openstackclient" Jan 12 13:22:40 crc kubenswrapper[4580]: I0112 13:22:40.416622 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zvn4\" (UniqueName: \"kubernetes.io/projected/d04e360b-50ce-4cb6-9168-b7592de2d83e-kube-api-access-4zvn4\") pod \"openstackclient\" (UID: \"d04e360b-50ce-4cb6-9168-b7592de2d83e\") " pod="openstack/openstackclient" Jan 12 13:22:40 crc kubenswrapper[4580]: I0112 13:22:40.417503 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" 
(UniqueName: \"kubernetes.io/configmap/d04e360b-50ce-4cb6-9168-b7592de2d83e-openstack-config\") pod \"openstackclient\" (UID: \"d04e360b-50ce-4cb6-9168-b7592de2d83e\") " pod="openstack/openstackclient" Jan 12 13:22:40 crc kubenswrapper[4580]: I0112 13:22:40.426775 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d04e360b-50ce-4cb6-9168-b7592de2d83e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"d04e360b-50ce-4cb6-9168-b7592de2d83e\") " pod="openstack/openstackclient" Jan 12 13:22:40 crc kubenswrapper[4580]: I0112 13:22:40.437502 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d04e360b-50ce-4cb6-9168-b7592de2d83e-openstack-config-secret\") pod \"openstackclient\" (UID: \"d04e360b-50ce-4cb6-9168-b7592de2d83e\") " pod="openstack/openstackclient" Jan 12 13:22:40 crc kubenswrapper[4580]: I0112 13:22:40.449631 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zvn4\" (UniqueName: \"kubernetes.io/projected/d04e360b-50ce-4cb6-9168-b7592de2d83e-kube-api-access-4zvn4\") pod \"openstackclient\" (UID: \"d04e360b-50ce-4cb6-9168-b7592de2d83e\") " pod="openstack/openstackclient" Jan 12 13:22:40 crc kubenswrapper[4580]: I0112 13:22:40.646360 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 12 13:22:41 crc kubenswrapper[4580]: I0112 13:22:41.069444 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 12 13:22:41 crc kubenswrapper[4580]: W0112 13:22:41.072487 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd04e360b_50ce_4cb6_9168_b7592de2d83e.slice/crio-31901a5e071a43450179b81ce31771d094d0b46a381f9c5c560c4081e113c708 WatchSource:0}: Error finding container 31901a5e071a43450179b81ce31771d094d0b46a381f9c5c560c4081e113c708: Status 404 returned error can't find the container with id 31901a5e071a43450179b81ce31771d094d0b46a381f9c5c560c4081e113c708 Jan 12 13:22:41 crc kubenswrapper[4580]: I0112 13:22:41.542672 4580 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-54b765ff94-66rkz" podUID="11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Jan 12 13:22:41 crc kubenswrapper[4580]: I0112 13:22:41.916735 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"d04e360b-50ce-4cb6-9168-b7592de2d83e","Type":"ContainerStarted","Data":"31901a5e071a43450179b81ce31771d094d0b46a381f9c5c560c4081e113c708"} Jan 12 13:22:41 crc kubenswrapper[4580]: I0112 13:22:41.920237 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"063b2f8d-9ef8-4977-9071-ebb135ebc819","Type":"ContainerDied","Data":"eecdc9c1445138491de857060fd761d45cbda00f3da822ae991700301a1fc4fa"} Jan 12 13:22:41 crc kubenswrapper[4580]: I0112 13:22:41.920143 4580 generic.go:334] "Generic (PLEG): container finished" podID="063b2f8d-9ef8-4977-9071-ebb135ebc819" containerID="eecdc9c1445138491de857060fd761d45cbda00f3da822ae991700301a1fc4fa" exitCode=0 Jan 
12 13:22:42 crc kubenswrapper[4580]: I0112 13:22:42.247364 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 12 13:22:42 crc kubenswrapper[4580]: I0112 13:22:42.356084 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/063b2f8d-9ef8-4977-9071-ebb135ebc819-combined-ca-bundle\") pod \"063b2f8d-9ef8-4977-9071-ebb135ebc819\" (UID: \"063b2f8d-9ef8-4977-9071-ebb135ebc819\") " Jan 12 13:22:42 crc kubenswrapper[4580]: I0112 13:22:42.356218 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/063b2f8d-9ef8-4977-9071-ebb135ebc819-run-httpd\") pod \"063b2f8d-9ef8-4977-9071-ebb135ebc819\" (UID: \"063b2f8d-9ef8-4977-9071-ebb135ebc819\") " Jan 12 13:22:42 crc kubenswrapper[4580]: I0112 13:22:42.356292 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5rn2\" (UniqueName: \"kubernetes.io/projected/063b2f8d-9ef8-4977-9071-ebb135ebc819-kube-api-access-q5rn2\") pod \"063b2f8d-9ef8-4977-9071-ebb135ebc819\" (UID: \"063b2f8d-9ef8-4977-9071-ebb135ebc819\") " Jan 12 13:22:42 crc kubenswrapper[4580]: I0112 13:22:42.356532 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/063b2f8d-9ef8-4977-9071-ebb135ebc819-sg-core-conf-yaml\") pod \"063b2f8d-9ef8-4977-9071-ebb135ebc819\" (UID: \"063b2f8d-9ef8-4977-9071-ebb135ebc819\") " Jan 12 13:22:42 crc kubenswrapper[4580]: I0112 13:22:42.356731 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/063b2f8d-9ef8-4977-9071-ebb135ebc819-config-data\") pod \"063b2f8d-9ef8-4977-9071-ebb135ebc819\" (UID: \"063b2f8d-9ef8-4977-9071-ebb135ebc819\") " Jan 12 13:22:42 crc kubenswrapper[4580]: I0112 
13:22:42.357194 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/063b2f8d-9ef8-4977-9071-ebb135ebc819-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "063b2f8d-9ef8-4977-9071-ebb135ebc819" (UID: "063b2f8d-9ef8-4977-9071-ebb135ebc819"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 12 13:22:42 crc kubenswrapper[4580]: I0112 13:22:42.357317 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/063b2f8d-9ef8-4977-9071-ebb135ebc819-scripts\") pod \"063b2f8d-9ef8-4977-9071-ebb135ebc819\" (UID: \"063b2f8d-9ef8-4977-9071-ebb135ebc819\") " Jan 12 13:22:42 crc kubenswrapper[4580]: I0112 13:22:42.357374 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/063b2f8d-9ef8-4977-9071-ebb135ebc819-log-httpd\") pod \"063b2f8d-9ef8-4977-9071-ebb135ebc819\" (UID: \"063b2f8d-9ef8-4977-9071-ebb135ebc819\") " Jan 12 13:22:42 crc kubenswrapper[4580]: I0112 13:22:42.358153 4580 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/063b2f8d-9ef8-4977-9071-ebb135ebc819-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:42 crc kubenswrapper[4580]: I0112 13:22:42.358878 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/063b2f8d-9ef8-4977-9071-ebb135ebc819-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "063b2f8d-9ef8-4977-9071-ebb135ebc819" (UID: "063b2f8d-9ef8-4977-9071-ebb135ebc819"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 12 13:22:42 crc kubenswrapper[4580]: I0112 13:22:42.363076 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/063b2f8d-9ef8-4977-9071-ebb135ebc819-kube-api-access-q5rn2" (OuterVolumeSpecName: "kube-api-access-q5rn2") pod "063b2f8d-9ef8-4977-9071-ebb135ebc819" (UID: "063b2f8d-9ef8-4977-9071-ebb135ebc819"). InnerVolumeSpecName "kube-api-access-q5rn2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:22:42 crc kubenswrapper[4580]: I0112 13:22:42.366477 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/063b2f8d-9ef8-4977-9071-ebb135ebc819-scripts" (OuterVolumeSpecName: "scripts") pod "063b2f8d-9ef8-4977-9071-ebb135ebc819" (UID: "063b2f8d-9ef8-4977-9071-ebb135ebc819"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:22:42 crc kubenswrapper[4580]: I0112 13:22:42.382890 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/063b2f8d-9ef8-4977-9071-ebb135ebc819-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "063b2f8d-9ef8-4977-9071-ebb135ebc819" (UID: "063b2f8d-9ef8-4977-9071-ebb135ebc819"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:22:42 crc kubenswrapper[4580]: I0112 13:22:42.416688 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/063b2f8d-9ef8-4977-9071-ebb135ebc819-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "063b2f8d-9ef8-4977-9071-ebb135ebc819" (UID: "063b2f8d-9ef8-4977-9071-ebb135ebc819"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:22:42 crc kubenswrapper[4580]: I0112 13:22:42.432586 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/063b2f8d-9ef8-4977-9071-ebb135ebc819-config-data" (OuterVolumeSpecName: "config-data") pod "063b2f8d-9ef8-4977-9071-ebb135ebc819" (UID: "063b2f8d-9ef8-4977-9071-ebb135ebc819"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:22:42 crc kubenswrapper[4580]: I0112 13:22:42.460038 4580 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/063b2f8d-9ef8-4977-9071-ebb135ebc819-config-data\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:42 crc kubenswrapper[4580]: I0112 13:22:42.460072 4580 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/063b2f8d-9ef8-4977-9071-ebb135ebc819-scripts\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:42 crc kubenswrapper[4580]: I0112 13:22:42.460083 4580 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/063b2f8d-9ef8-4977-9071-ebb135ebc819-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:42 crc kubenswrapper[4580]: I0112 13:22:42.460094 4580 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/063b2f8d-9ef8-4977-9071-ebb135ebc819-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:42 crc kubenswrapper[4580]: I0112 13:22:42.460120 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5rn2\" (UniqueName: \"kubernetes.io/projected/063b2f8d-9ef8-4977-9071-ebb135ebc819-kube-api-access-q5rn2\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:42 crc kubenswrapper[4580]: I0112 13:22:42.460131 4580 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/063b2f8d-9ef8-4977-9071-ebb135ebc819-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:42 crc kubenswrapper[4580]: I0112 13:22:42.935173 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"063b2f8d-9ef8-4977-9071-ebb135ebc819","Type":"ContainerDied","Data":"9e721ccd675b2b96743868e560ef5696509e8af9effd3aad609d70109e771730"} Jan 12 13:22:42 crc kubenswrapper[4580]: I0112 13:22:42.935232 4580 scope.go:117] "RemoveContainer" containerID="f9260730ead159ba2e111b6438b922e355e0d082b854dbd8ab202f8cd763fe3f" Jan 12 13:22:42 crc kubenswrapper[4580]: I0112 13:22:42.935380 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 12 13:22:42 crc kubenswrapper[4580]: I0112 13:22:42.972928 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 12 13:22:42 crc kubenswrapper[4580]: I0112 13:22:42.983606 4580 scope.go:117] "RemoveContainer" containerID="70aacb67e208b2728685ad433fa21a479aa6120d1249d45c19a4b7919e46d7ec" Jan 12 13:22:43 crc kubenswrapper[4580]: I0112 13:22:43.014156 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 12 13:22:43 crc kubenswrapper[4580]: I0112 13:22:43.029526 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 12 13:22:43 crc kubenswrapper[4580]: E0112 13:22:43.030535 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="063b2f8d-9ef8-4977-9071-ebb135ebc819" containerName="ceilometer-notification-agent" Jan 12 13:22:43 crc kubenswrapper[4580]: I0112 13:22:43.030609 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="063b2f8d-9ef8-4977-9071-ebb135ebc819" containerName="ceilometer-notification-agent" Jan 12 13:22:43 crc kubenswrapper[4580]: E0112 13:22:43.031534 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="063b2f8d-9ef8-4977-9071-ebb135ebc819" 
containerName="ceilometer-central-agent" Jan 12 13:22:43 crc kubenswrapper[4580]: I0112 13:22:43.031588 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="063b2f8d-9ef8-4977-9071-ebb135ebc819" containerName="ceilometer-central-agent" Jan 12 13:22:43 crc kubenswrapper[4580]: E0112 13:22:43.031602 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="063b2f8d-9ef8-4977-9071-ebb135ebc819" containerName="sg-core" Jan 12 13:22:43 crc kubenswrapper[4580]: I0112 13:22:43.031610 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="063b2f8d-9ef8-4977-9071-ebb135ebc819" containerName="sg-core" Jan 12 13:22:43 crc kubenswrapper[4580]: E0112 13:22:43.031635 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="063b2f8d-9ef8-4977-9071-ebb135ebc819" containerName="proxy-httpd" Jan 12 13:22:43 crc kubenswrapper[4580]: I0112 13:22:43.031643 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="063b2f8d-9ef8-4977-9071-ebb135ebc819" containerName="proxy-httpd" Jan 12 13:22:43 crc kubenswrapper[4580]: I0112 13:22:43.031883 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="063b2f8d-9ef8-4977-9071-ebb135ebc819" containerName="ceilometer-central-agent" Jan 12 13:22:43 crc kubenswrapper[4580]: I0112 13:22:43.031901 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="063b2f8d-9ef8-4977-9071-ebb135ebc819" containerName="sg-core" Jan 12 13:22:43 crc kubenswrapper[4580]: I0112 13:22:43.031913 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="063b2f8d-9ef8-4977-9071-ebb135ebc819" containerName="proxy-httpd" Jan 12 13:22:43 crc kubenswrapper[4580]: I0112 13:22:43.031925 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="063b2f8d-9ef8-4977-9071-ebb135ebc819" containerName="ceilometer-notification-agent" Jan 12 13:22:43 crc kubenswrapper[4580]: I0112 13:22:43.040892 4580 scope.go:117] "RemoveContainer" 
containerID="eecdc9c1445138491de857060fd761d45cbda00f3da822ae991700301a1fc4fa" Jan 12 13:22:43 crc kubenswrapper[4580]: I0112 13:22:43.045680 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 12 13:22:43 crc kubenswrapper[4580]: I0112 13:22:43.045800 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 12 13:22:43 crc kubenswrapper[4580]: I0112 13:22:43.047971 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 12 13:22:43 crc kubenswrapper[4580]: I0112 13:22:43.048395 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 12 13:22:43 crc kubenswrapper[4580]: I0112 13:22:43.070572 4580 scope.go:117] "RemoveContainer" containerID="4c1d58120e1f3d3d4baf4906f7a26cd4683eeb4a261b7a9c8cf9987efb6b7006" Jan 12 13:22:43 crc kubenswrapper[4580]: I0112 13:22:43.073252 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0051f4d8-f22c-4646-bdc6-fdb92959dd30-run-httpd\") pod \"ceilometer-0\" (UID: \"0051f4d8-f22c-4646-bdc6-fdb92959dd30\") " pod="openstack/ceilometer-0" Jan 12 13:22:43 crc kubenswrapper[4580]: I0112 13:22:43.073291 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0051f4d8-f22c-4646-bdc6-fdb92959dd30-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0051f4d8-f22c-4646-bdc6-fdb92959dd30\") " pod="openstack/ceilometer-0" Jan 12 13:22:43 crc kubenswrapper[4580]: I0112 13:22:43.073329 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0051f4d8-f22c-4646-bdc6-fdb92959dd30-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0051f4d8-f22c-4646-bdc6-fdb92959dd30\") 
" pod="openstack/ceilometer-0" Jan 12 13:22:43 crc kubenswrapper[4580]: I0112 13:22:43.073483 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhwfj\" (UniqueName: \"kubernetes.io/projected/0051f4d8-f22c-4646-bdc6-fdb92959dd30-kube-api-access-fhwfj\") pod \"ceilometer-0\" (UID: \"0051f4d8-f22c-4646-bdc6-fdb92959dd30\") " pod="openstack/ceilometer-0" Jan 12 13:22:43 crc kubenswrapper[4580]: I0112 13:22:43.073529 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0051f4d8-f22c-4646-bdc6-fdb92959dd30-config-data\") pod \"ceilometer-0\" (UID: \"0051f4d8-f22c-4646-bdc6-fdb92959dd30\") " pod="openstack/ceilometer-0" Jan 12 13:22:43 crc kubenswrapper[4580]: I0112 13:22:43.073597 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0051f4d8-f22c-4646-bdc6-fdb92959dd30-log-httpd\") pod \"ceilometer-0\" (UID: \"0051f4d8-f22c-4646-bdc6-fdb92959dd30\") " pod="openstack/ceilometer-0" Jan 12 13:22:43 crc kubenswrapper[4580]: I0112 13:22:43.073648 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0051f4d8-f22c-4646-bdc6-fdb92959dd30-scripts\") pod \"ceilometer-0\" (UID: \"0051f4d8-f22c-4646-bdc6-fdb92959dd30\") " pod="openstack/ceilometer-0" Jan 12 13:22:43 crc kubenswrapper[4580]: I0112 13:22:43.174740 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhwfj\" (UniqueName: \"kubernetes.io/projected/0051f4d8-f22c-4646-bdc6-fdb92959dd30-kube-api-access-fhwfj\") pod \"ceilometer-0\" (UID: \"0051f4d8-f22c-4646-bdc6-fdb92959dd30\") " pod="openstack/ceilometer-0" Jan 12 13:22:43 crc kubenswrapper[4580]: I0112 13:22:43.174821 4580 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0051f4d8-f22c-4646-bdc6-fdb92959dd30-config-data\") pod \"ceilometer-0\" (UID: \"0051f4d8-f22c-4646-bdc6-fdb92959dd30\") " pod="openstack/ceilometer-0" Jan 12 13:22:43 crc kubenswrapper[4580]: I0112 13:22:43.174864 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0051f4d8-f22c-4646-bdc6-fdb92959dd30-log-httpd\") pod \"ceilometer-0\" (UID: \"0051f4d8-f22c-4646-bdc6-fdb92959dd30\") " pod="openstack/ceilometer-0" Jan 12 13:22:43 crc kubenswrapper[4580]: I0112 13:22:43.174948 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0051f4d8-f22c-4646-bdc6-fdb92959dd30-scripts\") pod \"ceilometer-0\" (UID: \"0051f4d8-f22c-4646-bdc6-fdb92959dd30\") " pod="openstack/ceilometer-0" Jan 12 13:22:43 crc kubenswrapper[4580]: I0112 13:22:43.175019 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0051f4d8-f22c-4646-bdc6-fdb92959dd30-run-httpd\") pod \"ceilometer-0\" (UID: \"0051f4d8-f22c-4646-bdc6-fdb92959dd30\") " pod="openstack/ceilometer-0" Jan 12 13:22:43 crc kubenswrapper[4580]: I0112 13:22:43.175035 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0051f4d8-f22c-4646-bdc6-fdb92959dd30-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0051f4d8-f22c-4646-bdc6-fdb92959dd30\") " pod="openstack/ceilometer-0" Jan 12 13:22:43 crc kubenswrapper[4580]: I0112 13:22:43.175054 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0051f4d8-f22c-4646-bdc6-fdb92959dd30-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0051f4d8-f22c-4646-bdc6-fdb92959dd30\") " 
pod="openstack/ceilometer-0" Jan 12 13:22:43 crc kubenswrapper[4580]: I0112 13:22:43.175367 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0051f4d8-f22c-4646-bdc6-fdb92959dd30-log-httpd\") pod \"ceilometer-0\" (UID: \"0051f4d8-f22c-4646-bdc6-fdb92959dd30\") " pod="openstack/ceilometer-0" Jan 12 13:22:43 crc kubenswrapper[4580]: I0112 13:22:43.176009 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0051f4d8-f22c-4646-bdc6-fdb92959dd30-run-httpd\") pod \"ceilometer-0\" (UID: \"0051f4d8-f22c-4646-bdc6-fdb92959dd30\") " pod="openstack/ceilometer-0" Jan 12 13:22:43 crc kubenswrapper[4580]: I0112 13:22:43.182347 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0051f4d8-f22c-4646-bdc6-fdb92959dd30-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0051f4d8-f22c-4646-bdc6-fdb92959dd30\") " pod="openstack/ceilometer-0" Jan 12 13:22:43 crc kubenswrapper[4580]: I0112 13:22:43.182643 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0051f4d8-f22c-4646-bdc6-fdb92959dd30-config-data\") pod \"ceilometer-0\" (UID: \"0051f4d8-f22c-4646-bdc6-fdb92959dd30\") " pod="openstack/ceilometer-0" Jan 12 13:22:43 crc kubenswrapper[4580]: I0112 13:22:43.183199 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0051f4d8-f22c-4646-bdc6-fdb92959dd30-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0051f4d8-f22c-4646-bdc6-fdb92959dd30\") " pod="openstack/ceilometer-0" Jan 12 13:22:43 crc kubenswrapper[4580]: I0112 13:22:43.192017 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0051f4d8-f22c-4646-bdc6-fdb92959dd30-scripts\") pod 
\"ceilometer-0\" (UID: \"0051f4d8-f22c-4646-bdc6-fdb92959dd30\") " pod="openstack/ceilometer-0" Jan 12 13:22:43 crc kubenswrapper[4580]: I0112 13:22:43.195398 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6f676d8c57-qq454" Jan 12 13:22:43 crc kubenswrapper[4580]: I0112 13:22:43.195973 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhwfj\" (UniqueName: \"kubernetes.io/projected/0051f4d8-f22c-4646-bdc6-fdb92959dd30-kube-api-access-fhwfj\") pod \"ceilometer-0\" (UID: \"0051f4d8-f22c-4646-bdc6-fdb92959dd30\") " pod="openstack/ceilometer-0" Jan 12 13:22:43 crc kubenswrapper[4580]: I0112 13:22:43.291633 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="063b2f8d-9ef8-4977-9071-ebb135ebc819" path="/var/lib/kubelet/pods/063b2f8d-9ef8-4977-9071-ebb135ebc819/volumes" Jan 12 13:22:43 crc kubenswrapper[4580]: I0112 13:22:43.361808 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 12 13:22:43 crc kubenswrapper[4580]: I0112 13:22:43.919604 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 12 13:22:43 crc kubenswrapper[4580]: I0112 13:22:43.949639 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0051f4d8-f22c-4646-bdc6-fdb92959dd30","Type":"ContainerStarted","Data":"451b0b1890977fbad707ad55deb5f35ea39ab86133ceda904df0f47e77f5b1bf"} Jan 12 13:22:44 crc kubenswrapper[4580]: I0112 13:22:44.959543 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0051f4d8-f22c-4646-bdc6-fdb92959dd30","Type":"ContainerStarted","Data":"ae11946bbf7c94fadeedaf4cb94da802b1d1c210423558d4198b2cb28b5d5a8b"} Jan 12 13:22:45 crc kubenswrapper[4580]: I0112 13:22:45.666059 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 12 13:22:45 crc kubenswrapper[4580]: I0112 13:22:45.968603 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0051f4d8-f22c-4646-bdc6-fdb92959dd30","Type":"ContainerStarted","Data":"7da2f678506a5faebab6aae86361336cf22196632ca817b293d7e55857ee6285"} Jan 12 13:22:46 crc kubenswrapper[4580]: I0112 13:22:46.981863 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0051f4d8-f22c-4646-bdc6-fdb92959dd30","Type":"ContainerStarted","Data":"25df6bf75f269f78d300dad7e5e17380d85c73b6a43af622e91e569fae09a627"} Jan 12 13:22:48 crc kubenswrapper[4580]: I0112 13:22:48.182677 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6f676d8c57-qq454" Jan 12 13:22:51 crc kubenswrapper[4580]: I0112 13:22:51.543534 4580 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-54b765ff94-66rkz" podUID="11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2" containerName="horizon" probeResult="failure" output="Get 
\"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Jan 12 13:22:51 crc kubenswrapper[4580]: I0112 13:22:51.544275 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-54b765ff94-66rkz" Jan 12 13:22:52 crc kubenswrapper[4580]: I0112 13:22:52.034683 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"d04e360b-50ce-4cb6-9168-b7592de2d83e","Type":"ContainerStarted","Data":"30a43c03313ae7f3ea98ccd9d9cec3732e50b722a8088556aca19fdff445bb93"} Jan 12 13:22:52 crc kubenswrapper[4580]: I0112 13:22:52.052679 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=1.889181342 podStartE2EDuration="12.052663603s" podCreationTimestamp="2026-01-12 13:22:40 +0000 UTC" firstStartedPulling="2026-01-12 13:22:41.074962503 +0000 UTC m=+960.119181193" lastFinishedPulling="2026-01-12 13:22:51.238444764 +0000 UTC m=+970.282663454" observedRunningTime="2026-01-12 13:22:52.04569198 +0000 UTC m=+971.089910670" watchObservedRunningTime="2026-01-12 13:22:52.052663603 +0000 UTC m=+971.096882292" Jan 12 13:22:53 crc kubenswrapper[4580]: I0112 13:22:53.046417 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0051f4d8-f22c-4646-bdc6-fdb92959dd30","Type":"ContainerStarted","Data":"7be8082c4883a502379a66e5bd7b54f1895e29828c555d247d495e92df65bd0e"} Jan 12 13:22:53 crc kubenswrapper[4580]: I0112 13:22:53.046755 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0051f4d8-f22c-4646-bdc6-fdb92959dd30" containerName="sg-core" containerID="cri-o://25df6bf75f269f78d300dad7e5e17380d85c73b6a43af622e91e569fae09a627" gracePeriod=30 Jan 12 13:22:53 crc kubenswrapper[4580]: I0112 13:22:53.046736 4580 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="0051f4d8-f22c-4646-bdc6-fdb92959dd30" containerName="ceilometer-central-agent" containerID="cri-o://ae11946bbf7c94fadeedaf4cb94da802b1d1c210423558d4198b2cb28b5d5a8b" gracePeriod=30 Jan 12 13:22:53 crc kubenswrapper[4580]: I0112 13:22:53.046755 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0051f4d8-f22c-4646-bdc6-fdb92959dd30" containerName="proxy-httpd" containerID="cri-o://7be8082c4883a502379a66e5bd7b54f1895e29828c555d247d495e92df65bd0e" gracePeriod=30 Jan 12 13:22:53 crc kubenswrapper[4580]: I0112 13:22:53.046805 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0051f4d8-f22c-4646-bdc6-fdb92959dd30" containerName="ceilometer-notification-agent" containerID="cri-o://7da2f678506a5faebab6aae86361336cf22196632ca817b293d7e55857ee6285" gracePeriod=30 Jan 12 13:22:53 crc kubenswrapper[4580]: I0112 13:22:53.078384 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.809441557 podStartE2EDuration="11.07836344s" podCreationTimestamp="2026-01-12 13:22:42 +0000 UTC" firstStartedPulling="2026-01-12 13:22:43.932930096 +0000 UTC m=+962.977148787" lastFinishedPulling="2026-01-12 13:22:52.20185198 +0000 UTC m=+971.246070670" observedRunningTime="2026-01-12 13:22:53.071237928 +0000 UTC m=+972.115456618" watchObservedRunningTime="2026-01-12 13:22:53.07836344 +0000 UTC m=+972.122582130" Jan 12 13:22:54 crc kubenswrapper[4580]: I0112 13:22:54.056076 4580 generic.go:334] "Generic (PLEG): container finished" podID="0051f4d8-f22c-4646-bdc6-fdb92959dd30" containerID="7be8082c4883a502379a66e5bd7b54f1895e29828c555d247d495e92df65bd0e" exitCode=0 Jan 12 13:22:54 crc kubenswrapper[4580]: I0112 13:22:54.056371 4580 generic.go:334] "Generic (PLEG): container finished" podID="0051f4d8-f22c-4646-bdc6-fdb92959dd30" 
containerID="25df6bf75f269f78d300dad7e5e17380d85c73b6a43af622e91e569fae09a627" exitCode=2 Jan 12 13:22:54 crc kubenswrapper[4580]: I0112 13:22:54.056381 4580 generic.go:334] "Generic (PLEG): container finished" podID="0051f4d8-f22c-4646-bdc6-fdb92959dd30" containerID="ae11946bbf7c94fadeedaf4cb94da802b1d1c210423558d4198b2cb28b5d5a8b" exitCode=0 Jan 12 13:22:54 crc kubenswrapper[4580]: I0112 13:22:54.056137 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0051f4d8-f22c-4646-bdc6-fdb92959dd30","Type":"ContainerDied","Data":"7be8082c4883a502379a66e5bd7b54f1895e29828c555d247d495e92df65bd0e"} Jan 12 13:22:54 crc kubenswrapper[4580]: I0112 13:22:54.056417 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0051f4d8-f22c-4646-bdc6-fdb92959dd30","Type":"ContainerDied","Data":"25df6bf75f269f78d300dad7e5e17380d85c73b6a43af622e91e569fae09a627"} Jan 12 13:22:54 crc kubenswrapper[4580]: I0112 13:22:54.056431 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0051f4d8-f22c-4646-bdc6-fdb92959dd30","Type":"ContainerDied","Data":"ae11946bbf7c94fadeedaf4cb94da802b1d1c210423558d4198b2cb28b5d5a8b"} Jan 12 13:22:54 crc kubenswrapper[4580]: I0112 13:22:54.462989 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 12 13:22:54 crc kubenswrapper[4580]: I0112 13:22:54.504606 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhwfj\" (UniqueName: \"kubernetes.io/projected/0051f4d8-f22c-4646-bdc6-fdb92959dd30-kube-api-access-fhwfj\") pod \"0051f4d8-f22c-4646-bdc6-fdb92959dd30\" (UID: \"0051f4d8-f22c-4646-bdc6-fdb92959dd30\") " Jan 12 13:22:54 crc kubenswrapper[4580]: I0112 13:22:54.504667 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0051f4d8-f22c-4646-bdc6-fdb92959dd30-config-data\") pod \"0051f4d8-f22c-4646-bdc6-fdb92959dd30\" (UID: \"0051f4d8-f22c-4646-bdc6-fdb92959dd30\") " Jan 12 13:22:54 crc kubenswrapper[4580]: I0112 13:22:54.504700 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0051f4d8-f22c-4646-bdc6-fdb92959dd30-sg-core-conf-yaml\") pod \"0051f4d8-f22c-4646-bdc6-fdb92959dd30\" (UID: \"0051f4d8-f22c-4646-bdc6-fdb92959dd30\") " Jan 12 13:22:54 crc kubenswrapper[4580]: I0112 13:22:54.504758 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0051f4d8-f22c-4646-bdc6-fdb92959dd30-combined-ca-bundle\") pod \"0051f4d8-f22c-4646-bdc6-fdb92959dd30\" (UID: \"0051f4d8-f22c-4646-bdc6-fdb92959dd30\") " Jan 12 13:22:54 crc kubenswrapper[4580]: I0112 13:22:54.504821 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0051f4d8-f22c-4646-bdc6-fdb92959dd30-log-httpd\") pod \"0051f4d8-f22c-4646-bdc6-fdb92959dd30\" (UID: \"0051f4d8-f22c-4646-bdc6-fdb92959dd30\") " Jan 12 13:22:54 crc kubenswrapper[4580]: I0112 13:22:54.504881 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/0051f4d8-f22c-4646-bdc6-fdb92959dd30-run-httpd\") pod \"0051f4d8-f22c-4646-bdc6-fdb92959dd30\" (UID: \"0051f4d8-f22c-4646-bdc6-fdb92959dd30\") " Jan 12 13:22:54 crc kubenswrapper[4580]: I0112 13:22:54.504904 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0051f4d8-f22c-4646-bdc6-fdb92959dd30-scripts\") pod \"0051f4d8-f22c-4646-bdc6-fdb92959dd30\" (UID: \"0051f4d8-f22c-4646-bdc6-fdb92959dd30\") " Jan 12 13:22:54 crc kubenswrapper[4580]: I0112 13:22:54.505466 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0051f4d8-f22c-4646-bdc6-fdb92959dd30-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0051f4d8-f22c-4646-bdc6-fdb92959dd30" (UID: "0051f4d8-f22c-4646-bdc6-fdb92959dd30"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 12 13:22:54 crc kubenswrapper[4580]: I0112 13:22:54.505645 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0051f4d8-f22c-4646-bdc6-fdb92959dd30-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0051f4d8-f22c-4646-bdc6-fdb92959dd30" (UID: "0051f4d8-f22c-4646-bdc6-fdb92959dd30"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 12 13:22:54 crc kubenswrapper[4580]: I0112 13:22:54.519638 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0051f4d8-f22c-4646-bdc6-fdb92959dd30-scripts" (OuterVolumeSpecName: "scripts") pod "0051f4d8-f22c-4646-bdc6-fdb92959dd30" (UID: "0051f4d8-f22c-4646-bdc6-fdb92959dd30"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:22:54 crc kubenswrapper[4580]: I0112 13:22:54.524327 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0051f4d8-f22c-4646-bdc6-fdb92959dd30-kube-api-access-fhwfj" (OuterVolumeSpecName: "kube-api-access-fhwfj") pod "0051f4d8-f22c-4646-bdc6-fdb92959dd30" (UID: "0051f4d8-f22c-4646-bdc6-fdb92959dd30"). InnerVolumeSpecName "kube-api-access-fhwfj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:22:54 crc kubenswrapper[4580]: I0112 13:22:54.529860 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0051f4d8-f22c-4646-bdc6-fdb92959dd30-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0051f4d8-f22c-4646-bdc6-fdb92959dd30" (UID: "0051f4d8-f22c-4646-bdc6-fdb92959dd30"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:22:54 crc kubenswrapper[4580]: I0112 13:22:54.587838 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0051f4d8-f22c-4646-bdc6-fdb92959dd30-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0051f4d8-f22c-4646-bdc6-fdb92959dd30" (UID: "0051f4d8-f22c-4646-bdc6-fdb92959dd30"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:22:54 crc kubenswrapper[4580]: I0112 13:22:54.607275 4580 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0051f4d8-f22c-4646-bdc6-fdb92959dd30-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:54 crc kubenswrapper[4580]: I0112 13:22:54.607306 4580 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0051f4d8-f22c-4646-bdc6-fdb92959dd30-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:54 crc kubenswrapper[4580]: I0112 13:22:54.607317 4580 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0051f4d8-f22c-4646-bdc6-fdb92959dd30-scripts\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:54 crc kubenswrapper[4580]: I0112 13:22:54.607329 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhwfj\" (UniqueName: \"kubernetes.io/projected/0051f4d8-f22c-4646-bdc6-fdb92959dd30-kube-api-access-fhwfj\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:54 crc kubenswrapper[4580]: I0112 13:22:54.607341 4580 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0051f4d8-f22c-4646-bdc6-fdb92959dd30-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:54 crc kubenswrapper[4580]: I0112 13:22:54.607349 4580 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0051f4d8-f22c-4646-bdc6-fdb92959dd30-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:54 crc kubenswrapper[4580]: I0112 13:22:54.617322 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0051f4d8-f22c-4646-bdc6-fdb92959dd30-config-data" (OuterVolumeSpecName: "config-data") pod "0051f4d8-f22c-4646-bdc6-fdb92959dd30" (UID: "0051f4d8-f22c-4646-bdc6-fdb92959dd30"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:22:54 crc kubenswrapper[4580]: I0112 13:22:54.709325 4580 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0051f4d8-f22c-4646-bdc6-fdb92959dd30-config-data\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:55 crc kubenswrapper[4580]: I0112 13:22:55.066277 4580 generic.go:334] "Generic (PLEG): container finished" podID="0051f4d8-f22c-4646-bdc6-fdb92959dd30" containerID="7da2f678506a5faebab6aae86361336cf22196632ca817b293d7e55857ee6285" exitCode=0 Jan 12 13:22:55 crc kubenswrapper[4580]: I0112 13:22:55.066341 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 12 13:22:55 crc kubenswrapper[4580]: I0112 13:22:55.067317 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0051f4d8-f22c-4646-bdc6-fdb92959dd30","Type":"ContainerDied","Data":"7da2f678506a5faebab6aae86361336cf22196632ca817b293d7e55857ee6285"} Jan 12 13:22:55 crc kubenswrapper[4580]: I0112 13:22:55.067868 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0051f4d8-f22c-4646-bdc6-fdb92959dd30","Type":"ContainerDied","Data":"451b0b1890977fbad707ad55deb5f35ea39ab86133ceda904df0f47e77f5b1bf"} Jan 12 13:22:55 crc kubenswrapper[4580]: I0112 13:22:55.067973 4580 scope.go:117] "RemoveContainer" containerID="7be8082c4883a502379a66e5bd7b54f1895e29828c555d247d495e92df65bd0e" Jan 12 13:22:55 crc kubenswrapper[4580]: I0112 13:22:55.092422 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 12 13:22:55 crc kubenswrapper[4580]: I0112 13:22:55.099160 4580 scope.go:117] "RemoveContainer" containerID="25df6bf75f269f78d300dad7e5e17380d85c73b6a43af622e91e569fae09a627" Jan 12 13:22:55 crc kubenswrapper[4580]: I0112 13:22:55.100212 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/ceilometer-0"] Jan 12 13:22:55 crc kubenswrapper[4580]: I0112 13:22:55.122338 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 12 13:22:55 crc kubenswrapper[4580]: E0112 13:22:55.122756 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0051f4d8-f22c-4646-bdc6-fdb92959dd30" containerName="sg-core" Jan 12 13:22:55 crc kubenswrapper[4580]: I0112 13:22:55.122776 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="0051f4d8-f22c-4646-bdc6-fdb92959dd30" containerName="sg-core" Jan 12 13:22:55 crc kubenswrapper[4580]: E0112 13:22:55.122790 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0051f4d8-f22c-4646-bdc6-fdb92959dd30" containerName="proxy-httpd" Jan 12 13:22:55 crc kubenswrapper[4580]: I0112 13:22:55.122797 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="0051f4d8-f22c-4646-bdc6-fdb92959dd30" containerName="proxy-httpd" Jan 12 13:22:55 crc kubenswrapper[4580]: E0112 13:22:55.122814 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0051f4d8-f22c-4646-bdc6-fdb92959dd30" containerName="ceilometer-notification-agent" Jan 12 13:22:55 crc kubenswrapper[4580]: I0112 13:22:55.122819 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="0051f4d8-f22c-4646-bdc6-fdb92959dd30" containerName="ceilometer-notification-agent" Jan 12 13:22:55 crc kubenswrapper[4580]: E0112 13:22:55.122848 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0051f4d8-f22c-4646-bdc6-fdb92959dd30" containerName="ceilometer-central-agent" Jan 12 13:22:55 crc kubenswrapper[4580]: I0112 13:22:55.122853 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="0051f4d8-f22c-4646-bdc6-fdb92959dd30" containerName="ceilometer-central-agent" Jan 12 13:22:55 crc kubenswrapper[4580]: I0112 13:22:55.123002 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="0051f4d8-f22c-4646-bdc6-fdb92959dd30" containerName="ceilometer-central-agent" 
Jan 12 13:22:55 crc kubenswrapper[4580]: I0112 13:22:55.123022 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="0051f4d8-f22c-4646-bdc6-fdb92959dd30" containerName="ceilometer-notification-agent" Jan 12 13:22:55 crc kubenswrapper[4580]: I0112 13:22:55.123035 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="0051f4d8-f22c-4646-bdc6-fdb92959dd30" containerName="sg-core" Jan 12 13:22:55 crc kubenswrapper[4580]: I0112 13:22:55.123047 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="0051f4d8-f22c-4646-bdc6-fdb92959dd30" containerName="proxy-httpd" Jan 12 13:22:55 crc kubenswrapper[4580]: I0112 13:22:55.124540 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 12 13:22:55 crc kubenswrapper[4580]: I0112 13:22:55.132612 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 12 13:22:55 crc kubenswrapper[4580]: I0112 13:22:55.132888 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 12 13:22:55 crc kubenswrapper[4580]: I0112 13:22:55.135592 4580 scope.go:117] "RemoveContainer" containerID="7da2f678506a5faebab6aae86361336cf22196632ca817b293d7e55857ee6285" Jan 12 13:22:55 crc kubenswrapper[4580]: I0112 13:22:55.139562 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 12 13:22:55 crc kubenswrapper[4580]: I0112 13:22:55.156838 4580 scope.go:117] "RemoveContainer" containerID="ae11946bbf7c94fadeedaf4cb94da802b1d1c210423558d4198b2cb28b5d5a8b" Jan 12 13:22:55 crc kubenswrapper[4580]: I0112 13:22:55.171329 4580 scope.go:117] "RemoveContainer" containerID="7be8082c4883a502379a66e5bd7b54f1895e29828c555d247d495e92df65bd0e" Jan 12 13:22:55 crc kubenswrapper[4580]: E0112 13:22:55.171630 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"7be8082c4883a502379a66e5bd7b54f1895e29828c555d247d495e92df65bd0e\": container with ID starting with 7be8082c4883a502379a66e5bd7b54f1895e29828c555d247d495e92df65bd0e not found: ID does not exist" containerID="7be8082c4883a502379a66e5bd7b54f1895e29828c555d247d495e92df65bd0e" Jan 12 13:22:55 crc kubenswrapper[4580]: I0112 13:22:55.171674 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7be8082c4883a502379a66e5bd7b54f1895e29828c555d247d495e92df65bd0e"} err="failed to get container status \"7be8082c4883a502379a66e5bd7b54f1895e29828c555d247d495e92df65bd0e\": rpc error: code = NotFound desc = could not find container \"7be8082c4883a502379a66e5bd7b54f1895e29828c555d247d495e92df65bd0e\": container with ID starting with 7be8082c4883a502379a66e5bd7b54f1895e29828c555d247d495e92df65bd0e not found: ID does not exist" Jan 12 13:22:55 crc kubenswrapper[4580]: I0112 13:22:55.171702 4580 scope.go:117] "RemoveContainer" containerID="25df6bf75f269f78d300dad7e5e17380d85c73b6a43af622e91e569fae09a627" Jan 12 13:22:55 crc kubenswrapper[4580]: E0112 13:22:55.172243 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25df6bf75f269f78d300dad7e5e17380d85c73b6a43af622e91e569fae09a627\": container with ID starting with 25df6bf75f269f78d300dad7e5e17380d85c73b6a43af622e91e569fae09a627 not found: ID does not exist" containerID="25df6bf75f269f78d300dad7e5e17380d85c73b6a43af622e91e569fae09a627" Jan 12 13:22:55 crc kubenswrapper[4580]: I0112 13:22:55.172287 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25df6bf75f269f78d300dad7e5e17380d85c73b6a43af622e91e569fae09a627"} err="failed to get container status \"25df6bf75f269f78d300dad7e5e17380d85c73b6a43af622e91e569fae09a627\": rpc error: code = NotFound desc = could not find container \"25df6bf75f269f78d300dad7e5e17380d85c73b6a43af622e91e569fae09a627\": container with ID 
starting with 25df6bf75f269f78d300dad7e5e17380d85c73b6a43af622e91e569fae09a627 not found: ID does not exist" Jan 12 13:22:55 crc kubenswrapper[4580]: I0112 13:22:55.172341 4580 scope.go:117] "RemoveContainer" containerID="7da2f678506a5faebab6aae86361336cf22196632ca817b293d7e55857ee6285" Jan 12 13:22:55 crc kubenswrapper[4580]: E0112 13:22:55.172815 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7da2f678506a5faebab6aae86361336cf22196632ca817b293d7e55857ee6285\": container with ID starting with 7da2f678506a5faebab6aae86361336cf22196632ca817b293d7e55857ee6285 not found: ID does not exist" containerID="7da2f678506a5faebab6aae86361336cf22196632ca817b293d7e55857ee6285" Jan 12 13:22:55 crc kubenswrapper[4580]: I0112 13:22:55.172850 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7da2f678506a5faebab6aae86361336cf22196632ca817b293d7e55857ee6285"} err="failed to get container status \"7da2f678506a5faebab6aae86361336cf22196632ca817b293d7e55857ee6285\": rpc error: code = NotFound desc = could not find container \"7da2f678506a5faebab6aae86361336cf22196632ca817b293d7e55857ee6285\": container with ID starting with 7da2f678506a5faebab6aae86361336cf22196632ca817b293d7e55857ee6285 not found: ID does not exist" Jan 12 13:22:55 crc kubenswrapper[4580]: I0112 13:22:55.172869 4580 scope.go:117] "RemoveContainer" containerID="ae11946bbf7c94fadeedaf4cb94da802b1d1c210423558d4198b2cb28b5d5a8b" Jan 12 13:22:55 crc kubenswrapper[4580]: E0112 13:22:55.174634 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae11946bbf7c94fadeedaf4cb94da802b1d1c210423558d4198b2cb28b5d5a8b\": container with ID starting with ae11946bbf7c94fadeedaf4cb94da802b1d1c210423558d4198b2cb28b5d5a8b not found: ID does not exist" containerID="ae11946bbf7c94fadeedaf4cb94da802b1d1c210423558d4198b2cb28b5d5a8b" Jan 12 
13:22:55 crc kubenswrapper[4580]: I0112 13:22:55.174666 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae11946bbf7c94fadeedaf4cb94da802b1d1c210423558d4198b2cb28b5d5a8b"} err="failed to get container status \"ae11946bbf7c94fadeedaf4cb94da802b1d1c210423558d4198b2cb28b5d5a8b\": rpc error: code = NotFound desc = could not find container \"ae11946bbf7c94fadeedaf4cb94da802b1d1c210423558d4198b2cb28b5d5a8b\": container with ID starting with ae11946bbf7c94fadeedaf4cb94da802b1d1c210423558d4198b2cb28b5d5a8b not found: ID does not exist" Jan 12 13:22:55 crc kubenswrapper[4580]: I0112 13:22:55.218300 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46db66c5-efa1-4357-8eef-140731c74ef0-log-httpd\") pod \"ceilometer-0\" (UID: \"46db66c5-efa1-4357-8eef-140731c74ef0\") " pod="openstack/ceilometer-0" Jan 12 13:22:55 crc kubenswrapper[4580]: I0112 13:22:55.218350 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46db66c5-efa1-4357-8eef-140731c74ef0-config-data\") pod \"ceilometer-0\" (UID: \"46db66c5-efa1-4357-8eef-140731c74ef0\") " pod="openstack/ceilometer-0" Jan 12 13:22:55 crc kubenswrapper[4580]: I0112 13:22:55.218373 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46db66c5-efa1-4357-8eef-140731c74ef0-run-httpd\") pod \"ceilometer-0\" (UID: \"46db66c5-efa1-4357-8eef-140731c74ef0\") " pod="openstack/ceilometer-0" Jan 12 13:22:55 crc kubenswrapper[4580]: I0112 13:22:55.218441 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/46db66c5-efa1-4357-8eef-140731c74ef0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"46db66c5-efa1-4357-8eef-140731c74ef0\") " pod="openstack/ceilometer-0" Jan 12 13:22:55 crc kubenswrapper[4580]: I0112 13:22:55.218481 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46db66c5-efa1-4357-8eef-140731c74ef0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"46db66c5-efa1-4357-8eef-140731c74ef0\") " pod="openstack/ceilometer-0" Jan 12 13:22:55 crc kubenswrapper[4580]: I0112 13:22:55.218502 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46db66c5-efa1-4357-8eef-140731c74ef0-scripts\") pod \"ceilometer-0\" (UID: \"46db66c5-efa1-4357-8eef-140731c74ef0\") " pod="openstack/ceilometer-0" Jan 12 13:22:55 crc kubenswrapper[4580]: I0112 13:22:55.218552 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vl47g\" (UniqueName: \"kubernetes.io/projected/46db66c5-efa1-4357-8eef-140731c74ef0-kube-api-access-vl47g\") pod \"ceilometer-0\" (UID: \"46db66c5-efa1-4357-8eef-140731c74ef0\") " pod="openstack/ceilometer-0" Jan 12 13:22:55 crc kubenswrapper[4580]: I0112 13:22:55.290866 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0051f4d8-f22c-4646-bdc6-fdb92959dd30" path="/var/lib/kubelet/pods/0051f4d8-f22c-4646-bdc6-fdb92959dd30/volumes" Jan 12 13:22:55 crc kubenswrapper[4580]: I0112 13:22:55.320074 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46db66c5-efa1-4357-8eef-140731c74ef0-log-httpd\") pod \"ceilometer-0\" (UID: \"46db66c5-efa1-4357-8eef-140731c74ef0\") " pod="openstack/ceilometer-0" Jan 12 13:22:55 crc kubenswrapper[4580]: I0112 13:22:55.320145 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/46db66c5-efa1-4357-8eef-140731c74ef0-config-data\") pod \"ceilometer-0\" (UID: \"46db66c5-efa1-4357-8eef-140731c74ef0\") " pod="openstack/ceilometer-0" Jan 12 13:22:55 crc kubenswrapper[4580]: I0112 13:22:55.320168 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46db66c5-efa1-4357-8eef-140731c74ef0-run-httpd\") pod \"ceilometer-0\" (UID: \"46db66c5-efa1-4357-8eef-140731c74ef0\") " pod="openstack/ceilometer-0" Jan 12 13:22:55 crc kubenswrapper[4580]: I0112 13:22:55.320203 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/46db66c5-efa1-4357-8eef-140731c74ef0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"46db66c5-efa1-4357-8eef-140731c74ef0\") " pod="openstack/ceilometer-0" Jan 12 13:22:55 crc kubenswrapper[4580]: I0112 13:22:55.320246 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46db66c5-efa1-4357-8eef-140731c74ef0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"46db66c5-efa1-4357-8eef-140731c74ef0\") " pod="openstack/ceilometer-0" Jan 12 13:22:55 crc kubenswrapper[4580]: I0112 13:22:55.320268 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46db66c5-efa1-4357-8eef-140731c74ef0-scripts\") pod \"ceilometer-0\" (UID: \"46db66c5-efa1-4357-8eef-140731c74ef0\") " pod="openstack/ceilometer-0" Jan 12 13:22:55 crc kubenswrapper[4580]: I0112 13:22:55.320315 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vl47g\" (UniqueName: \"kubernetes.io/projected/46db66c5-efa1-4357-8eef-140731c74ef0-kube-api-access-vl47g\") pod \"ceilometer-0\" (UID: \"46db66c5-efa1-4357-8eef-140731c74ef0\") " pod="openstack/ceilometer-0" Jan 12 13:22:55 crc kubenswrapper[4580]: 
I0112 13:22:55.320516 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46db66c5-efa1-4357-8eef-140731c74ef0-log-httpd\") pod \"ceilometer-0\" (UID: \"46db66c5-efa1-4357-8eef-140731c74ef0\") " pod="openstack/ceilometer-0" Jan 12 13:22:55 crc kubenswrapper[4580]: I0112 13:22:55.320591 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46db66c5-efa1-4357-8eef-140731c74ef0-run-httpd\") pod \"ceilometer-0\" (UID: \"46db66c5-efa1-4357-8eef-140731c74ef0\") " pod="openstack/ceilometer-0" Jan 12 13:22:55 crc kubenswrapper[4580]: I0112 13:22:55.323737 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46db66c5-efa1-4357-8eef-140731c74ef0-scripts\") pod \"ceilometer-0\" (UID: \"46db66c5-efa1-4357-8eef-140731c74ef0\") " pod="openstack/ceilometer-0" Jan 12 13:22:55 crc kubenswrapper[4580]: I0112 13:22:55.323902 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/46db66c5-efa1-4357-8eef-140731c74ef0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"46db66c5-efa1-4357-8eef-140731c74ef0\") " pod="openstack/ceilometer-0" Jan 12 13:22:55 crc kubenswrapper[4580]: I0112 13:22:55.324078 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46db66c5-efa1-4357-8eef-140731c74ef0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"46db66c5-efa1-4357-8eef-140731c74ef0\") " pod="openstack/ceilometer-0" Jan 12 13:22:55 crc kubenswrapper[4580]: I0112 13:22:55.326848 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46db66c5-efa1-4357-8eef-140731c74ef0-config-data\") pod \"ceilometer-0\" (UID: \"46db66c5-efa1-4357-8eef-140731c74ef0\") " 
pod="openstack/ceilometer-0" Jan 12 13:22:55 crc kubenswrapper[4580]: I0112 13:22:55.333703 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vl47g\" (UniqueName: \"kubernetes.io/projected/46db66c5-efa1-4357-8eef-140731c74ef0-kube-api-access-vl47g\") pod \"ceilometer-0\" (UID: \"46db66c5-efa1-4357-8eef-140731c74ef0\") " pod="openstack/ceilometer-0" Jan 12 13:22:55 crc kubenswrapper[4580]: I0112 13:22:55.443675 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 12 13:22:55 crc kubenswrapper[4580]: I0112 13:22:55.866079 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 12 13:22:55 crc kubenswrapper[4580]: I0112 13:22:55.991760 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 12 13:22:56 crc kubenswrapper[4580]: I0112 13:22:56.064260 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-54b765ff94-66rkz" Jan 12 13:22:56 crc kubenswrapper[4580]: I0112 13:22:56.073667 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46db66c5-efa1-4357-8eef-140731c74ef0","Type":"ContainerStarted","Data":"0d250e569de9c18740539a522d8e0c0ebe9c218b288c101dd789a89a7f43ce1b"} Jan 12 13:22:56 crc kubenswrapper[4580]: I0112 13:22:56.076923 4580 generic.go:334] "Generic (PLEG): container finished" podID="11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2" containerID="0617e22c043b8c6ed1bfde2bc79332362cf2b73628cab3f3c05f7003eb945ac7" exitCode=137 Jan 12 13:22:56 crc kubenswrapper[4580]: I0112 13:22:56.076969 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-54b765ff94-66rkz" Jan 12 13:22:56 crc kubenswrapper[4580]: I0112 13:22:56.076975 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-54b765ff94-66rkz" event={"ID":"11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2","Type":"ContainerDied","Data":"0617e22c043b8c6ed1bfde2bc79332362cf2b73628cab3f3c05f7003eb945ac7"} Jan 12 13:22:56 crc kubenswrapper[4580]: I0112 13:22:56.077021 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-54b765ff94-66rkz" event={"ID":"11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2","Type":"ContainerDied","Data":"97250f27354293c75ca6f9dd47aebe6786d6324879bc680e419e57c35f618d5f"} Jan 12 13:22:56 crc kubenswrapper[4580]: I0112 13:22:56.077054 4580 scope.go:117] "RemoveContainer" containerID="59195d2de454fd2603098fbc1fcc86559032303c06579ca821d3e24b04357260" Jan 12 13:22:56 crc kubenswrapper[4580]: I0112 13:22:56.219063 4580 scope.go:117] "RemoveContainer" containerID="0617e22c043b8c6ed1bfde2bc79332362cf2b73628cab3f3c05f7003eb945ac7" Jan 12 13:22:56 crc kubenswrapper[4580]: I0112 13:22:56.234792 4580 scope.go:117] "RemoveContainer" containerID="59195d2de454fd2603098fbc1fcc86559032303c06579ca821d3e24b04357260" Jan 12 13:22:56 crc kubenswrapper[4580]: E0112 13:22:56.235158 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59195d2de454fd2603098fbc1fcc86559032303c06579ca821d3e24b04357260\": container with ID starting with 59195d2de454fd2603098fbc1fcc86559032303c06579ca821d3e24b04357260 not found: ID does not exist" containerID="59195d2de454fd2603098fbc1fcc86559032303c06579ca821d3e24b04357260" Jan 12 13:22:56 crc kubenswrapper[4580]: I0112 13:22:56.235205 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59195d2de454fd2603098fbc1fcc86559032303c06579ca821d3e24b04357260"} err="failed to get container status 
\"59195d2de454fd2603098fbc1fcc86559032303c06579ca821d3e24b04357260\": rpc error: code = NotFound desc = could not find container \"59195d2de454fd2603098fbc1fcc86559032303c06579ca821d3e24b04357260\": container with ID starting with 59195d2de454fd2603098fbc1fcc86559032303c06579ca821d3e24b04357260 not found: ID does not exist" Jan 12 13:22:56 crc kubenswrapper[4580]: I0112 13:22:56.235236 4580 scope.go:117] "RemoveContainer" containerID="0617e22c043b8c6ed1bfde2bc79332362cf2b73628cab3f3c05f7003eb945ac7" Jan 12 13:22:56 crc kubenswrapper[4580]: E0112 13:22:56.235670 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0617e22c043b8c6ed1bfde2bc79332362cf2b73628cab3f3c05f7003eb945ac7\": container with ID starting with 0617e22c043b8c6ed1bfde2bc79332362cf2b73628cab3f3c05f7003eb945ac7 not found: ID does not exist" containerID="0617e22c043b8c6ed1bfde2bc79332362cf2b73628cab3f3c05f7003eb945ac7" Jan 12 13:22:56 crc kubenswrapper[4580]: I0112 13:22:56.235695 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0617e22c043b8c6ed1bfde2bc79332362cf2b73628cab3f3c05f7003eb945ac7"} err="failed to get container status \"0617e22c043b8c6ed1bfde2bc79332362cf2b73628cab3f3c05f7003eb945ac7\": rpc error: code = NotFound desc = could not find container \"0617e22c043b8c6ed1bfde2bc79332362cf2b73628cab3f3c05f7003eb945ac7\": container with ID starting with 0617e22c043b8c6ed1bfde2bc79332362cf2b73628cab3f3c05f7003eb945ac7 not found: ID does not exist" Jan 12 13:22:56 crc kubenswrapper[4580]: I0112 13:22:56.239141 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2-horizon-tls-certs\") pod \"11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2\" (UID: \"11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2\") " Jan 12 13:22:56 crc kubenswrapper[4580]: I0112 13:22:56.239396 4580 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2-scripts\") pod \"11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2\" (UID: \"11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2\") " Jan 12 13:22:56 crc kubenswrapper[4580]: I0112 13:22:56.239425 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2-logs\") pod \"11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2\" (UID: \"11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2\") " Jan 12 13:22:56 crc kubenswrapper[4580]: I0112 13:22:56.239481 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2-combined-ca-bundle\") pod \"11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2\" (UID: \"11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2\") " Jan 12 13:22:56 crc kubenswrapper[4580]: I0112 13:22:56.239509 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2-config-data\") pod \"11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2\" (UID: \"11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2\") " Jan 12 13:22:56 crc kubenswrapper[4580]: I0112 13:22:56.239536 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r95ks\" (UniqueName: \"kubernetes.io/projected/11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2-kube-api-access-r95ks\") pod \"11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2\" (UID: \"11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2\") " Jan 12 13:22:56 crc kubenswrapper[4580]: I0112 13:22:56.239598 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2-horizon-secret-key\") pod \"11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2\" (UID: 
\"11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2\") " Jan 12 13:22:56 crc kubenswrapper[4580]: I0112 13:22:56.240066 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2-logs" (OuterVolumeSpecName: "logs") pod "11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2" (UID: "11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 12 13:22:56 crc kubenswrapper[4580]: I0112 13:22:56.246633 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2" (UID: "11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:22:56 crc kubenswrapper[4580]: I0112 13:22:56.247792 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2-kube-api-access-r95ks" (OuterVolumeSpecName: "kube-api-access-r95ks") pod "11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2" (UID: "11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2"). InnerVolumeSpecName "kube-api-access-r95ks". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:22:56 crc kubenswrapper[4580]: I0112 13:22:56.263343 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2-config-data" (OuterVolumeSpecName: "config-data") pod "11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2" (UID: "11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:22:56 crc kubenswrapper[4580]: I0112 13:22:56.263613 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2" (UID: "11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:22:56 crc kubenswrapper[4580]: I0112 13:22:56.269325 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2-scripts" (OuterVolumeSpecName: "scripts") pod "11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2" (UID: "11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:22:56 crc kubenswrapper[4580]: I0112 13:22:56.286263 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2" (UID: "11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:22:56 crc kubenswrapper[4580]: I0112 13:22:56.342830 4580 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:56 crc kubenswrapper[4580]: I0112 13:22:56.342865 4580 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2-scripts\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:56 crc kubenswrapper[4580]: I0112 13:22:56.342878 4580 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2-logs\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:56 crc kubenswrapper[4580]: I0112 13:22:56.342889 4580 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:56 crc kubenswrapper[4580]: I0112 13:22:56.342899 4580 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2-config-data\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:56 crc kubenswrapper[4580]: I0112 13:22:56.343062 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r95ks\" (UniqueName: \"kubernetes.io/projected/11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2-kube-api-access-r95ks\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:56 crc kubenswrapper[4580]: I0112 13:22:56.343088 4580 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 12 13:22:56 crc kubenswrapper[4580]: I0112 13:22:56.409764 4580 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-54b765ff94-66rkz"] Jan 12 13:22:56 crc kubenswrapper[4580]: I0112 13:22:56.414981 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-54b765ff94-66rkz"] Jan 12 13:22:57 crc kubenswrapper[4580]: I0112 13:22:57.090380 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46db66c5-efa1-4357-8eef-140731c74ef0","Type":"ContainerStarted","Data":"57ea9ed11203cacf9fa0e74a2c5d80b1b5b27117741156772fd5944990dc9228"} Jan 12 13:22:57 crc kubenswrapper[4580]: I0112 13:22:57.293309 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2" path="/var/lib/kubelet/pods/11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2/volumes" Jan 12 13:22:58 crc kubenswrapper[4580]: I0112 13:22:58.102681 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46db66c5-efa1-4357-8eef-140731c74ef0","Type":"ContainerStarted","Data":"1b465deb8da74e9b81391c1f4bcd5a4fe422536792e4a43895709ae492760fb3"} Jan 12 13:22:59 crc kubenswrapper[4580]: I0112 13:22:59.117039 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46db66c5-efa1-4357-8eef-140731c74ef0","Type":"ContainerStarted","Data":"8f200f4be45c5d4e2b4459601e0c9fcc1e6a7dd734801a5dea0f9bd58207550d"} Jan 12 13:22:59 crc kubenswrapper[4580]: I0112 13:22:59.209911 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 12 13:22:59 crc kubenswrapper[4580]: I0112 13:22:59.210238 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="cc934c6e-8cf2-42f0-97bc-22537818cd51" containerName="glance-log" containerID="cri-o://70996c4ed66f5d822afbb8fed848e56cafccf9db4b9f11f10012d1b839514142" gracePeriod=30 Jan 12 13:22:59 crc kubenswrapper[4580]: I0112 13:22:59.210378 4580 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="cc934c6e-8cf2-42f0-97bc-22537818cd51" containerName="glance-httpd" containerID="cri-o://0f596d2828d4a2a29bbe82b8268fc643ef226eb31379dfad3f23cc1afc8ee7c2" gracePeriod=30 Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.070638 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.071155 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="52d5c384-ad20-413e-a8ec-183b114d9901" containerName="glance-log" containerID="cri-o://cff7f773004b433c088ab6ec51a18d40ad182002bce30bba829a98778a38dbc0" gracePeriod=30 Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.071263 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="52d5c384-ad20-413e-a8ec-183b114d9901" containerName="glance-httpd" containerID="cri-o://370cbcfa3cf6f08989ecad0ac709dd897c929c19ccc1caa2ca1c4adc558a05dd" gracePeriod=30 Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.128510 4580 generic.go:334] "Generic (PLEG): container finished" podID="cc934c6e-8cf2-42f0-97bc-22537818cd51" containerID="70996c4ed66f5d822afbb8fed848e56cafccf9db4b9f11f10012d1b839514142" exitCode=143 Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.128805 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cc934c6e-8cf2-42f0-97bc-22537818cd51","Type":"ContainerDied","Data":"70996c4ed66f5d822afbb8fed848e56cafccf9db4b9f11f10012d1b839514142"} Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.131182 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"46db66c5-efa1-4357-8eef-140731c74ef0","Type":"ContainerStarted","Data":"e47d6116f8379eb71288fae98c15f6d8d53bc670c1225cf2291495203ce717b2"} Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.131319 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="46db66c5-efa1-4357-8eef-140731c74ef0" containerName="ceilometer-central-agent" containerID="cri-o://57ea9ed11203cacf9fa0e74a2c5d80b1b5b27117741156772fd5944990dc9228" gracePeriod=30 Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.131567 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.131765 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="46db66c5-efa1-4357-8eef-140731c74ef0" containerName="proxy-httpd" containerID="cri-o://e47d6116f8379eb71288fae98c15f6d8d53bc670c1225cf2291495203ce717b2" gracePeriod=30 Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.131815 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="46db66c5-efa1-4357-8eef-140731c74ef0" containerName="sg-core" containerID="cri-o://8f200f4be45c5d4e2b4459601e0c9fcc1e6a7dd734801a5dea0f9bd58207550d" gracePeriod=30 Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.131849 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="46db66c5-efa1-4357-8eef-140731c74ef0" containerName="ceilometer-notification-agent" containerID="cri-o://1b465deb8da74e9b81391c1f4bcd5a4fe422536792e4a43895709ae492760fb3" gracePeriod=30 Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.153487 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.709105685 podStartE2EDuration="5.153475126s" podCreationTimestamp="2026-01-12 13:22:55 +0000 UTC" 
firstStartedPulling="2026-01-12 13:22:55.867489244 +0000 UTC m=+974.911707934" lastFinishedPulling="2026-01-12 13:22:59.311858684 +0000 UTC m=+978.356077375" observedRunningTime="2026-01-12 13:23:00.151249241 +0000 UTC m=+979.195467931" watchObservedRunningTime="2026-01-12 13:23:00.153475126 +0000 UTC m=+979.197693815" Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.219896 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-6wrc4"] Jan 12 13:23:00 crc kubenswrapper[4580]: E0112 13:23:00.220224 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2" containerName="horizon" Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.220240 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2" containerName="horizon" Jan 12 13:23:00 crc kubenswrapper[4580]: E0112 13:23:00.220266 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2" containerName="horizon-log" Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.220272 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2" containerName="horizon-log" Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.220413 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2" containerName="horizon" Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.220434 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="11466c76-bd4e-4b1f-b4f5-74da7e2a9ca2" containerName="horizon-log" Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.220946 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-6wrc4" Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.225824 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp9pb\" (UniqueName: \"kubernetes.io/projected/feda8d3e-afea-4925-b154-4f13512cae76-kube-api-access-mp9pb\") pod \"nova-api-db-create-6wrc4\" (UID: \"feda8d3e-afea-4925-b154-4f13512cae76\") " pod="openstack/nova-api-db-create-6wrc4" Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.225975 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/feda8d3e-afea-4925-b154-4f13512cae76-operator-scripts\") pod \"nova-api-db-create-6wrc4\" (UID: \"feda8d3e-afea-4925-b154-4f13512cae76\") " pod="openstack/nova-api-db-create-6wrc4" Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.238938 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-6wrc4"] Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.324380 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-kzsjf"] Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.325657 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-kzsjf" Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.327493 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2d26c30-1a0e-4ab2-af0a-84ca0f4280f0-operator-scripts\") pod \"nova-cell0-db-create-kzsjf\" (UID: \"b2d26c30-1a0e-4ab2-af0a-84ca0f4280f0\") " pod="openstack/nova-cell0-db-create-kzsjf" Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.327549 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mp9pb\" (UniqueName: \"kubernetes.io/projected/feda8d3e-afea-4925-b154-4f13512cae76-kube-api-access-mp9pb\") pod \"nova-api-db-create-6wrc4\" (UID: \"feda8d3e-afea-4925-b154-4f13512cae76\") " pod="openstack/nova-api-db-create-6wrc4" Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.327598 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58sxt\" (UniqueName: \"kubernetes.io/projected/b2d26c30-1a0e-4ab2-af0a-84ca0f4280f0-kube-api-access-58sxt\") pod \"nova-cell0-db-create-kzsjf\" (UID: \"b2d26c30-1a0e-4ab2-af0a-84ca0f4280f0\") " pod="openstack/nova-cell0-db-create-kzsjf" Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.327641 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/feda8d3e-afea-4925-b154-4f13512cae76-operator-scripts\") pod \"nova-api-db-create-6wrc4\" (UID: \"feda8d3e-afea-4925-b154-4f13512cae76\") " pod="openstack/nova-api-db-create-6wrc4" Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.328470 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/feda8d3e-afea-4925-b154-4f13512cae76-operator-scripts\") pod \"nova-api-db-create-6wrc4\" (UID: 
\"feda8d3e-afea-4925-b154-4f13512cae76\") " pod="openstack/nova-api-db-create-6wrc4" Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.334640 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-1d39-account-create-update-dd8gr"] Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.335499 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-1d39-account-create-update-dd8gr" Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.337464 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.359935 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mp9pb\" (UniqueName: \"kubernetes.io/projected/feda8d3e-afea-4925-b154-4f13512cae76-kube-api-access-mp9pb\") pod \"nova-api-db-create-6wrc4\" (UID: \"feda8d3e-afea-4925-b154-4f13512cae76\") " pod="openstack/nova-api-db-create-6wrc4" Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.380307 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-kzsjf"] Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.390158 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-1d39-account-create-update-dd8gr"] Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.425546 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-gdmnx"] Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.426733 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-gdmnx" Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.429610 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2d26c30-1a0e-4ab2-af0a-84ca0f4280f0-operator-scripts\") pod \"nova-cell0-db-create-kzsjf\" (UID: \"b2d26c30-1a0e-4ab2-af0a-84ca0f4280f0\") " pod="openstack/nova-cell0-db-create-kzsjf" Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.429783 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58sxt\" (UniqueName: \"kubernetes.io/projected/b2d26c30-1a0e-4ab2-af0a-84ca0f4280f0-kube-api-access-58sxt\") pod \"nova-cell0-db-create-kzsjf\" (UID: \"b2d26c30-1a0e-4ab2-af0a-84ca0f4280f0\") " pod="openstack/nova-cell0-db-create-kzsjf" Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.432137 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-gdmnx"] Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.432514 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2d26c30-1a0e-4ab2-af0a-84ca0f4280f0-operator-scripts\") pod \"nova-cell0-db-create-kzsjf\" (UID: \"b2d26c30-1a0e-4ab2-af0a-84ca0f4280f0\") " pod="openstack/nova-cell0-db-create-kzsjf" Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.449608 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58sxt\" (UniqueName: \"kubernetes.io/projected/b2d26c30-1a0e-4ab2-af0a-84ca0f4280f0-kube-api-access-58sxt\") pod \"nova-cell0-db-create-kzsjf\" (UID: \"b2d26c30-1a0e-4ab2-af0a-84ca0f4280f0\") " pod="openstack/nova-cell0-db-create-kzsjf" Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.532197 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-2b89-account-create-update-844zz"] Jan 12 13:23:00 crc 
kubenswrapper[4580]: I0112 13:23:00.533662 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qblmh\" (UniqueName: \"kubernetes.io/projected/caab40d7-1666-4a92-ba59-a9241ce91657-kube-api-access-qblmh\") pod \"nova-cell1-db-create-gdmnx\" (UID: \"caab40d7-1666-4a92-ba59-a9241ce91657\") " pod="openstack/nova-cell1-db-create-gdmnx" Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.533731 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pc8vx\" (UniqueName: \"kubernetes.io/projected/a5680228-c2e3-437e-b28e-bfd73513f81d-kube-api-access-pc8vx\") pod \"nova-api-1d39-account-create-update-dd8gr\" (UID: \"a5680228-c2e3-437e-b28e-bfd73513f81d\") " pod="openstack/nova-api-1d39-account-create-update-dd8gr" Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.533785 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5680228-c2e3-437e-b28e-bfd73513f81d-operator-scripts\") pod \"nova-api-1d39-account-create-update-dd8gr\" (UID: \"a5680228-c2e3-437e-b28e-bfd73513f81d\") " pod="openstack/nova-api-1d39-account-create-update-dd8gr" Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.533893 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/caab40d7-1666-4a92-ba59-a9241ce91657-operator-scripts\") pod \"nova-cell1-db-create-gdmnx\" (UID: \"caab40d7-1666-4a92-ba59-a9241ce91657\") " pod="openstack/nova-cell1-db-create-gdmnx" Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.534769 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-2b89-account-create-update-844zz" Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.539934 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.541293 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-6wrc4" Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.544373 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-2b89-account-create-update-844zz"] Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.636049 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c9631d5-0489-43f6-a144-5869fc41f5ba-operator-scripts\") pod \"nova-cell0-2b89-account-create-update-844zz\" (UID: \"8c9631d5-0489-43f6-a144-5869fc41f5ba\") " pod="openstack/nova-cell0-2b89-account-create-update-844zz" Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.636166 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/caab40d7-1666-4a92-ba59-a9241ce91657-operator-scripts\") pod \"nova-cell1-db-create-gdmnx\" (UID: \"caab40d7-1666-4a92-ba59-a9241ce91657\") " pod="openstack/nova-cell1-db-create-gdmnx" Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.636258 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qblmh\" (UniqueName: \"kubernetes.io/projected/caab40d7-1666-4a92-ba59-a9241ce91657-kube-api-access-qblmh\") pod \"nova-cell1-db-create-gdmnx\" (UID: \"caab40d7-1666-4a92-ba59-a9241ce91657\") " pod="openstack/nova-cell1-db-create-gdmnx" Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.636281 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-rhldx\" (UniqueName: \"kubernetes.io/projected/8c9631d5-0489-43f6-a144-5869fc41f5ba-kube-api-access-rhldx\") pod \"nova-cell0-2b89-account-create-update-844zz\" (UID: \"8c9631d5-0489-43f6-a144-5869fc41f5ba\") " pod="openstack/nova-cell0-2b89-account-create-update-844zz" Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.636319 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pc8vx\" (UniqueName: \"kubernetes.io/projected/a5680228-c2e3-437e-b28e-bfd73513f81d-kube-api-access-pc8vx\") pod \"nova-api-1d39-account-create-update-dd8gr\" (UID: \"a5680228-c2e3-437e-b28e-bfd73513f81d\") " pod="openstack/nova-api-1d39-account-create-update-dd8gr" Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.636353 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5680228-c2e3-437e-b28e-bfd73513f81d-operator-scripts\") pod \"nova-api-1d39-account-create-update-dd8gr\" (UID: \"a5680228-c2e3-437e-b28e-bfd73513f81d\") " pod="openstack/nova-api-1d39-account-create-update-dd8gr" Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.639044 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5680228-c2e3-437e-b28e-bfd73513f81d-operator-scripts\") pod \"nova-api-1d39-account-create-update-dd8gr\" (UID: \"a5680228-c2e3-437e-b28e-bfd73513f81d\") " pod="openstack/nova-api-1d39-account-create-update-dd8gr" Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.639068 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/caab40d7-1666-4a92-ba59-a9241ce91657-operator-scripts\") pod \"nova-cell1-db-create-gdmnx\" (UID: \"caab40d7-1666-4a92-ba59-a9241ce91657\") " pod="openstack/nova-cell1-db-create-gdmnx" Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.644837 
4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-kzsjf" Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.653661 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pc8vx\" (UniqueName: \"kubernetes.io/projected/a5680228-c2e3-437e-b28e-bfd73513f81d-kube-api-access-pc8vx\") pod \"nova-api-1d39-account-create-update-dd8gr\" (UID: \"a5680228-c2e3-437e-b28e-bfd73513f81d\") " pod="openstack/nova-api-1d39-account-create-update-dd8gr" Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.653749 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qblmh\" (UniqueName: \"kubernetes.io/projected/caab40d7-1666-4a92-ba59-a9241ce91657-kube-api-access-qblmh\") pod \"nova-cell1-db-create-gdmnx\" (UID: \"caab40d7-1666-4a92-ba59-a9241ce91657\") " pod="openstack/nova-cell1-db-create-gdmnx" Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.726851 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-1d39-account-create-update-dd8gr" Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.732398 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-c19d-account-create-update-bz5j6"] Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.739227 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-c19d-account-create-update-bz5j6" Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.748918 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.776085 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-578f8\" (UniqueName: \"kubernetes.io/projected/ddd386d1-8b6e-4834-8ea5-74fe561d15f6-kube-api-access-578f8\") pod \"nova-cell1-c19d-account-create-update-bz5j6\" (UID: \"ddd386d1-8b6e-4834-8ea5-74fe561d15f6\") " pod="openstack/nova-cell1-c19d-account-create-update-bz5j6" Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.776220 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ddd386d1-8b6e-4834-8ea5-74fe561d15f6-operator-scripts\") pod \"nova-cell1-c19d-account-create-update-bz5j6\" (UID: \"ddd386d1-8b6e-4834-8ea5-74fe561d15f6\") " pod="openstack/nova-cell1-c19d-account-create-update-bz5j6" Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.776280 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhldx\" (UniqueName: \"kubernetes.io/projected/8c9631d5-0489-43f6-a144-5869fc41f5ba-kube-api-access-rhldx\") pod \"nova-cell0-2b89-account-create-update-844zz\" (UID: \"8c9631d5-0489-43f6-a144-5869fc41f5ba\") " pod="openstack/nova-cell0-2b89-account-create-update-844zz" Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.776357 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c9631d5-0489-43f6-a144-5869fc41f5ba-operator-scripts\") pod \"nova-cell0-2b89-account-create-update-844zz\" (UID: \"8c9631d5-0489-43f6-a144-5869fc41f5ba\") " pod="openstack/nova-cell0-2b89-account-create-update-844zz" 
Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.779430 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c9631d5-0489-43f6-a144-5869fc41f5ba-operator-scripts\") pod \"nova-cell0-2b89-account-create-update-844zz\" (UID: \"8c9631d5-0489-43f6-a144-5869fc41f5ba\") " pod="openstack/nova-cell0-2b89-account-create-update-844zz" Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.803361 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhldx\" (UniqueName: \"kubernetes.io/projected/8c9631d5-0489-43f6-a144-5869fc41f5ba-kube-api-access-rhldx\") pod \"nova-cell0-2b89-account-create-update-844zz\" (UID: \"8c9631d5-0489-43f6-a144-5869fc41f5ba\") " pod="openstack/nova-cell0-2b89-account-create-update-844zz" Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.813662 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-gdmnx" Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.817950 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-c19d-account-create-update-bz5j6"] Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.862093 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-2b89-account-create-update-844zz" Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.880709 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-578f8\" (UniqueName: \"kubernetes.io/projected/ddd386d1-8b6e-4834-8ea5-74fe561d15f6-kube-api-access-578f8\") pod \"nova-cell1-c19d-account-create-update-bz5j6\" (UID: \"ddd386d1-8b6e-4834-8ea5-74fe561d15f6\") " pod="openstack/nova-cell1-c19d-account-create-update-bz5j6" Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.880758 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ddd386d1-8b6e-4834-8ea5-74fe561d15f6-operator-scripts\") pod \"nova-cell1-c19d-account-create-update-bz5j6\" (UID: \"ddd386d1-8b6e-4834-8ea5-74fe561d15f6\") " pod="openstack/nova-cell1-c19d-account-create-update-bz5j6" Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.881528 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ddd386d1-8b6e-4834-8ea5-74fe561d15f6-operator-scripts\") pod \"nova-cell1-c19d-account-create-update-bz5j6\" (UID: \"ddd386d1-8b6e-4834-8ea5-74fe561d15f6\") " pod="openstack/nova-cell1-c19d-account-create-update-bz5j6" Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.912389 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-578f8\" (UniqueName: \"kubernetes.io/projected/ddd386d1-8b6e-4834-8ea5-74fe561d15f6-kube-api-access-578f8\") pod \"nova-cell1-c19d-account-create-update-bz5j6\" (UID: \"ddd386d1-8b6e-4834-8ea5-74fe561d15f6\") " pod="openstack/nova-cell1-c19d-account-create-update-bz5j6" Jan 12 13:23:00 crc kubenswrapper[4580]: I0112 13:23:00.974626 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-6wrc4"] Jan 12 13:23:01 crc kubenswrapper[4580]: I0112 
13:23:01.074998 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-c19d-account-create-update-bz5j6" Jan 12 13:23:01 crc kubenswrapper[4580]: I0112 13:23:01.111685 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-kzsjf"] Jan 12 13:23:01 crc kubenswrapper[4580]: I0112 13:23:01.146454 4580 generic.go:334] "Generic (PLEG): container finished" podID="46db66c5-efa1-4357-8eef-140731c74ef0" containerID="e47d6116f8379eb71288fae98c15f6d8d53bc670c1225cf2291495203ce717b2" exitCode=0 Jan 12 13:23:01 crc kubenswrapper[4580]: I0112 13:23:01.146482 4580 generic.go:334] "Generic (PLEG): container finished" podID="46db66c5-efa1-4357-8eef-140731c74ef0" containerID="8f200f4be45c5d4e2b4459601e0c9fcc1e6a7dd734801a5dea0f9bd58207550d" exitCode=2 Jan 12 13:23:01 crc kubenswrapper[4580]: I0112 13:23:01.146491 4580 generic.go:334] "Generic (PLEG): container finished" podID="46db66c5-efa1-4357-8eef-140731c74ef0" containerID="1b465deb8da74e9b81391c1f4bcd5a4fe422536792e4a43895709ae492760fb3" exitCode=0 Jan 12 13:23:01 crc kubenswrapper[4580]: I0112 13:23:01.146496 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46db66c5-efa1-4357-8eef-140731c74ef0","Type":"ContainerDied","Data":"e47d6116f8379eb71288fae98c15f6d8d53bc670c1225cf2291495203ce717b2"} Jan 12 13:23:01 crc kubenswrapper[4580]: I0112 13:23:01.146971 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46db66c5-efa1-4357-8eef-140731c74ef0","Type":"ContainerDied","Data":"8f200f4be45c5d4e2b4459601e0c9fcc1e6a7dd734801a5dea0f9bd58207550d"} Jan 12 13:23:01 crc kubenswrapper[4580]: I0112 13:23:01.146986 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46db66c5-efa1-4357-8eef-140731c74ef0","Type":"ContainerDied","Data":"1b465deb8da74e9b81391c1f4bcd5a4fe422536792e4a43895709ae492760fb3"} Jan 12 13:23:01 crc 
kubenswrapper[4580]: I0112 13:23:01.148836 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-6wrc4" event={"ID":"feda8d3e-afea-4925-b154-4f13512cae76","Type":"ContainerStarted","Data":"40fe63e8f11caa72804aa1e01ef044870c4f667937d11f5f5466013f348537d5"} Jan 12 13:23:01 crc kubenswrapper[4580]: I0112 13:23:01.152596 4580 generic.go:334] "Generic (PLEG): container finished" podID="52d5c384-ad20-413e-a8ec-183b114d9901" containerID="cff7f773004b433c088ab6ec51a18d40ad182002bce30bba829a98778a38dbc0" exitCode=143 Jan 12 13:23:01 crc kubenswrapper[4580]: I0112 13:23:01.152651 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"52d5c384-ad20-413e-a8ec-183b114d9901","Type":"ContainerDied","Data":"cff7f773004b433c088ab6ec51a18d40ad182002bce30bba829a98778a38dbc0"} Jan 12 13:23:01 crc kubenswrapper[4580]: I0112 13:23:01.156556 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-kzsjf" event={"ID":"b2d26c30-1a0e-4ab2-af0a-84ca0f4280f0","Type":"ContainerStarted","Data":"a7a6e1487f1b8a4757d3b50c64b2d120589975a4569706083a68fcfb110792ac"} Jan 12 13:23:01 crc kubenswrapper[4580]: I0112 13:23:01.278886 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-1d39-account-create-update-dd8gr"] Jan 12 13:23:01 crc kubenswrapper[4580]: W0112 13:23:01.303159 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda5680228_c2e3_437e_b28e_bfd73513f81d.slice/crio-07413b40b4ac99f413fc59876e19f318404efe56d72eb04cfa92fc16a309c435 WatchSource:0}: Error finding container 07413b40b4ac99f413fc59876e19f318404efe56d72eb04cfa92fc16a309c435: Status 404 returned error can't find the container with id 07413b40b4ac99f413fc59876e19f318404efe56d72eb04cfa92fc16a309c435 Jan 12 13:23:01 crc kubenswrapper[4580]: I0112 13:23:01.351502 4580 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/nova-cell1-db-create-gdmnx"] Jan 12 13:23:01 crc kubenswrapper[4580]: I0112 13:23:01.452367 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-2b89-account-create-update-844zz"] Jan 12 13:23:01 crc kubenswrapper[4580]: W0112 13:23:01.495711 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c9631d5_0489_43f6_a144_5869fc41f5ba.slice/crio-b54fa94e471e4af7e668512ec71947a642b4d27aeaa423a4d1ec9c004c3109a6 WatchSource:0}: Error finding container b54fa94e471e4af7e668512ec71947a642b4d27aeaa423a4d1ec9c004c3109a6: Status 404 returned error can't find the container with id b54fa94e471e4af7e668512ec71947a642b4d27aeaa423a4d1ec9c004c3109a6 Jan 12 13:23:01 crc kubenswrapper[4580]: I0112 13:23:01.569163 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-c19d-account-create-update-bz5j6"] Jan 12 13:23:01 crc kubenswrapper[4580]: W0112 13:23:01.573981 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podddd386d1_8b6e_4834_8ea5_74fe561d15f6.slice/crio-3c4689b7e2986d24e51aa49ca3946d70c2dc8eb1c7cf21f2c22512e5a427b79e WatchSource:0}: Error finding container 3c4689b7e2986d24e51aa49ca3946d70c2dc8eb1c7cf21f2c22512e5a427b79e: Status 404 returned error can't find the container with id 3c4689b7e2986d24e51aa49ca3946d70c2dc8eb1c7cf21f2c22512e5a427b79e Jan 12 13:23:02 crc kubenswrapper[4580]: I0112 13:23:02.166277 4580 generic.go:334] "Generic (PLEG): container finished" podID="a5680228-c2e3-437e-b28e-bfd73513f81d" containerID="4931a3c4f183aa6a5a0f6b8e7d73b33bf6141d964e9d6549727ab6c7e4305eb6" exitCode=0 Jan 12 13:23:02 crc kubenswrapper[4580]: I0112 13:23:02.166362 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1d39-account-create-update-dd8gr" 
event={"ID":"a5680228-c2e3-437e-b28e-bfd73513f81d","Type":"ContainerDied","Data":"4931a3c4f183aa6a5a0f6b8e7d73b33bf6141d964e9d6549727ab6c7e4305eb6"} Jan 12 13:23:02 crc kubenswrapper[4580]: I0112 13:23:02.166405 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1d39-account-create-update-dd8gr" event={"ID":"a5680228-c2e3-437e-b28e-bfd73513f81d","Type":"ContainerStarted","Data":"07413b40b4ac99f413fc59876e19f318404efe56d72eb04cfa92fc16a309c435"} Jan 12 13:23:02 crc kubenswrapper[4580]: I0112 13:23:02.168041 4580 generic.go:334] "Generic (PLEG): container finished" podID="8c9631d5-0489-43f6-a144-5869fc41f5ba" containerID="6cd367a2bba62fcda97b5ecd7833b2792aa06318c524cb6b475d741ecf88d8c5" exitCode=0 Jan 12 13:23:02 crc kubenswrapper[4580]: I0112 13:23:02.168090 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-2b89-account-create-update-844zz" event={"ID":"8c9631d5-0489-43f6-a144-5869fc41f5ba","Type":"ContainerDied","Data":"6cd367a2bba62fcda97b5ecd7833b2792aa06318c524cb6b475d741ecf88d8c5"} Jan 12 13:23:02 crc kubenswrapper[4580]: I0112 13:23:02.168182 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-2b89-account-create-update-844zz" event={"ID":"8c9631d5-0489-43f6-a144-5869fc41f5ba","Type":"ContainerStarted","Data":"b54fa94e471e4af7e668512ec71947a642b4d27aeaa423a4d1ec9c004c3109a6"} Jan 12 13:23:02 crc kubenswrapper[4580]: I0112 13:23:02.169787 4580 generic.go:334] "Generic (PLEG): container finished" podID="ddd386d1-8b6e-4834-8ea5-74fe561d15f6" containerID="73929f388313091b972f0847349f3c28d7a920fe1b43673b6339a6cb627a0381" exitCode=0 Jan 12 13:23:02 crc kubenswrapper[4580]: I0112 13:23:02.169835 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-c19d-account-create-update-bz5j6" event={"ID":"ddd386d1-8b6e-4834-8ea5-74fe561d15f6","Type":"ContainerDied","Data":"73929f388313091b972f0847349f3c28d7a920fe1b43673b6339a6cb627a0381"} Jan 12 13:23:02 crc 
kubenswrapper[4580]: I0112 13:23:02.169851 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-c19d-account-create-update-bz5j6" event={"ID":"ddd386d1-8b6e-4834-8ea5-74fe561d15f6","Type":"ContainerStarted","Data":"3c4689b7e2986d24e51aa49ca3946d70c2dc8eb1c7cf21f2c22512e5a427b79e"} Jan 12 13:23:02 crc kubenswrapper[4580]: I0112 13:23:02.171536 4580 generic.go:334] "Generic (PLEG): container finished" podID="feda8d3e-afea-4925-b154-4f13512cae76" containerID="e93de2ac4109ce79d0eaa282e59f321021614bb16ff78fa90131ebb239adaa13" exitCode=0 Jan 12 13:23:02 crc kubenswrapper[4580]: I0112 13:23:02.171577 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-6wrc4" event={"ID":"feda8d3e-afea-4925-b154-4f13512cae76","Type":"ContainerDied","Data":"e93de2ac4109ce79d0eaa282e59f321021614bb16ff78fa90131ebb239adaa13"} Jan 12 13:23:02 crc kubenswrapper[4580]: I0112 13:23:02.172800 4580 generic.go:334] "Generic (PLEG): container finished" podID="caab40d7-1666-4a92-ba59-a9241ce91657" containerID="80dca3ad23ccce72018728df001c974e2e963f0c4f239e9ea80cc3d64a4924ea" exitCode=0 Jan 12 13:23:02 crc kubenswrapper[4580]: I0112 13:23:02.172839 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-gdmnx" event={"ID":"caab40d7-1666-4a92-ba59-a9241ce91657","Type":"ContainerDied","Data":"80dca3ad23ccce72018728df001c974e2e963f0c4f239e9ea80cc3d64a4924ea"} Jan 12 13:23:02 crc kubenswrapper[4580]: I0112 13:23:02.172853 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-gdmnx" event={"ID":"caab40d7-1666-4a92-ba59-a9241ce91657","Type":"ContainerStarted","Data":"70d76c088c59b6774e978527134d35087c37a9a73d7a29063a9475899927c41c"} Jan 12 13:23:02 crc kubenswrapper[4580]: I0112 13:23:02.174229 4580 generic.go:334] "Generic (PLEG): container finished" podID="b2d26c30-1a0e-4ab2-af0a-84ca0f4280f0" containerID="0a05c2691e6263f292d152c6e0d296d0255c4976b965402d3451e6e505924f0a" 
exitCode=0 Jan 12 13:23:02 crc kubenswrapper[4580]: I0112 13:23:02.174276 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-kzsjf" event={"ID":"b2d26c30-1a0e-4ab2-af0a-84ca0f4280f0","Type":"ContainerDied","Data":"0a05c2691e6263f292d152c6e0d296d0255c4976b965402d3451e6e505924f0a"} Jan 12 13:23:02 crc kubenswrapper[4580]: I0112 13:23:02.846543 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 12 13:23:02 crc kubenswrapper[4580]: I0112 13:23:02.928231 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"cc934c6e-8cf2-42f0-97bc-22537818cd51\" (UID: \"cc934c6e-8cf2-42f0-97bc-22537818cd51\") " Jan 12 13:23:02 crc kubenswrapper[4580]: I0112 13:23:02.928564 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc934c6e-8cf2-42f0-97bc-22537818cd51-config-data\") pod \"cc934c6e-8cf2-42f0-97bc-22537818cd51\" (UID: \"cc934c6e-8cf2-42f0-97bc-22537818cd51\") " Jan 12 13:23:02 crc kubenswrapper[4580]: I0112 13:23:02.928608 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc934c6e-8cf2-42f0-97bc-22537818cd51-public-tls-certs\") pod \"cc934c6e-8cf2-42f0-97bc-22537818cd51\" (UID: \"cc934c6e-8cf2-42f0-97bc-22537818cd51\") " Jan 12 13:23:02 crc kubenswrapper[4580]: I0112 13:23:02.928649 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc934c6e-8cf2-42f0-97bc-22537818cd51-logs\") pod \"cc934c6e-8cf2-42f0-97bc-22537818cd51\" (UID: \"cc934c6e-8cf2-42f0-97bc-22537818cd51\") " Jan 12 13:23:02 crc kubenswrapper[4580]: I0112 13:23:02.929127 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/cc934c6e-8cf2-42f0-97bc-22537818cd51-logs" (OuterVolumeSpecName: "logs") pod "cc934c6e-8cf2-42f0-97bc-22537818cd51" (UID: "cc934c6e-8cf2-42f0-97bc-22537818cd51"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 12 13:23:02 crc kubenswrapper[4580]: I0112 13:23:02.929210 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc934c6e-8cf2-42f0-97bc-22537818cd51-combined-ca-bundle\") pod \"cc934c6e-8cf2-42f0-97bc-22537818cd51\" (UID: \"cc934c6e-8cf2-42f0-97bc-22537818cd51\") " Jan 12 13:23:02 crc kubenswrapper[4580]: I0112 13:23:02.929233 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cc934c6e-8cf2-42f0-97bc-22537818cd51-httpd-run\") pod \"cc934c6e-8cf2-42f0-97bc-22537818cd51\" (UID: \"cc934c6e-8cf2-42f0-97bc-22537818cd51\") " Jan 12 13:23:02 crc kubenswrapper[4580]: I0112 13:23:02.929526 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc934c6e-8cf2-42f0-97bc-22537818cd51-scripts\") pod \"cc934c6e-8cf2-42f0-97bc-22537818cd51\" (UID: \"cc934c6e-8cf2-42f0-97bc-22537818cd51\") " Jan 12 13:23:02 crc kubenswrapper[4580]: I0112 13:23:02.929560 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8j6z\" (UniqueName: \"kubernetes.io/projected/cc934c6e-8cf2-42f0-97bc-22537818cd51-kube-api-access-f8j6z\") pod \"cc934c6e-8cf2-42f0-97bc-22537818cd51\" (UID: \"cc934c6e-8cf2-42f0-97bc-22537818cd51\") " Jan 12 13:23:02 crc kubenswrapper[4580]: I0112 13:23:02.929947 4580 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc934c6e-8cf2-42f0-97bc-22537818cd51-logs\") on node \"crc\" DevicePath \"\"" Jan 12 13:23:02 crc kubenswrapper[4580]: I0112 13:23:02.930752 
4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc934c6e-8cf2-42f0-97bc-22537818cd51-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "cc934c6e-8cf2-42f0-97bc-22537818cd51" (UID: "cc934c6e-8cf2-42f0-97bc-22537818cd51"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 12 13:23:02 crc kubenswrapper[4580]: I0112 13:23:02.934449 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc934c6e-8cf2-42f0-97bc-22537818cd51-kube-api-access-f8j6z" (OuterVolumeSpecName: "kube-api-access-f8j6z") pod "cc934c6e-8cf2-42f0-97bc-22537818cd51" (UID: "cc934c6e-8cf2-42f0-97bc-22537818cd51"). InnerVolumeSpecName "kube-api-access-f8j6z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:23:02 crc kubenswrapper[4580]: I0112 13:23:02.946804 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc934c6e-8cf2-42f0-97bc-22537818cd51-scripts" (OuterVolumeSpecName: "scripts") pod "cc934c6e-8cf2-42f0-97bc-22537818cd51" (UID: "cc934c6e-8cf2-42f0-97bc-22537818cd51"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:23:02 crc kubenswrapper[4580]: I0112 13:23:02.947748 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "cc934c6e-8cf2-42f0-97bc-22537818cd51" (UID: "cc934c6e-8cf2-42f0-97bc-22537818cd51"). InnerVolumeSpecName "local-storage11-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 12 13:23:02 crc kubenswrapper[4580]: I0112 13:23:02.970898 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc934c6e-8cf2-42f0-97bc-22537818cd51-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "cc934c6e-8cf2-42f0-97bc-22537818cd51" (UID: "cc934c6e-8cf2-42f0-97bc-22537818cd51"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:23:02 crc kubenswrapper[4580]: I0112 13:23:02.971316 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc934c6e-8cf2-42f0-97bc-22537818cd51-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cc934c6e-8cf2-42f0-97bc-22537818cd51" (UID: "cc934c6e-8cf2-42f0-97bc-22537818cd51"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:23:02 crc kubenswrapper[4580]: I0112 13:23:02.972208 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc934c6e-8cf2-42f0-97bc-22537818cd51-config-data" (OuterVolumeSpecName: "config-data") pod "cc934c6e-8cf2-42f0-97bc-22537818cd51" (UID: "cc934c6e-8cf2-42f0-97bc-22537818cd51"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.031643 4580 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc934c6e-8cf2-42f0-97bc-22537818cd51-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.031670 4580 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cc934c6e-8cf2-42f0-97bc-22537818cd51-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.031680 4580 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc934c6e-8cf2-42f0-97bc-22537818cd51-scripts\") on node \"crc\" DevicePath \"\"" Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.031690 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8j6z\" (UniqueName: \"kubernetes.io/projected/cc934c6e-8cf2-42f0-97bc-22537818cd51-kube-api-access-f8j6z\") on node \"crc\" DevicePath \"\"" Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.031737 4580 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.031747 4580 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc934c6e-8cf2-42f0-97bc-22537818cd51-config-data\") on node \"crc\" DevicePath \"\"" Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.031755 4580 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc934c6e-8cf2-42f0-97bc-22537818cd51-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.048601 4580 operation_generator.go:917] 
UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.134088 4580 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.185434 4580 generic.go:334] "Generic (PLEG): container finished" podID="cc934c6e-8cf2-42f0-97bc-22537818cd51" containerID="0f596d2828d4a2a29bbe82b8268fc643ef226eb31379dfad3f23cc1afc8ee7c2" exitCode=0 Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.185506 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.185556 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cc934c6e-8cf2-42f0-97bc-22537818cd51","Type":"ContainerDied","Data":"0f596d2828d4a2a29bbe82b8268fc643ef226eb31379dfad3f23cc1afc8ee7c2"} Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.185593 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cc934c6e-8cf2-42f0-97bc-22537818cd51","Type":"ContainerDied","Data":"870e04df1cb487814ababdaae2f1c1f099c23263c78c9dc6f73c5191466248e5"} Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.185616 4580 scope.go:117] "RemoveContainer" containerID="0f596d2828d4a2a29bbe82b8268fc643ef226eb31379dfad3f23cc1afc8ee7c2" Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.216280 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.225428 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 12 13:23:03 crc 
kubenswrapper[4580]: I0112 13:23:03.239430 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 12 13:23:03 crc kubenswrapper[4580]: E0112 13:23:03.239779 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc934c6e-8cf2-42f0-97bc-22537818cd51" containerName="glance-log" Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.239793 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc934c6e-8cf2-42f0-97bc-22537818cd51" containerName="glance-log" Jan 12 13:23:03 crc kubenswrapper[4580]: E0112 13:23:03.239821 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc934c6e-8cf2-42f0-97bc-22537818cd51" containerName="glance-httpd" Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.239828 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc934c6e-8cf2-42f0-97bc-22537818cd51" containerName="glance-httpd" Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.239991 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc934c6e-8cf2-42f0-97bc-22537818cd51" containerName="glance-httpd" Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.240018 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc934c6e-8cf2-42f0-97bc-22537818cd51" containerName="glance-log" Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.240825 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.252116 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.253067 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.262852 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.286596 4580 scope.go:117] "RemoveContainer" containerID="70996c4ed66f5d822afbb8fed848e56cafccf9db4b9f11f10012d1b839514142" Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.302279 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc934c6e-8cf2-42f0-97bc-22537818cd51" path="/var/lib/kubelet/pods/cc934c6e-8cf2-42f0-97bc-22537818cd51/volumes" Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.325320 4580 scope.go:117] "RemoveContainer" containerID="0f596d2828d4a2a29bbe82b8268fc643ef226eb31379dfad3f23cc1afc8ee7c2" Jan 12 13:23:03 crc kubenswrapper[4580]: E0112 13:23:03.326507 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f596d2828d4a2a29bbe82b8268fc643ef226eb31379dfad3f23cc1afc8ee7c2\": container with ID starting with 0f596d2828d4a2a29bbe82b8268fc643ef226eb31379dfad3f23cc1afc8ee7c2 not found: ID does not exist" containerID="0f596d2828d4a2a29bbe82b8268fc643ef226eb31379dfad3f23cc1afc8ee7c2" Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.326542 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f596d2828d4a2a29bbe82b8268fc643ef226eb31379dfad3f23cc1afc8ee7c2"} err="failed to get container status 
\"0f596d2828d4a2a29bbe82b8268fc643ef226eb31379dfad3f23cc1afc8ee7c2\": rpc error: code = NotFound desc = could not find container \"0f596d2828d4a2a29bbe82b8268fc643ef226eb31379dfad3f23cc1afc8ee7c2\": container with ID starting with 0f596d2828d4a2a29bbe82b8268fc643ef226eb31379dfad3f23cc1afc8ee7c2 not found: ID does not exist" Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.326565 4580 scope.go:117] "RemoveContainer" containerID="70996c4ed66f5d822afbb8fed848e56cafccf9db4b9f11f10012d1b839514142" Jan 12 13:23:03 crc kubenswrapper[4580]: E0112 13:23:03.326834 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70996c4ed66f5d822afbb8fed848e56cafccf9db4b9f11f10012d1b839514142\": container with ID starting with 70996c4ed66f5d822afbb8fed848e56cafccf9db4b9f11f10012d1b839514142 not found: ID does not exist" containerID="70996c4ed66f5d822afbb8fed848e56cafccf9db4b9f11f10012d1b839514142" Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.326851 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70996c4ed66f5d822afbb8fed848e56cafccf9db4b9f11f10012d1b839514142"} err="failed to get container status \"70996c4ed66f5d822afbb8fed848e56cafccf9db4b9f11f10012d1b839514142\": rpc error: code = NotFound desc = could not find container \"70996c4ed66f5d822afbb8fed848e56cafccf9db4b9f11f10012d1b839514142\": container with ID starting with 70996c4ed66f5d822afbb8fed848e56cafccf9db4b9f11f10012d1b839514142 not found: ID does not exist" Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.338372 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e7614df-e73b-47f5-b7f0-d942ea24c4f0-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"3e7614df-e73b-47f5-b7f0-d942ea24c4f0\") " pod="openstack/glance-default-external-api-0" Jan 12 13:23:03 crc 
kubenswrapper[4580]: I0112 13:23:03.338427 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e7614df-e73b-47f5-b7f0-d942ea24c4f0-logs\") pod \"glance-default-external-api-0\" (UID: \"3e7614df-e73b-47f5-b7f0-d942ea24c4f0\") " pod="openstack/glance-default-external-api-0" Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.338461 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3e7614df-e73b-47f5-b7f0-d942ea24c4f0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3e7614df-e73b-47f5-b7f0-d942ea24c4f0\") " pod="openstack/glance-default-external-api-0" Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.338497 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e7614df-e73b-47f5-b7f0-d942ea24c4f0-scripts\") pod \"glance-default-external-api-0\" (UID: \"3e7614df-e73b-47f5-b7f0-d942ea24c4f0\") " pod="openstack/glance-default-external-api-0" Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.338544 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e7614df-e73b-47f5-b7f0-d942ea24c4f0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3e7614df-e73b-47f5-b7f0-d942ea24c4f0\") " pod="openstack/glance-default-external-api-0" Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.338579 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e7614df-e73b-47f5-b7f0-d942ea24c4f0-config-data\") pod \"glance-default-external-api-0\" (UID: \"3e7614df-e73b-47f5-b7f0-d942ea24c4f0\") " pod="openstack/glance-default-external-api-0" Jan 12 13:23:03 crc 
kubenswrapper[4580]: I0112 13:23:03.338610 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jq9sp\" (UniqueName: \"kubernetes.io/projected/3e7614df-e73b-47f5-b7f0-d942ea24c4f0-kube-api-access-jq9sp\") pod \"glance-default-external-api-0\" (UID: \"3e7614df-e73b-47f5-b7f0-d942ea24c4f0\") " pod="openstack/glance-default-external-api-0" Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.338652 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"3e7614df-e73b-47f5-b7f0-d942ea24c4f0\") " pod="openstack/glance-default-external-api-0" Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.440723 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e7614df-e73b-47f5-b7f0-d942ea24c4f0-config-data\") pod \"glance-default-external-api-0\" (UID: \"3e7614df-e73b-47f5-b7f0-d942ea24c4f0\") " pod="openstack/glance-default-external-api-0" Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.440797 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jq9sp\" (UniqueName: \"kubernetes.io/projected/3e7614df-e73b-47f5-b7f0-d942ea24c4f0-kube-api-access-jq9sp\") pod \"glance-default-external-api-0\" (UID: \"3e7614df-e73b-47f5-b7f0-d942ea24c4f0\") " pod="openstack/glance-default-external-api-0" Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.440866 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"3e7614df-e73b-47f5-b7f0-d942ea24c4f0\") " pod="openstack/glance-default-external-api-0" Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 
13:23:03.440915 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e7614df-e73b-47f5-b7f0-d942ea24c4f0-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"3e7614df-e73b-47f5-b7f0-d942ea24c4f0\") " pod="openstack/glance-default-external-api-0" Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.440991 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e7614df-e73b-47f5-b7f0-d942ea24c4f0-logs\") pod \"glance-default-external-api-0\" (UID: \"3e7614df-e73b-47f5-b7f0-d942ea24c4f0\") " pod="openstack/glance-default-external-api-0" Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.441030 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3e7614df-e73b-47f5-b7f0-d942ea24c4f0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3e7614df-e73b-47f5-b7f0-d942ea24c4f0\") " pod="openstack/glance-default-external-api-0" Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.441073 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e7614df-e73b-47f5-b7f0-d942ea24c4f0-scripts\") pod \"glance-default-external-api-0\" (UID: \"3e7614df-e73b-47f5-b7f0-d942ea24c4f0\") " pod="openstack/glance-default-external-api-0" Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.441164 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e7614df-e73b-47f5-b7f0-d942ea24c4f0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3e7614df-e73b-47f5-b7f0-d942ea24c4f0\") " pod="openstack/glance-default-external-api-0" Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.441284 4580 operation_generator.go:580] "MountVolume.MountDevice succeeded 
for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"3e7614df-e73b-47f5-b7f0-d942ea24c4f0\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.443145 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e7614df-e73b-47f5-b7f0-d942ea24c4f0-logs\") pod \"glance-default-external-api-0\" (UID: \"3e7614df-e73b-47f5-b7f0-d942ea24c4f0\") " pod="openstack/glance-default-external-api-0" Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.447966 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e7614df-e73b-47f5-b7f0-d942ea24c4f0-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"3e7614df-e73b-47f5-b7f0-d942ea24c4f0\") " pod="openstack/glance-default-external-api-0" Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.448029 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e7614df-e73b-47f5-b7f0-d942ea24c4f0-config-data\") pod \"glance-default-external-api-0\" (UID: \"3e7614df-e73b-47f5-b7f0-d942ea24c4f0\") " pod="openstack/glance-default-external-api-0" Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.448054 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3e7614df-e73b-47f5-b7f0-d942ea24c4f0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3e7614df-e73b-47f5-b7f0-d942ea24c4f0\") " pod="openstack/glance-default-external-api-0" Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.448693 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/3e7614df-e73b-47f5-b7f0-d942ea24c4f0-scripts\") pod \"glance-default-external-api-0\" (UID: \"3e7614df-e73b-47f5-b7f0-d942ea24c4f0\") " pod="openstack/glance-default-external-api-0" Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.450782 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e7614df-e73b-47f5-b7f0-d942ea24c4f0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3e7614df-e73b-47f5-b7f0-d942ea24c4f0\") " pod="openstack/glance-default-external-api-0" Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.464266 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jq9sp\" (UniqueName: \"kubernetes.io/projected/3e7614df-e73b-47f5-b7f0-d942ea24c4f0-kube-api-access-jq9sp\") pod \"glance-default-external-api-0\" (UID: \"3e7614df-e73b-47f5-b7f0-d942ea24c4f0\") " pod="openstack/glance-default-external-api-0" Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.483679 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"3e7614df-e73b-47f5-b7f0-d942ea24c4f0\") " pod="openstack/glance-default-external-api-0" Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.574955 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.658897 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-gdmnx" Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.716286 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-c19d-account-create-update-bz5j6" Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.752039 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qblmh\" (UniqueName: \"kubernetes.io/projected/caab40d7-1666-4a92-ba59-a9241ce91657-kube-api-access-qblmh\") pod \"caab40d7-1666-4a92-ba59-a9241ce91657\" (UID: \"caab40d7-1666-4a92-ba59-a9241ce91657\") " Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.752084 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/caab40d7-1666-4a92-ba59-a9241ce91657-operator-scripts\") pod \"caab40d7-1666-4a92-ba59-a9241ce91657\" (UID: \"caab40d7-1666-4a92-ba59-a9241ce91657\") " Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.752652 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/caab40d7-1666-4a92-ba59-a9241ce91657-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "caab40d7-1666-4a92-ba59-a9241ce91657" (UID: "caab40d7-1666-4a92-ba59-a9241ce91657"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.756406 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/caab40d7-1666-4a92-ba59-a9241ce91657-kube-api-access-qblmh" (OuterVolumeSpecName: "kube-api-access-qblmh") pod "caab40d7-1666-4a92-ba59-a9241ce91657" (UID: "caab40d7-1666-4a92-ba59-a9241ce91657"). InnerVolumeSpecName "kube-api-access-qblmh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.788933 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-6wrc4" Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.820508 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-kzsjf" Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.824667 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-2b89-account-create-update-844zz" Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.844264 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-1d39-account-create-update-dd8gr" Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.853678 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-578f8\" (UniqueName: \"kubernetes.io/projected/ddd386d1-8b6e-4834-8ea5-74fe561d15f6-kube-api-access-578f8\") pod \"ddd386d1-8b6e-4834-8ea5-74fe561d15f6\" (UID: \"ddd386d1-8b6e-4834-8ea5-74fe561d15f6\") " Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.853879 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ddd386d1-8b6e-4834-8ea5-74fe561d15f6-operator-scripts\") pod \"ddd386d1-8b6e-4834-8ea5-74fe561d15f6\" (UID: \"ddd386d1-8b6e-4834-8ea5-74fe561d15f6\") " Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.854416 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qblmh\" (UniqueName: \"kubernetes.io/projected/caab40d7-1666-4a92-ba59-a9241ce91657-kube-api-access-qblmh\") on node \"crc\" DevicePath \"\"" Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.854434 4580 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/caab40d7-1666-4a92-ba59-a9241ce91657-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 
13:23:03.854514 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ddd386d1-8b6e-4834-8ea5-74fe561d15f6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ddd386d1-8b6e-4834-8ea5-74fe561d15f6" (UID: "ddd386d1-8b6e-4834-8ea5-74fe561d15f6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.862217 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddd386d1-8b6e-4834-8ea5-74fe561d15f6-kube-api-access-578f8" (OuterVolumeSpecName: "kube-api-access-578f8") pod "ddd386d1-8b6e-4834-8ea5-74fe561d15f6" (UID: "ddd386d1-8b6e-4834-8ea5-74fe561d15f6"). InnerVolumeSpecName "kube-api-access-578f8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.956620 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5680228-c2e3-437e-b28e-bfd73513f81d-operator-scripts\") pod \"a5680228-c2e3-437e-b28e-bfd73513f81d\" (UID: \"a5680228-c2e3-437e-b28e-bfd73513f81d\") " Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.956943 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2d26c30-1a0e-4ab2-af0a-84ca0f4280f0-operator-scripts\") pod \"b2d26c30-1a0e-4ab2-af0a-84ca0f4280f0\" (UID: \"b2d26c30-1a0e-4ab2-af0a-84ca0f4280f0\") " Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.956988 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pc8vx\" (UniqueName: \"kubernetes.io/projected/a5680228-c2e3-437e-b28e-bfd73513f81d-kube-api-access-pc8vx\") pod \"a5680228-c2e3-437e-b28e-bfd73513f81d\" (UID: \"a5680228-c2e3-437e-b28e-bfd73513f81d\") " Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 
13:23:03.957052 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58sxt\" (UniqueName: \"kubernetes.io/projected/b2d26c30-1a0e-4ab2-af0a-84ca0f4280f0-kube-api-access-58sxt\") pod \"b2d26c30-1a0e-4ab2-af0a-84ca0f4280f0\" (UID: \"b2d26c30-1a0e-4ab2-af0a-84ca0f4280f0\") " Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.957140 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mp9pb\" (UniqueName: \"kubernetes.io/projected/feda8d3e-afea-4925-b154-4f13512cae76-kube-api-access-mp9pb\") pod \"feda8d3e-afea-4925-b154-4f13512cae76\" (UID: \"feda8d3e-afea-4925-b154-4f13512cae76\") " Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.957164 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5680228-c2e3-437e-b28e-bfd73513f81d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a5680228-c2e3-437e-b28e-bfd73513f81d" (UID: "a5680228-c2e3-437e-b28e-bfd73513f81d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.957294 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c9631d5-0489-43f6-a144-5869fc41f5ba-operator-scripts\") pod \"8c9631d5-0489-43f6-a144-5869fc41f5ba\" (UID: \"8c9631d5-0489-43f6-a144-5869fc41f5ba\") " Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.957483 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/feda8d3e-afea-4925-b154-4f13512cae76-operator-scripts\") pod \"feda8d3e-afea-4925-b154-4f13512cae76\" (UID: \"feda8d3e-afea-4925-b154-4f13512cae76\") " Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.957573 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhldx\" (UniqueName: \"kubernetes.io/projected/8c9631d5-0489-43f6-a144-5869fc41f5ba-kube-api-access-rhldx\") pod \"8c9631d5-0489-43f6-a144-5869fc41f5ba\" (UID: \"8c9631d5-0489-43f6-a144-5869fc41f5ba\") " Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.958379 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-578f8\" (UniqueName: \"kubernetes.io/projected/ddd386d1-8b6e-4834-8ea5-74fe561d15f6-kube-api-access-578f8\") on node \"crc\" DevicePath \"\"" Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.958403 4580 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5680228-c2e3-437e-b28e-bfd73513f81d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.958415 4580 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ddd386d1-8b6e-4834-8ea5-74fe561d15f6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 12 13:23:03 crc 
kubenswrapper[4580]: I0112 13:23:03.959306 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c9631d5-0489-43f6-a144-5869fc41f5ba-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8c9631d5-0489-43f6-a144-5869fc41f5ba" (UID: "8c9631d5-0489-43f6-a144-5869fc41f5ba"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.959325 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2d26c30-1a0e-4ab2-af0a-84ca0f4280f0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b2d26c30-1a0e-4ab2-af0a-84ca0f4280f0" (UID: "b2d26c30-1a0e-4ab2-af0a-84ca0f4280f0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.959351 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/feda8d3e-afea-4925-b154-4f13512cae76-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "feda8d3e-afea-4925-b154-4f13512cae76" (UID: "feda8d3e-afea-4925-b154-4f13512cae76"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.962760 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c9631d5-0489-43f6-a144-5869fc41f5ba-kube-api-access-rhldx" (OuterVolumeSpecName: "kube-api-access-rhldx") pod "8c9631d5-0489-43f6-a144-5869fc41f5ba" (UID: "8c9631d5-0489-43f6-a144-5869fc41f5ba"). InnerVolumeSpecName "kube-api-access-rhldx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.962821 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2d26c30-1a0e-4ab2-af0a-84ca0f4280f0-kube-api-access-58sxt" (OuterVolumeSpecName: "kube-api-access-58sxt") pod "b2d26c30-1a0e-4ab2-af0a-84ca0f4280f0" (UID: "b2d26c30-1a0e-4ab2-af0a-84ca0f4280f0"). InnerVolumeSpecName "kube-api-access-58sxt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.963258 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/feda8d3e-afea-4925-b154-4f13512cae76-kube-api-access-mp9pb" (OuterVolumeSpecName: "kube-api-access-mp9pb") pod "feda8d3e-afea-4925-b154-4f13512cae76" (UID: "feda8d3e-afea-4925-b154-4f13512cae76"). InnerVolumeSpecName "kube-api-access-mp9pb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:23:03 crc kubenswrapper[4580]: I0112 13:23:03.963342 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5680228-c2e3-437e-b28e-bfd73513f81d-kube-api-access-pc8vx" (OuterVolumeSpecName: "kube-api-access-pc8vx") pod "a5680228-c2e3-437e-b28e-bfd73513f81d" (UID: "a5680228-c2e3-437e-b28e-bfd73513f81d"). InnerVolumeSpecName "kube-api-access-pc8vx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.051752 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.059649 4580 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2d26c30-1a0e-4ab2-af0a-84ca0f4280f0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.059674 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pc8vx\" (UniqueName: \"kubernetes.io/projected/a5680228-c2e3-437e-b28e-bfd73513f81d-kube-api-access-pc8vx\") on node \"crc\" DevicePath \"\"" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.059685 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58sxt\" (UniqueName: \"kubernetes.io/projected/b2d26c30-1a0e-4ab2-af0a-84ca0f4280f0-kube-api-access-58sxt\") on node \"crc\" DevicePath \"\"" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.059694 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mp9pb\" (UniqueName: \"kubernetes.io/projected/feda8d3e-afea-4925-b154-4f13512cae76-kube-api-access-mp9pb\") on node \"crc\" DevicePath \"\"" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.059702 4580 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c9631d5-0489-43f6-a144-5869fc41f5ba-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.059711 4580 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/feda8d3e-afea-4925-b154-4f13512cae76-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.059719 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rhldx\" (UniqueName: \"kubernetes.io/projected/8c9631d5-0489-43f6-a144-5869fc41f5ba-kube-api-access-rhldx\") on node \"crc\" 
DevicePath \"\"" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.161482 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l87kv\" (UniqueName: \"kubernetes.io/projected/52d5c384-ad20-413e-a8ec-183b114d9901-kube-api-access-l87kv\") pod \"52d5c384-ad20-413e-a8ec-183b114d9901\" (UID: \"52d5c384-ad20-413e-a8ec-183b114d9901\") " Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.161531 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/52d5c384-ad20-413e-a8ec-183b114d9901-httpd-run\") pod \"52d5c384-ad20-413e-a8ec-183b114d9901\" (UID: \"52d5c384-ad20-413e-a8ec-183b114d9901\") " Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.161584 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/52d5c384-ad20-413e-a8ec-183b114d9901-internal-tls-certs\") pod \"52d5c384-ad20-413e-a8ec-183b114d9901\" (UID: \"52d5c384-ad20-413e-a8ec-183b114d9901\") " Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.161615 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52d5c384-ad20-413e-a8ec-183b114d9901-config-data\") pod \"52d5c384-ad20-413e-a8ec-183b114d9901\" (UID: \"52d5c384-ad20-413e-a8ec-183b114d9901\") " Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.161629 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52d5c384-ad20-413e-a8ec-183b114d9901-scripts\") pod \"52d5c384-ad20-413e-a8ec-183b114d9901\" (UID: \"52d5c384-ad20-413e-a8ec-183b114d9901\") " Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.161657 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/52d5c384-ad20-413e-a8ec-183b114d9901-combined-ca-bundle\") pod \"52d5c384-ad20-413e-a8ec-183b114d9901\" (UID: \"52d5c384-ad20-413e-a8ec-183b114d9901\") " Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.161702 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"52d5c384-ad20-413e-a8ec-183b114d9901\" (UID: \"52d5c384-ad20-413e-a8ec-183b114d9901\") " Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.161764 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52d5c384-ad20-413e-a8ec-183b114d9901-logs\") pod \"52d5c384-ad20-413e-a8ec-183b114d9901\" (UID: \"52d5c384-ad20-413e-a8ec-183b114d9901\") " Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.162293 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52d5c384-ad20-413e-a8ec-183b114d9901-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "52d5c384-ad20-413e-a8ec-183b114d9901" (UID: "52d5c384-ad20-413e-a8ec-183b114d9901"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.162604 4580 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/52d5c384-ad20-413e-a8ec-183b114d9901-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.162648 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52d5c384-ad20-413e-a8ec-183b114d9901-logs" (OuterVolumeSpecName: "logs") pod "52d5c384-ad20-413e-a8ec-183b114d9901" (UID: "52d5c384-ad20-413e-a8ec-183b114d9901"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.164667 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52d5c384-ad20-413e-a8ec-183b114d9901-kube-api-access-l87kv" (OuterVolumeSpecName: "kube-api-access-l87kv") pod "52d5c384-ad20-413e-a8ec-183b114d9901" (UID: "52d5c384-ad20-413e-a8ec-183b114d9901"). InnerVolumeSpecName "kube-api-access-l87kv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.164877 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "52d5c384-ad20-413e-a8ec-183b114d9901" (UID: "52d5c384-ad20-413e-a8ec-183b114d9901"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.182832 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52d5c384-ad20-413e-a8ec-183b114d9901-scripts" (OuterVolumeSpecName: "scripts") pod "52d5c384-ad20-413e-a8ec-183b114d9901" (UID: "52d5c384-ad20-413e-a8ec-183b114d9901"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.192365 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52d5c384-ad20-413e-a8ec-183b114d9901-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "52d5c384-ad20-413e-a8ec-183b114d9901" (UID: "52d5c384-ad20-413e-a8ec-183b114d9901"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.195700 4580 generic.go:334] "Generic (PLEG): container finished" podID="52d5c384-ad20-413e-a8ec-183b114d9901" containerID="370cbcfa3cf6f08989ecad0ac709dd897c929c19ccc1caa2ca1c4adc558a05dd" exitCode=0 Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.195767 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"52d5c384-ad20-413e-a8ec-183b114d9901","Type":"ContainerDied","Data":"370cbcfa3cf6f08989ecad0ac709dd897c929c19ccc1caa2ca1c4adc558a05dd"} Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.195778 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.195796 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"52d5c384-ad20-413e-a8ec-183b114d9901","Type":"ContainerDied","Data":"39f8a7c780c9bde2261291778e117bd084b1ddd77d508397ee3991d9463883de"} Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.195816 4580 scope.go:117] "RemoveContainer" containerID="370cbcfa3cf6f08989ecad0ac709dd897c929c19ccc1caa2ca1c4adc558a05dd" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.199746 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-gdmnx" event={"ID":"caab40d7-1666-4a92-ba59-a9241ce91657","Type":"ContainerDied","Data":"70d76c088c59b6774e978527134d35087c37a9a73d7a29063a9475899927c41c"} Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.199782 4580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70d76c088c59b6774e978527134d35087c37a9a73d7a29063a9475899927c41c" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.199834 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-gdmnx" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.202650 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-kzsjf" event={"ID":"b2d26c30-1a0e-4ab2-af0a-84ca0f4280f0","Type":"ContainerDied","Data":"a7a6e1487f1b8a4757d3b50c64b2d120589975a4569706083a68fcfb110792ac"} Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.202706 4580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7a6e1487f1b8a4757d3b50c64b2d120589975a4569706083a68fcfb110792ac" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.203044 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-kzsjf" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.204341 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52d5c384-ad20-413e-a8ec-183b114d9901-config-data" (OuterVolumeSpecName: "config-data") pod "52d5c384-ad20-413e-a8ec-183b114d9901" (UID: "52d5c384-ad20-413e-a8ec-183b114d9901"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.208890 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1d39-account-create-update-dd8gr" event={"ID":"a5680228-c2e3-437e-b28e-bfd73513f81d","Type":"ContainerDied","Data":"07413b40b4ac99f413fc59876e19f318404efe56d72eb04cfa92fc16a309c435"} Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.208934 4580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07413b40b4ac99f413fc59876e19f318404efe56d72eb04cfa92fc16a309c435" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.209038 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-1d39-account-create-update-dd8gr" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.214548 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-c19d-account-create-update-bz5j6" event={"ID":"ddd386d1-8b6e-4834-8ea5-74fe561d15f6","Type":"ContainerDied","Data":"3c4689b7e2986d24e51aa49ca3946d70c2dc8eb1c7cf21f2c22512e5a427b79e"} Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.214573 4580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c4689b7e2986d24e51aa49ca3946d70c2dc8eb1c7cf21f2c22512e5a427b79e" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.214610 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-c19d-account-create-update-bz5j6" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.216460 4580 scope.go:117] "RemoveContainer" containerID="cff7f773004b433c088ab6ec51a18d40ad182002bce30bba829a98778a38dbc0" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.218097 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-2b89-account-create-update-844zz" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.218075 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-2b89-account-create-update-844zz" event={"ID":"8c9631d5-0489-43f6-a144-5869fc41f5ba","Type":"ContainerDied","Data":"b54fa94e471e4af7e668512ec71947a642b4d27aeaa423a4d1ec9c004c3109a6"} Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.218515 4580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b54fa94e471e4af7e668512ec71947a642b4d27aeaa423a4d1ec9c004c3109a6" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.221817 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-6wrc4" event={"ID":"feda8d3e-afea-4925-b154-4f13512cae76","Type":"ContainerDied","Data":"40fe63e8f11caa72804aa1e01ef044870c4f667937d11f5f5466013f348537d5"} Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.221850 4580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40fe63e8f11caa72804aa1e01ef044870c4f667937d11f5f5466013f348537d5" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.221855 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-6wrc4" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.228290 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52d5c384-ad20-413e-a8ec-183b114d9901-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "52d5c384-ad20-413e-a8ec-183b114d9901" (UID: "52d5c384-ad20-413e-a8ec-183b114d9901"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.238156 4580 scope.go:117] "RemoveContainer" containerID="370cbcfa3cf6f08989ecad0ac709dd897c929c19ccc1caa2ca1c4adc558a05dd" Jan 12 13:23:04 crc kubenswrapper[4580]: E0112 13:23:04.239445 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"370cbcfa3cf6f08989ecad0ac709dd897c929c19ccc1caa2ca1c4adc558a05dd\": container with ID starting with 370cbcfa3cf6f08989ecad0ac709dd897c929c19ccc1caa2ca1c4adc558a05dd not found: ID does not exist" containerID="370cbcfa3cf6f08989ecad0ac709dd897c929c19ccc1caa2ca1c4adc558a05dd" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.239481 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"370cbcfa3cf6f08989ecad0ac709dd897c929c19ccc1caa2ca1c4adc558a05dd"} err="failed to get container status \"370cbcfa3cf6f08989ecad0ac709dd897c929c19ccc1caa2ca1c4adc558a05dd\": rpc error: code = NotFound desc = could not find container \"370cbcfa3cf6f08989ecad0ac709dd897c929c19ccc1caa2ca1c4adc558a05dd\": container with ID starting with 370cbcfa3cf6f08989ecad0ac709dd897c929c19ccc1caa2ca1c4adc558a05dd not found: ID does not exist" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.239507 4580 scope.go:117] "RemoveContainer" containerID="cff7f773004b433c088ab6ec51a18d40ad182002bce30bba829a98778a38dbc0" Jan 12 13:23:04 crc kubenswrapper[4580]: E0112 13:23:04.239727 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cff7f773004b433c088ab6ec51a18d40ad182002bce30bba829a98778a38dbc0\": container with ID starting with cff7f773004b433c088ab6ec51a18d40ad182002bce30bba829a98778a38dbc0 not found: ID does not exist" containerID="cff7f773004b433c088ab6ec51a18d40ad182002bce30bba829a98778a38dbc0" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.239762 
4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cff7f773004b433c088ab6ec51a18d40ad182002bce30bba829a98778a38dbc0"} err="failed to get container status \"cff7f773004b433c088ab6ec51a18d40ad182002bce30bba829a98778a38dbc0\": rpc error: code = NotFound desc = could not find container \"cff7f773004b433c088ab6ec51a18d40ad182002bce30bba829a98778a38dbc0\": container with ID starting with cff7f773004b433c088ab6ec51a18d40ad182002bce30bba829a98778a38dbc0 not found: ID does not exist" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.264549 4580 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52d5c384-ad20-413e-a8ec-183b114d9901-logs\") on node \"crc\" DevicePath \"\"" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.264576 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l87kv\" (UniqueName: \"kubernetes.io/projected/52d5c384-ad20-413e-a8ec-183b114d9901-kube-api-access-l87kv\") on node \"crc\" DevicePath \"\"" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.264586 4580 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/52d5c384-ad20-413e-a8ec-183b114d9901-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.264596 4580 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52d5c384-ad20-413e-a8ec-183b114d9901-config-data\") on node \"crc\" DevicePath \"\"" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.264603 4580 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52d5c384-ad20-413e-a8ec-183b114d9901-scripts\") on node \"crc\" DevicePath \"\"" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.264611 4580 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/52d5c384-ad20-413e-a8ec-183b114d9901-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.264640 4580 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.267358 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.278739 4580 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.367061 4580 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.528933 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.538325 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.548698 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 12 13:23:04 crc kubenswrapper[4580]: E0112 13:23:04.549194 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="feda8d3e-afea-4925-b154-4f13512cae76" containerName="mariadb-database-create" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.554064 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="feda8d3e-afea-4925-b154-4f13512cae76" containerName="mariadb-database-create" Jan 12 13:23:04 crc kubenswrapper[4580]: E0112 13:23:04.554094 4580 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddd386d1-8b6e-4834-8ea5-74fe561d15f6" containerName="mariadb-account-create-update" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.554115 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddd386d1-8b6e-4834-8ea5-74fe561d15f6" containerName="mariadb-account-create-update" Jan 12 13:23:04 crc kubenswrapper[4580]: E0112 13:23:04.554133 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2d26c30-1a0e-4ab2-af0a-84ca0f4280f0" containerName="mariadb-database-create" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.554140 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2d26c30-1a0e-4ab2-af0a-84ca0f4280f0" containerName="mariadb-database-create" Jan 12 13:23:04 crc kubenswrapper[4580]: E0112 13:23:04.554154 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c9631d5-0489-43f6-a144-5869fc41f5ba" containerName="mariadb-account-create-update" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.554160 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c9631d5-0489-43f6-a144-5869fc41f5ba" containerName="mariadb-account-create-update" Jan 12 13:23:04 crc kubenswrapper[4580]: E0112 13:23:04.554168 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52d5c384-ad20-413e-a8ec-183b114d9901" containerName="glance-log" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.554174 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="52d5c384-ad20-413e-a8ec-183b114d9901" containerName="glance-log" Jan 12 13:23:04 crc kubenswrapper[4580]: E0112 13:23:04.554184 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="caab40d7-1666-4a92-ba59-a9241ce91657" containerName="mariadb-database-create" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.554189 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="caab40d7-1666-4a92-ba59-a9241ce91657" containerName="mariadb-database-create" Jan 
12 13:23:04 crc kubenswrapper[4580]: E0112 13:23:04.554197 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5680228-c2e3-437e-b28e-bfd73513f81d" containerName="mariadb-account-create-update" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.554203 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5680228-c2e3-437e-b28e-bfd73513f81d" containerName="mariadb-account-create-update" Jan 12 13:23:04 crc kubenswrapper[4580]: E0112 13:23:04.554215 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52d5c384-ad20-413e-a8ec-183b114d9901" containerName="glance-httpd" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.554221 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="52d5c384-ad20-413e-a8ec-183b114d9901" containerName="glance-httpd" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.554503 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5680228-c2e3-437e-b28e-bfd73513f81d" containerName="mariadb-account-create-update" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.554519 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddd386d1-8b6e-4834-8ea5-74fe561d15f6" containerName="mariadb-account-create-update" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.554541 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="feda8d3e-afea-4925-b154-4f13512cae76" containerName="mariadb-database-create" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.554551 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="52d5c384-ad20-413e-a8ec-183b114d9901" containerName="glance-httpd" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.554561 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c9631d5-0489-43f6-a144-5869fc41f5ba" containerName="mariadb-account-create-update" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.554571 4580 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="caab40d7-1666-4a92-ba59-a9241ce91657" containerName="mariadb-database-create" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.554584 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2d26c30-1a0e-4ab2-af0a-84ca0f4280f0" containerName="mariadb-database-create" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.554594 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="52d5c384-ad20-413e-a8ec-183b114d9901" containerName="glance-log" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.555492 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.556947 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.557908 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.574448 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.672812 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/026b9966-ae00-4f6a-be8d-bb1d9fffbef3-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"026b9966-ae00-4f6a-be8d-bb1d9fffbef3\") " pod="openstack/glance-default-internal-api-0" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.672897 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/026b9966-ae00-4f6a-be8d-bb1d9fffbef3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"026b9966-ae00-4f6a-be8d-bb1d9fffbef3\") " 
pod="openstack/glance-default-internal-api-0" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.673054 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/026b9966-ae00-4f6a-be8d-bb1d9fffbef3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"026b9966-ae00-4f6a-be8d-bb1d9fffbef3\") " pod="openstack/glance-default-internal-api-0" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.673072 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/026b9966-ae00-4f6a-be8d-bb1d9fffbef3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"026b9966-ae00-4f6a-be8d-bb1d9fffbef3\") " pod="openstack/glance-default-internal-api-0" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.673133 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9wbw\" (UniqueName: \"kubernetes.io/projected/026b9966-ae00-4f6a-be8d-bb1d9fffbef3-kube-api-access-n9wbw\") pod \"glance-default-internal-api-0\" (UID: \"026b9966-ae00-4f6a-be8d-bb1d9fffbef3\") " pod="openstack/glance-default-internal-api-0" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.673186 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/026b9966-ae00-4f6a-be8d-bb1d9fffbef3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"026b9966-ae00-4f6a-be8d-bb1d9fffbef3\") " pod="openstack/glance-default-internal-api-0" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.673207 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: 
\"026b9966-ae00-4f6a-be8d-bb1d9fffbef3\") " pod="openstack/glance-default-internal-api-0" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.673227 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/026b9966-ae00-4f6a-be8d-bb1d9fffbef3-logs\") pod \"glance-default-internal-api-0\" (UID: \"026b9966-ae00-4f6a-be8d-bb1d9fffbef3\") " pod="openstack/glance-default-internal-api-0" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.775648 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/026b9966-ae00-4f6a-be8d-bb1d9fffbef3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"026b9966-ae00-4f6a-be8d-bb1d9fffbef3\") " pod="openstack/glance-default-internal-api-0" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.775687 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"026b9966-ae00-4f6a-be8d-bb1d9fffbef3\") " pod="openstack/glance-default-internal-api-0" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.775707 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/026b9966-ae00-4f6a-be8d-bb1d9fffbef3-logs\") pod \"glance-default-internal-api-0\" (UID: \"026b9966-ae00-4f6a-be8d-bb1d9fffbef3\") " pod="openstack/glance-default-internal-api-0" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.775770 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/026b9966-ae00-4f6a-be8d-bb1d9fffbef3-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"026b9966-ae00-4f6a-be8d-bb1d9fffbef3\") " 
pod="openstack/glance-default-internal-api-0" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.775787 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/026b9966-ae00-4f6a-be8d-bb1d9fffbef3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"026b9966-ae00-4f6a-be8d-bb1d9fffbef3\") " pod="openstack/glance-default-internal-api-0" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.775822 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/026b9966-ae00-4f6a-be8d-bb1d9fffbef3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"026b9966-ae00-4f6a-be8d-bb1d9fffbef3\") " pod="openstack/glance-default-internal-api-0" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.775837 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/026b9966-ae00-4f6a-be8d-bb1d9fffbef3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"026b9966-ae00-4f6a-be8d-bb1d9fffbef3\") " pod="openstack/glance-default-internal-api-0" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.775871 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9wbw\" (UniqueName: \"kubernetes.io/projected/026b9966-ae00-4f6a-be8d-bb1d9fffbef3-kube-api-access-n9wbw\") pod \"glance-default-internal-api-0\" (UID: \"026b9966-ae00-4f6a-be8d-bb1d9fffbef3\") " pod="openstack/glance-default-internal-api-0" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.776365 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/026b9966-ae00-4f6a-be8d-bb1d9fffbef3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"026b9966-ae00-4f6a-be8d-bb1d9fffbef3\") " pod="openstack/glance-default-internal-api-0" Jan 12 13:23:04 crc 
kubenswrapper[4580]: I0112 13:23:04.776452 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/026b9966-ae00-4f6a-be8d-bb1d9fffbef3-logs\") pod \"glance-default-internal-api-0\" (UID: \"026b9966-ae00-4f6a-be8d-bb1d9fffbef3\") " pod="openstack/glance-default-internal-api-0" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.776615 4580 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"026b9966-ae00-4f6a-be8d-bb1d9fffbef3\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.779095 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/026b9966-ae00-4f6a-be8d-bb1d9fffbef3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"026b9966-ae00-4f6a-be8d-bb1d9fffbef3\") " pod="openstack/glance-default-internal-api-0" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.781365 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/026b9966-ae00-4f6a-be8d-bb1d9fffbef3-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"026b9966-ae00-4f6a-be8d-bb1d9fffbef3\") " pod="openstack/glance-default-internal-api-0" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.782830 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/026b9966-ae00-4f6a-be8d-bb1d9fffbef3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"026b9966-ae00-4f6a-be8d-bb1d9fffbef3\") " pod="openstack/glance-default-internal-api-0" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.783270 4580 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/026b9966-ae00-4f6a-be8d-bb1d9fffbef3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"026b9966-ae00-4f6a-be8d-bb1d9fffbef3\") " pod="openstack/glance-default-internal-api-0" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.792883 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9wbw\" (UniqueName: \"kubernetes.io/projected/026b9966-ae00-4f6a-be8d-bb1d9fffbef3-kube-api-access-n9wbw\") pod \"glance-default-internal-api-0\" (UID: \"026b9966-ae00-4f6a-be8d-bb1d9fffbef3\") " pod="openstack/glance-default-internal-api-0" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.796586 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"026b9966-ae00-4f6a-be8d-bb1d9fffbef3\") " pod="openstack/glance-default-internal-api-0" Jan 12 13:23:04 crc kubenswrapper[4580]: I0112 13:23:04.879258 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 12 13:23:05 crc kubenswrapper[4580]: I0112 13:23:05.256534 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3e7614df-e73b-47f5-b7f0-d942ea24c4f0","Type":"ContainerStarted","Data":"3ed28b1f684b2445f357abcd6106b7babfea4cf37fe0a930025b44711b6db2f3"} Jan 12 13:23:05 crc kubenswrapper[4580]: I0112 13:23:05.256837 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3e7614df-e73b-47f5-b7f0-d942ea24c4f0","Type":"ContainerStarted","Data":"acffe4acd4d6500ae459d761d600c798ac4332d1d8d7a6af65d497a7c59eb371"} Jan 12 13:23:05 crc kubenswrapper[4580]: I0112 13:23:05.292830 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52d5c384-ad20-413e-a8ec-183b114d9901" path="/var/lib/kubelet/pods/52d5c384-ad20-413e-a8ec-183b114d9901/volumes" Jan 12 13:23:05 crc kubenswrapper[4580]: I0112 13:23:05.400215 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 12 13:23:05 crc kubenswrapper[4580]: W0112 13:23:05.402434 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod026b9966_ae00_4f6a_be8d_bb1d9fffbef3.slice/crio-5dcb0c749284f93f9ad1febbcc95e8e51779347fda8efea231f9cd715b2e665a WatchSource:0}: Error finding container 5dcb0c749284f93f9ad1febbcc95e8e51779347fda8efea231f9cd715b2e665a: Status 404 returned error can't find the container with id 5dcb0c749284f93f9ad1febbcc95e8e51779347fda8efea231f9cd715b2e665a Jan 12 13:23:05 crc kubenswrapper[4580]: I0112 13:23:05.822087 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-4vq8g"] Jan 12 13:23:05 crc kubenswrapper[4580]: I0112 13:23:05.823388 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-4vq8g" Jan 12 13:23:05 crc kubenswrapper[4580]: I0112 13:23:05.829603 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-5bbdg" Jan 12 13:23:05 crc kubenswrapper[4580]: I0112 13:23:05.831565 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 12 13:23:05 crc kubenswrapper[4580]: I0112 13:23:05.831916 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 12 13:23:05 crc kubenswrapper[4580]: I0112 13:23:05.841928 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-4vq8g"] Jan 12 13:23:05 crc kubenswrapper[4580]: I0112 13:23:05.899282 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rt7d\" (UniqueName: \"kubernetes.io/projected/ed8ee3a7-b81f-4ff7-9647-2eb207531a43-kube-api-access-9rt7d\") pod \"nova-cell0-conductor-db-sync-4vq8g\" (UID: \"ed8ee3a7-b81f-4ff7-9647-2eb207531a43\") " pod="openstack/nova-cell0-conductor-db-sync-4vq8g" Jan 12 13:23:05 crc kubenswrapper[4580]: I0112 13:23:05.899348 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed8ee3a7-b81f-4ff7-9647-2eb207531a43-config-data\") pod \"nova-cell0-conductor-db-sync-4vq8g\" (UID: \"ed8ee3a7-b81f-4ff7-9647-2eb207531a43\") " pod="openstack/nova-cell0-conductor-db-sync-4vq8g" Jan 12 13:23:05 crc kubenswrapper[4580]: I0112 13:23:05.899398 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed8ee3a7-b81f-4ff7-9647-2eb207531a43-scripts\") pod \"nova-cell0-conductor-db-sync-4vq8g\" (UID: \"ed8ee3a7-b81f-4ff7-9647-2eb207531a43\") " 
pod="openstack/nova-cell0-conductor-db-sync-4vq8g"
Jan 12 13:23:05 crc kubenswrapper[4580]: I0112 13:23:05.899522 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed8ee3a7-b81f-4ff7-9647-2eb207531a43-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-4vq8g\" (UID: \"ed8ee3a7-b81f-4ff7-9647-2eb207531a43\") " pod="openstack/nova-cell0-conductor-db-sync-4vq8g"
Jan 12 13:23:06 crc kubenswrapper[4580]: I0112 13:23:06.002613 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rt7d\" (UniqueName: \"kubernetes.io/projected/ed8ee3a7-b81f-4ff7-9647-2eb207531a43-kube-api-access-9rt7d\") pod \"nova-cell0-conductor-db-sync-4vq8g\" (UID: \"ed8ee3a7-b81f-4ff7-9647-2eb207531a43\") " pod="openstack/nova-cell0-conductor-db-sync-4vq8g"
Jan 12 13:23:06 crc kubenswrapper[4580]: I0112 13:23:06.002738 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed8ee3a7-b81f-4ff7-9647-2eb207531a43-config-data\") pod \"nova-cell0-conductor-db-sync-4vq8g\" (UID: \"ed8ee3a7-b81f-4ff7-9647-2eb207531a43\") " pod="openstack/nova-cell0-conductor-db-sync-4vq8g"
Jan 12 13:23:06 crc kubenswrapper[4580]: I0112 13:23:06.002827 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed8ee3a7-b81f-4ff7-9647-2eb207531a43-scripts\") pod \"nova-cell0-conductor-db-sync-4vq8g\" (UID: \"ed8ee3a7-b81f-4ff7-9647-2eb207531a43\") " pod="openstack/nova-cell0-conductor-db-sync-4vq8g"
Jan 12 13:23:06 crc kubenswrapper[4580]: I0112 13:23:06.002867 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed8ee3a7-b81f-4ff7-9647-2eb207531a43-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-4vq8g\" (UID: \"ed8ee3a7-b81f-4ff7-9647-2eb207531a43\") " pod="openstack/nova-cell0-conductor-db-sync-4vq8g"
Jan 12 13:23:06 crc kubenswrapper[4580]: I0112 13:23:06.008206 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed8ee3a7-b81f-4ff7-9647-2eb207531a43-config-data\") pod \"nova-cell0-conductor-db-sync-4vq8g\" (UID: \"ed8ee3a7-b81f-4ff7-9647-2eb207531a43\") " pod="openstack/nova-cell0-conductor-db-sync-4vq8g"
Jan 12 13:23:06 crc kubenswrapper[4580]: I0112 13:23:06.008366 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed8ee3a7-b81f-4ff7-9647-2eb207531a43-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-4vq8g\" (UID: \"ed8ee3a7-b81f-4ff7-9647-2eb207531a43\") " pod="openstack/nova-cell0-conductor-db-sync-4vq8g"
Jan 12 13:23:06 crc kubenswrapper[4580]: I0112 13:23:06.008561 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed8ee3a7-b81f-4ff7-9647-2eb207531a43-scripts\") pod \"nova-cell0-conductor-db-sync-4vq8g\" (UID: \"ed8ee3a7-b81f-4ff7-9647-2eb207531a43\") " pod="openstack/nova-cell0-conductor-db-sync-4vq8g"
Jan 12 13:23:06 crc kubenswrapper[4580]: I0112 13:23:06.018571 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rt7d\" (UniqueName: \"kubernetes.io/projected/ed8ee3a7-b81f-4ff7-9647-2eb207531a43-kube-api-access-9rt7d\") pod \"nova-cell0-conductor-db-sync-4vq8g\" (UID: \"ed8ee3a7-b81f-4ff7-9647-2eb207531a43\") " pod="openstack/nova-cell0-conductor-db-sync-4vq8g"
Jan 12 13:23:06 crc kubenswrapper[4580]: I0112 13:23:06.153275 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-4vq8g"
Jan 12 13:23:06 crc kubenswrapper[4580]: I0112 13:23:06.282034 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3e7614df-e73b-47f5-b7f0-d942ea24c4f0","Type":"ContainerStarted","Data":"18b0f597cfffc29ba970ca6b86d01c6d32ab38cb166167722ad2459b19eff5c3"}
Jan 12 13:23:06 crc kubenswrapper[4580]: I0112 13:23:06.292966 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"026b9966-ae00-4f6a-be8d-bb1d9fffbef3","Type":"ContainerStarted","Data":"d0a58d85527c0aed4074aa607c534e519603813be283df095c7af724c37ce965"}
Jan 12 13:23:06 crc kubenswrapper[4580]: I0112 13:23:06.293022 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"026b9966-ae00-4f6a-be8d-bb1d9fffbef3","Type":"ContainerStarted","Data":"5dcb0c749284f93f9ad1febbcc95e8e51779347fda8efea231f9cd715b2e665a"}
Jan 12 13:23:06 crc kubenswrapper[4580]: I0112 13:23:06.308511 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.308493993 podStartE2EDuration="3.308493993s" podCreationTimestamp="2026-01-12 13:23:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:23:06.304277146 +0000 UTC m=+985.348495836" watchObservedRunningTime="2026-01-12 13:23:06.308493993 +0000 UTC m=+985.352712683"
Jan 12 13:23:06 crc kubenswrapper[4580]: I0112 13:23:06.607137 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-4vq8g"]
Jan 12 13:23:06 crc kubenswrapper[4580]: W0112 13:23:06.624435 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded8ee3a7_b81f_4ff7_9647_2eb207531a43.slice/crio-89d311067498f924511dd2c94aabab652dfafe87a9f6ea03c364f05cf50eaedd WatchSource:0}: Error finding container 89d311067498f924511dd2c94aabab652dfafe87a9f6ea03c364f05cf50eaedd: Status 404 returned error can't find the container with id 89d311067498f924511dd2c94aabab652dfafe87a9f6ea03c364f05cf50eaedd
Jan 12 13:23:06 crc kubenswrapper[4580]: I0112 13:23:06.769029 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 12 13:23:06 crc kubenswrapper[4580]: I0112 13:23:06.925572 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vl47g\" (UniqueName: \"kubernetes.io/projected/46db66c5-efa1-4357-8eef-140731c74ef0-kube-api-access-vl47g\") pod \"46db66c5-efa1-4357-8eef-140731c74ef0\" (UID: \"46db66c5-efa1-4357-8eef-140731c74ef0\") "
Jan 12 13:23:06 crc kubenswrapper[4580]: I0112 13:23:06.925673 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46db66c5-efa1-4357-8eef-140731c74ef0-log-httpd\") pod \"46db66c5-efa1-4357-8eef-140731c74ef0\" (UID: \"46db66c5-efa1-4357-8eef-140731c74ef0\") "
Jan 12 13:23:06 crc kubenswrapper[4580]: I0112 13:23:06.925716 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46db66c5-efa1-4357-8eef-140731c74ef0-combined-ca-bundle\") pod \"46db66c5-efa1-4357-8eef-140731c74ef0\" (UID: \"46db66c5-efa1-4357-8eef-140731c74ef0\") "
Jan 12 13:23:06 crc kubenswrapper[4580]: I0112 13:23:06.925756 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/46db66c5-efa1-4357-8eef-140731c74ef0-sg-core-conf-yaml\") pod \"46db66c5-efa1-4357-8eef-140731c74ef0\" (UID: \"46db66c5-efa1-4357-8eef-140731c74ef0\") "
Jan 12 13:23:06 crc kubenswrapper[4580]: I0112 13:23:06.925772 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46db66c5-efa1-4357-8eef-140731c74ef0-config-data\") pod \"46db66c5-efa1-4357-8eef-140731c74ef0\" (UID: \"46db66c5-efa1-4357-8eef-140731c74ef0\") "
Jan 12 13:23:06 crc kubenswrapper[4580]: I0112 13:23:06.925787 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46db66c5-efa1-4357-8eef-140731c74ef0-run-httpd\") pod \"46db66c5-efa1-4357-8eef-140731c74ef0\" (UID: \"46db66c5-efa1-4357-8eef-140731c74ef0\") "
Jan 12 13:23:06 crc kubenswrapper[4580]: I0112 13:23:06.925804 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46db66c5-efa1-4357-8eef-140731c74ef0-scripts\") pod \"46db66c5-efa1-4357-8eef-140731c74ef0\" (UID: \"46db66c5-efa1-4357-8eef-140731c74ef0\") "
Jan 12 13:23:06 crc kubenswrapper[4580]: I0112 13:23:06.926181 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46db66c5-efa1-4357-8eef-140731c74ef0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "46db66c5-efa1-4357-8eef-140731c74ef0" (UID: "46db66c5-efa1-4357-8eef-140731c74ef0"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 12 13:23:06 crc kubenswrapper[4580]: I0112 13:23:06.926596 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46db66c5-efa1-4357-8eef-140731c74ef0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "46db66c5-efa1-4357-8eef-140731c74ef0" (UID: "46db66c5-efa1-4357-8eef-140731c74ef0"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 12 13:23:06 crc kubenswrapper[4580]: I0112 13:23:06.931915 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46db66c5-efa1-4357-8eef-140731c74ef0-kube-api-access-vl47g" (OuterVolumeSpecName: "kube-api-access-vl47g") pod "46db66c5-efa1-4357-8eef-140731c74ef0" (UID: "46db66c5-efa1-4357-8eef-140731c74ef0"). InnerVolumeSpecName "kube-api-access-vl47g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 13:23:06 crc kubenswrapper[4580]: I0112 13:23:06.932231 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46db66c5-efa1-4357-8eef-140731c74ef0-scripts" (OuterVolumeSpecName: "scripts") pod "46db66c5-efa1-4357-8eef-140731c74ef0" (UID: "46db66c5-efa1-4357-8eef-140731c74ef0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:23:06 crc kubenswrapper[4580]: I0112 13:23:06.950743 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46db66c5-efa1-4357-8eef-140731c74ef0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "46db66c5-efa1-4357-8eef-140731c74ef0" (UID: "46db66c5-efa1-4357-8eef-140731c74ef0"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:23:06 crc kubenswrapper[4580]: I0112 13:23:06.990512 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46db66c5-efa1-4357-8eef-140731c74ef0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "46db66c5-efa1-4357-8eef-140731c74ef0" (UID: "46db66c5-efa1-4357-8eef-140731c74ef0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:23:07 crc kubenswrapper[4580]: I0112 13:23:07.005631 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46db66c5-efa1-4357-8eef-140731c74ef0-config-data" (OuterVolumeSpecName: "config-data") pod "46db66c5-efa1-4357-8eef-140731c74ef0" (UID: "46db66c5-efa1-4357-8eef-140731c74ef0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:23:07 crc kubenswrapper[4580]: I0112 13:23:07.029728 4580 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46db66c5-efa1-4357-8eef-140731c74ef0-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 12 13:23:07 crc kubenswrapper[4580]: I0112 13:23:07.029778 4580 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46db66c5-efa1-4357-8eef-140731c74ef0-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 12 13:23:07 crc kubenswrapper[4580]: I0112 13:23:07.029795 4580 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/46db66c5-efa1-4357-8eef-140731c74ef0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 12 13:23:07 crc kubenswrapper[4580]: I0112 13:23:07.029804 4580 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46db66c5-efa1-4357-8eef-140731c74ef0-config-data\") on node \"crc\" DevicePath \"\""
Jan 12 13:23:07 crc kubenswrapper[4580]: I0112 13:23:07.029814 4580 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46db66c5-efa1-4357-8eef-140731c74ef0-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 12 13:23:07 crc kubenswrapper[4580]: I0112 13:23:07.029823 4580 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46db66c5-efa1-4357-8eef-140731c74ef0-scripts\") on node \"crc\" DevicePath \"\""
Jan 12 13:23:07 crc kubenswrapper[4580]: I0112 13:23:07.029832 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vl47g\" (UniqueName: \"kubernetes.io/projected/46db66c5-efa1-4357-8eef-140731c74ef0-kube-api-access-vl47g\") on node \"crc\" DevicePath \"\""
Jan 12 13:23:07 crc kubenswrapper[4580]: I0112 13:23:07.309689 4580 generic.go:334] "Generic (PLEG): container finished" podID="46db66c5-efa1-4357-8eef-140731c74ef0" containerID="57ea9ed11203cacf9fa0e74a2c5d80b1b5b27117741156772fd5944990dc9228" exitCode=0
Jan 12 13:23:07 crc kubenswrapper[4580]: I0112 13:23:07.309756 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 12 13:23:07 crc kubenswrapper[4580]: I0112 13:23:07.309755 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46db66c5-efa1-4357-8eef-140731c74ef0","Type":"ContainerDied","Data":"57ea9ed11203cacf9fa0e74a2c5d80b1b5b27117741156772fd5944990dc9228"}
Jan 12 13:23:07 crc kubenswrapper[4580]: I0112 13:23:07.309847 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46db66c5-efa1-4357-8eef-140731c74ef0","Type":"ContainerDied","Data":"0d250e569de9c18740539a522d8e0c0ebe9c218b288c101dd789a89a7f43ce1b"}
Jan 12 13:23:07 crc kubenswrapper[4580]: I0112 13:23:07.309896 4580 scope.go:117] "RemoveContainer" containerID="e47d6116f8379eb71288fae98c15f6d8d53bc670c1225cf2291495203ce717b2"
Jan 12 13:23:07 crc kubenswrapper[4580]: I0112 13:23:07.313761 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"026b9966-ae00-4f6a-be8d-bb1d9fffbef3","Type":"ContainerStarted","Data":"b9a1aaf013e7d15f8a46345f00295e62d2176b1249a4171d1cf8d2ee5926f8fd"}
Jan 12 13:23:07 crc kubenswrapper[4580]: I0112 13:23:07.316956 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-4vq8g" event={"ID":"ed8ee3a7-b81f-4ff7-9647-2eb207531a43","Type":"ContainerStarted","Data":"89d311067498f924511dd2c94aabab652dfafe87a9f6ea03c364f05cf50eaedd"}
Jan 12 13:23:07 crc kubenswrapper[4580]: I0112 13:23:07.347265 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.347250306 podStartE2EDuration="3.347250306s" podCreationTimestamp="2026-01-12 13:23:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:23:07.343409596 +0000 UTC m=+986.387628285" watchObservedRunningTime="2026-01-12 13:23:07.347250306 +0000 UTC m=+986.391468997"
Jan 12 13:23:07 crc kubenswrapper[4580]: I0112 13:23:07.375841 4580 scope.go:117] "RemoveContainer" containerID="8f200f4be45c5d4e2b4459601e0c9fcc1e6a7dd734801a5dea0f9bd58207550d"
Jan 12 13:23:07 crc kubenswrapper[4580]: I0112 13:23:07.390149 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 12 13:23:07 crc kubenswrapper[4580]: I0112 13:23:07.398965 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 12 13:23:07 crc kubenswrapper[4580]: I0112 13:23:07.416075 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 12 13:23:07 crc kubenswrapper[4580]: E0112 13:23:07.416466 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46db66c5-efa1-4357-8eef-140731c74ef0" containerName="sg-core"
Jan 12 13:23:07 crc kubenswrapper[4580]: I0112 13:23:07.416484 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="46db66c5-efa1-4357-8eef-140731c74ef0" containerName="sg-core"
Jan 12 13:23:07 crc kubenswrapper[4580]: E0112 13:23:07.416514 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46db66c5-efa1-4357-8eef-140731c74ef0" containerName="ceilometer-notification-agent"
Jan 12 13:23:07 crc kubenswrapper[4580]: I0112 13:23:07.416520 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="46db66c5-efa1-4357-8eef-140731c74ef0" containerName="ceilometer-notification-agent"
Jan 12 13:23:07 crc kubenswrapper[4580]: E0112 13:23:07.416532 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46db66c5-efa1-4357-8eef-140731c74ef0" containerName="proxy-httpd"
Jan 12 13:23:07 crc kubenswrapper[4580]: I0112 13:23:07.416537 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="46db66c5-efa1-4357-8eef-140731c74ef0" containerName="proxy-httpd"
Jan 12 13:23:07 crc kubenswrapper[4580]: E0112 13:23:07.416544 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46db66c5-efa1-4357-8eef-140731c74ef0" containerName="ceilometer-central-agent"
Jan 12 13:23:07 crc kubenswrapper[4580]: I0112 13:23:07.416550 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="46db66c5-efa1-4357-8eef-140731c74ef0" containerName="ceilometer-central-agent"
Jan 12 13:23:07 crc kubenswrapper[4580]: I0112 13:23:07.416699 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="46db66c5-efa1-4357-8eef-140731c74ef0" containerName="sg-core"
Jan 12 13:23:07 crc kubenswrapper[4580]: I0112 13:23:07.416720 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="46db66c5-efa1-4357-8eef-140731c74ef0" containerName="proxy-httpd"
Jan 12 13:23:07 crc kubenswrapper[4580]: I0112 13:23:07.416732 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="46db66c5-efa1-4357-8eef-140731c74ef0" containerName="ceilometer-notification-agent"
Jan 12 13:23:07 crc kubenswrapper[4580]: I0112 13:23:07.416746 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="46db66c5-efa1-4357-8eef-140731c74ef0" containerName="ceilometer-central-agent"
Jan 12 13:23:07 crc kubenswrapper[4580]: I0112 13:23:07.428146 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 12 13:23:07 crc kubenswrapper[4580]: I0112 13:23:07.432469 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 12 13:23:07 crc kubenswrapper[4580]: I0112 13:23:07.434756 4580 scope.go:117] "RemoveContainer" containerID="1b465deb8da74e9b81391c1f4bcd5a4fe422536792e4a43895709ae492760fb3"
Jan 12 13:23:07 crc kubenswrapper[4580]: I0112 13:23:07.445337 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 12 13:23:07 crc kubenswrapper[4580]: I0112 13:23:07.451992 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 12 13:23:07 crc kubenswrapper[4580]: I0112 13:23:07.482763 4580 scope.go:117] "RemoveContainer" containerID="57ea9ed11203cacf9fa0e74a2c5d80b1b5b27117741156772fd5944990dc9228"
Jan 12 13:23:07 crc kubenswrapper[4580]: I0112 13:23:07.511280 4580 scope.go:117] "RemoveContainer" containerID="e47d6116f8379eb71288fae98c15f6d8d53bc670c1225cf2291495203ce717b2"
Jan 12 13:23:07 crc kubenswrapper[4580]: E0112 13:23:07.515183 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e47d6116f8379eb71288fae98c15f6d8d53bc670c1225cf2291495203ce717b2\": container with ID starting with e47d6116f8379eb71288fae98c15f6d8d53bc670c1225cf2291495203ce717b2 not found: ID does not exist" containerID="e47d6116f8379eb71288fae98c15f6d8d53bc670c1225cf2291495203ce717b2"
Jan 12 13:23:07 crc kubenswrapper[4580]: I0112 13:23:07.515219 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e47d6116f8379eb71288fae98c15f6d8d53bc670c1225cf2291495203ce717b2"} err="failed to get container status \"e47d6116f8379eb71288fae98c15f6d8d53bc670c1225cf2291495203ce717b2\": rpc error: code = NotFound desc = could not find container \"e47d6116f8379eb71288fae98c15f6d8d53bc670c1225cf2291495203ce717b2\": container with ID starting with e47d6116f8379eb71288fae98c15f6d8d53bc670c1225cf2291495203ce717b2 not found: ID does not exist"
Jan 12 13:23:07 crc kubenswrapper[4580]: I0112 13:23:07.515242 4580 scope.go:117] "RemoveContainer" containerID="8f200f4be45c5d4e2b4459601e0c9fcc1e6a7dd734801a5dea0f9bd58207550d"
Jan 12 13:23:07 crc kubenswrapper[4580]: E0112 13:23:07.519747 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f200f4be45c5d4e2b4459601e0c9fcc1e6a7dd734801a5dea0f9bd58207550d\": container with ID starting with 8f200f4be45c5d4e2b4459601e0c9fcc1e6a7dd734801a5dea0f9bd58207550d not found: ID does not exist" containerID="8f200f4be45c5d4e2b4459601e0c9fcc1e6a7dd734801a5dea0f9bd58207550d"
Jan 12 13:23:07 crc kubenswrapper[4580]: I0112 13:23:07.519781 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f200f4be45c5d4e2b4459601e0c9fcc1e6a7dd734801a5dea0f9bd58207550d"} err="failed to get container status \"8f200f4be45c5d4e2b4459601e0c9fcc1e6a7dd734801a5dea0f9bd58207550d\": rpc error: code = NotFound desc = could not find container \"8f200f4be45c5d4e2b4459601e0c9fcc1e6a7dd734801a5dea0f9bd58207550d\": container with ID starting with 8f200f4be45c5d4e2b4459601e0c9fcc1e6a7dd734801a5dea0f9bd58207550d not found: ID does not exist"
Jan 12 13:23:07 crc kubenswrapper[4580]: I0112 13:23:07.519800 4580 scope.go:117] "RemoveContainer" containerID="1b465deb8da74e9b81391c1f4bcd5a4fe422536792e4a43895709ae492760fb3"
Jan 12 13:23:07 crc kubenswrapper[4580]: E0112 13:23:07.520080 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b465deb8da74e9b81391c1f4bcd5a4fe422536792e4a43895709ae492760fb3\": container with ID starting with 1b465deb8da74e9b81391c1f4bcd5a4fe422536792e4a43895709ae492760fb3 not found: ID does not exist" containerID="1b465deb8da74e9b81391c1f4bcd5a4fe422536792e4a43895709ae492760fb3"
Jan 12 13:23:07 crc kubenswrapper[4580]: I0112 13:23:07.520097 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b465deb8da74e9b81391c1f4bcd5a4fe422536792e4a43895709ae492760fb3"} err="failed to get container status \"1b465deb8da74e9b81391c1f4bcd5a4fe422536792e4a43895709ae492760fb3\": rpc error: code = NotFound desc = could not find container \"1b465deb8da74e9b81391c1f4bcd5a4fe422536792e4a43895709ae492760fb3\": container with ID starting with 1b465deb8da74e9b81391c1f4bcd5a4fe422536792e4a43895709ae492760fb3 not found: ID does not exist"
Jan 12 13:23:07 crc kubenswrapper[4580]: I0112 13:23:07.520126 4580 scope.go:117] "RemoveContainer" containerID="57ea9ed11203cacf9fa0e74a2c5d80b1b5b27117741156772fd5944990dc9228"
Jan 12 13:23:07 crc kubenswrapper[4580]: E0112 13:23:07.521475 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57ea9ed11203cacf9fa0e74a2c5d80b1b5b27117741156772fd5944990dc9228\": container with ID starting with 57ea9ed11203cacf9fa0e74a2c5d80b1b5b27117741156772fd5944990dc9228 not found: ID does not exist" containerID="57ea9ed11203cacf9fa0e74a2c5d80b1b5b27117741156772fd5944990dc9228"
Jan 12 13:23:07 crc kubenswrapper[4580]: I0112 13:23:07.521503 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57ea9ed11203cacf9fa0e74a2c5d80b1b5b27117741156772fd5944990dc9228"} err="failed to get container status \"57ea9ed11203cacf9fa0e74a2c5d80b1b5b27117741156772fd5944990dc9228\": rpc error: code = NotFound desc = could not find container \"57ea9ed11203cacf9fa0e74a2c5d80b1b5b27117741156772fd5944990dc9228\": container with ID starting with 57ea9ed11203cacf9fa0e74a2c5d80b1b5b27117741156772fd5944990dc9228 not found: ID does not exist"
Jan 12 13:23:07 crc kubenswrapper[4580]: I0112 13:23:07.548354 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkzz8\" (UniqueName: \"kubernetes.io/projected/291e23aa-9411-437f-b6f4-153af1cb50e1-kube-api-access-qkzz8\") pod \"ceilometer-0\" (UID: \"291e23aa-9411-437f-b6f4-153af1cb50e1\") " pod="openstack/ceilometer-0"
Jan 12 13:23:07 crc kubenswrapper[4580]: I0112 13:23:07.548403 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/291e23aa-9411-437f-b6f4-153af1cb50e1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"291e23aa-9411-437f-b6f4-153af1cb50e1\") " pod="openstack/ceilometer-0"
Jan 12 13:23:07 crc kubenswrapper[4580]: I0112 13:23:07.548621 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/291e23aa-9411-437f-b6f4-153af1cb50e1-config-data\") pod \"ceilometer-0\" (UID: \"291e23aa-9411-437f-b6f4-153af1cb50e1\") " pod="openstack/ceilometer-0"
Jan 12 13:23:07 crc kubenswrapper[4580]: I0112 13:23:07.548769 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/291e23aa-9411-437f-b6f4-153af1cb50e1-log-httpd\") pod \"ceilometer-0\" (UID: \"291e23aa-9411-437f-b6f4-153af1cb50e1\") " pod="openstack/ceilometer-0"
Jan 12 13:23:07 crc kubenswrapper[4580]: I0112 13:23:07.548815 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/291e23aa-9411-437f-b6f4-153af1cb50e1-scripts\") pod \"ceilometer-0\" (UID: \"291e23aa-9411-437f-b6f4-153af1cb50e1\") " pod="openstack/ceilometer-0"
Jan 12 13:23:07 crc kubenswrapper[4580]: I0112 13:23:07.548859 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/291e23aa-9411-437f-b6f4-153af1cb50e1-run-httpd\") pod \"ceilometer-0\" (UID: \"291e23aa-9411-437f-b6f4-153af1cb50e1\") " pod="openstack/ceilometer-0"
Jan 12 13:23:07 crc kubenswrapper[4580]: I0112 13:23:07.548883 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/291e23aa-9411-437f-b6f4-153af1cb50e1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"291e23aa-9411-437f-b6f4-153af1cb50e1\") " pod="openstack/ceilometer-0"
Jan 12 13:23:07 crc kubenswrapper[4580]: I0112 13:23:07.651120 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/291e23aa-9411-437f-b6f4-153af1cb50e1-config-data\") pod \"ceilometer-0\" (UID: \"291e23aa-9411-437f-b6f4-153af1cb50e1\") " pod="openstack/ceilometer-0"
Jan 12 13:23:07 crc kubenswrapper[4580]: I0112 13:23:07.651275 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/291e23aa-9411-437f-b6f4-153af1cb50e1-log-httpd\") pod \"ceilometer-0\" (UID: \"291e23aa-9411-437f-b6f4-153af1cb50e1\") " pod="openstack/ceilometer-0"
Jan 12 13:23:07 crc kubenswrapper[4580]: I0112 13:23:07.651318 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/291e23aa-9411-437f-b6f4-153af1cb50e1-scripts\") pod \"ceilometer-0\" (UID: \"291e23aa-9411-437f-b6f4-153af1cb50e1\") " pod="openstack/ceilometer-0"
Jan 12 13:23:07 crc kubenswrapper[4580]: I0112 13:23:07.651358 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/291e23aa-9411-437f-b6f4-153af1cb50e1-run-httpd\") pod \"ceilometer-0\" (UID: \"291e23aa-9411-437f-b6f4-153af1cb50e1\") " pod="openstack/ceilometer-0"
Jan 12 13:23:07 crc kubenswrapper[4580]: I0112 13:23:07.651380 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/291e23aa-9411-437f-b6f4-153af1cb50e1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"291e23aa-9411-437f-b6f4-153af1cb50e1\") " pod="openstack/ceilometer-0"
Jan 12 13:23:07 crc kubenswrapper[4580]: I0112 13:23:07.651441 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkzz8\" (UniqueName: \"kubernetes.io/projected/291e23aa-9411-437f-b6f4-153af1cb50e1-kube-api-access-qkzz8\") pod \"ceilometer-0\" (UID: \"291e23aa-9411-437f-b6f4-153af1cb50e1\") " pod="openstack/ceilometer-0"
Jan 12 13:23:07 crc kubenswrapper[4580]: I0112 13:23:07.651461 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/291e23aa-9411-437f-b6f4-153af1cb50e1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"291e23aa-9411-437f-b6f4-153af1cb50e1\") " pod="openstack/ceilometer-0"
Jan 12 13:23:07 crc kubenswrapper[4580]: I0112 13:23:07.651874 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/291e23aa-9411-437f-b6f4-153af1cb50e1-log-httpd\") pod \"ceilometer-0\" (UID: \"291e23aa-9411-437f-b6f4-153af1cb50e1\") " pod="openstack/ceilometer-0"
Jan 12 13:23:07 crc kubenswrapper[4580]: I0112 13:23:07.651952 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/291e23aa-9411-437f-b6f4-153af1cb50e1-run-httpd\") pod \"ceilometer-0\" (UID: \"291e23aa-9411-437f-b6f4-153af1cb50e1\") " pod="openstack/ceilometer-0"
Jan 12 13:23:07 crc kubenswrapper[4580]: I0112 13:23:07.655645 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/291e23aa-9411-437f-b6f4-153af1cb50e1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"291e23aa-9411-437f-b6f4-153af1cb50e1\") " pod="openstack/ceilometer-0"
Jan 12 13:23:07 crc kubenswrapper[4580]: I0112 13:23:07.655708 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/291e23aa-9411-437f-b6f4-153af1cb50e1-scripts\") pod \"ceilometer-0\" (UID: \"291e23aa-9411-437f-b6f4-153af1cb50e1\") " pod="openstack/ceilometer-0"
Jan 12 13:23:07 crc kubenswrapper[4580]: I0112 13:23:07.657153 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/291e23aa-9411-437f-b6f4-153af1cb50e1-config-data\") pod \"ceilometer-0\" (UID: \"291e23aa-9411-437f-b6f4-153af1cb50e1\") " pod="openstack/ceilometer-0"
Jan 12 13:23:07 crc kubenswrapper[4580]: I0112 13:23:07.665828 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/291e23aa-9411-437f-b6f4-153af1cb50e1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"291e23aa-9411-437f-b6f4-153af1cb50e1\") " pod="openstack/ceilometer-0"
Jan 12 13:23:07 crc kubenswrapper[4580]: I0112 13:23:07.666525 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkzz8\" (UniqueName: \"kubernetes.io/projected/291e23aa-9411-437f-b6f4-153af1cb50e1-kube-api-access-qkzz8\") pod \"ceilometer-0\" (UID: \"291e23aa-9411-437f-b6f4-153af1cb50e1\") " pod="openstack/ceilometer-0"
Jan 12 13:23:07 crc kubenswrapper[4580]: I0112 13:23:07.760313 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 12 13:23:08 crc kubenswrapper[4580]: I0112 13:23:08.186480 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 12 13:23:08 crc kubenswrapper[4580]: I0112 13:23:08.354464 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"291e23aa-9411-437f-b6f4-153af1cb50e1","Type":"ContainerStarted","Data":"3c3eb458baf520a7558055c4e919cc5b4a05bb3dcff076795886ef46272aa2d7"}
Jan 12 13:23:09 crc kubenswrapper[4580]: I0112 13:23:09.303951 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46db66c5-efa1-4357-8eef-140731c74ef0" path="/var/lib/kubelet/pods/46db66c5-efa1-4357-8eef-140731c74ef0/volumes"
Jan 12 13:23:09 crc kubenswrapper[4580]: I0112 13:23:09.367662 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"291e23aa-9411-437f-b6f4-153af1cb50e1","Type":"ContainerStarted","Data":"bdcae0237b0a1f5c47811c4f32c5373ed0b836ddfa608721615e29f04b4178d7"}
Jan 12 13:23:13 crc kubenswrapper[4580]: I0112 13:23:13.576602 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 12 13:23:13 crc kubenswrapper[4580]: I0112 13:23:13.577224 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 12 13:23:13 crc kubenswrapper[4580]: I0112 13:23:13.610117 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 12 13:23:13 crc kubenswrapper[4580]: I0112 13:23:13.611093 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 12 13:23:14 crc kubenswrapper[4580]: I0112 13:23:14.420538 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"291e23aa-9411-437f-b6f4-153af1cb50e1","Type":"ContainerStarted","Data":"1d1bc6d5ce33679b74173dde51a26640ed59de6b6e5b9b384cebc98ecc61b169"}
Jan 12 13:23:14 crc kubenswrapper[4580]: I0112 13:23:14.422591 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-4vq8g" event={"ID":"ed8ee3a7-b81f-4ff7-9647-2eb207531a43","Type":"ContainerStarted","Data":"5fbefe9e2e7271564a8b0734d36f0fca88547ad3daebb81d64310d6538b6534c"}
Jan 12 13:23:14 crc kubenswrapper[4580]: I0112 13:23:14.422974 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Jan 12 13:23:14 crc kubenswrapper[4580]: I0112 13:23:14.423117 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Jan 12 13:23:14 crc kubenswrapper[4580]: I0112 13:23:14.440440 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-4vq8g" podStartSLOduration=2.376286724 podStartE2EDuration="9.440425725s" podCreationTimestamp="2026-01-12 13:23:05 +0000 UTC" firstStartedPulling="2026-01-12 13:23:06.62594863 +0000 UTC m=+985.670167320" lastFinishedPulling="2026-01-12 13:23:13.690087632 +0000 UTC m=+992.734306321" observedRunningTime="2026-01-12 13:23:14.440240386 +0000 UTC m=+993.484459076" watchObservedRunningTime="2026-01-12 13:23:14.440425725 +0000 UTC m=+993.484644415"
Jan 12 13:23:14 crc kubenswrapper[4580]: I0112 13:23:14.880079 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 12 13:23:14 crc kubenswrapper[4580]: I0112 13:23:14.880428 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 12 13:23:14 crc kubenswrapper[4580]: I0112 13:23:14.906993 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 12 13:23:14 crc kubenswrapper[4580]: I0112 13:23:14.917428 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 12 13:23:15 crc kubenswrapper[4580]: I0112 13:23:15.433852 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"291e23aa-9411-437f-b6f4-153af1cb50e1","Type":"ContainerStarted","Data":"d018fbe05e8acc6b74837bb042a6b725901c89ef267ff742c376c35430602c7e"}
Jan 12 13:23:15 crc kubenswrapper[4580]: I0112 13:23:15.434204 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 12 13:23:15 crc kubenswrapper[4580]: I0112 13:23:15.434698 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 12 13:23:16 crc kubenswrapper[4580]: I0112 13:23:16.075266 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 12 13:23:16 crc kubenswrapper[4580]: I0112 13:23:16.079843 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 12 13:23:16 crc kubenswrapper[4580]: I0112 13:23:16.442855 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"291e23aa-9411-437f-b6f4-153af1cb50e1","Type":"ContainerStarted","Data":"8396c0b2265f5e58dbaf0876b1567665e087e08a74aa3b1d728c8cfc8a5413be"}
Jan 12 13:23:16 crc kubenswrapper[4580]: I0112 13:23:16.465540 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.7645734530000001 podStartE2EDuration="9.465524416s" podCreationTimestamp="2026-01-12 13:23:07 +0000 UTC" firstStartedPulling="2026-01-12 13:23:08.204030515 +0000 UTC m=+987.248249204" lastFinishedPulling="2026-01-12 13:23:15.904981477 +0000 UTC m=+994.949200167"
observedRunningTime="2026-01-12 13:23:16.458981116 +0000 UTC m=+995.503199805" watchObservedRunningTime="2026-01-12 13:23:16.465524416 +0000 UTC m=+995.509743105" Jan 12 13:23:16 crc kubenswrapper[4580]: I0112 13:23:16.949991 4580 patch_prober.go:28] interesting pod/machine-config-daemon-hdz6l container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 12 13:23:16 crc kubenswrapper[4580]: I0112 13:23:16.950078 4580 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 12 13:23:17 crc kubenswrapper[4580]: I0112 13:23:17.254305 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 12 13:23:17 crc kubenswrapper[4580]: I0112 13:23:17.258150 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 12 13:23:17 crc kubenswrapper[4580]: I0112 13:23:17.453337 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 12 13:23:20 crc kubenswrapper[4580]: I0112 13:23:20.483883 4580 generic.go:334] "Generic (PLEG): container finished" podID="ed8ee3a7-b81f-4ff7-9647-2eb207531a43" containerID="5fbefe9e2e7271564a8b0734d36f0fca88547ad3daebb81d64310d6538b6534c" exitCode=0 Jan 12 13:23:20 crc kubenswrapper[4580]: I0112 13:23:20.483973 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-4vq8g" 
event={"ID":"ed8ee3a7-b81f-4ff7-9647-2eb207531a43","Type":"ContainerDied","Data":"5fbefe9e2e7271564a8b0734d36f0fca88547ad3daebb81d64310d6538b6534c"} Jan 12 13:23:21 crc kubenswrapper[4580]: I0112 13:23:21.793605 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-4vq8g" Jan 12 13:23:21 crc kubenswrapper[4580]: I0112 13:23:21.963500 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed8ee3a7-b81f-4ff7-9647-2eb207531a43-scripts\") pod \"ed8ee3a7-b81f-4ff7-9647-2eb207531a43\" (UID: \"ed8ee3a7-b81f-4ff7-9647-2eb207531a43\") " Jan 12 13:23:21 crc kubenswrapper[4580]: I0112 13:23:21.963592 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rt7d\" (UniqueName: \"kubernetes.io/projected/ed8ee3a7-b81f-4ff7-9647-2eb207531a43-kube-api-access-9rt7d\") pod \"ed8ee3a7-b81f-4ff7-9647-2eb207531a43\" (UID: \"ed8ee3a7-b81f-4ff7-9647-2eb207531a43\") " Jan 12 13:23:21 crc kubenswrapper[4580]: I0112 13:23:21.963635 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed8ee3a7-b81f-4ff7-9647-2eb207531a43-combined-ca-bundle\") pod \"ed8ee3a7-b81f-4ff7-9647-2eb207531a43\" (UID: \"ed8ee3a7-b81f-4ff7-9647-2eb207531a43\") " Jan 12 13:23:21 crc kubenswrapper[4580]: I0112 13:23:21.963690 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed8ee3a7-b81f-4ff7-9647-2eb207531a43-config-data\") pod \"ed8ee3a7-b81f-4ff7-9647-2eb207531a43\" (UID: \"ed8ee3a7-b81f-4ff7-9647-2eb207531a43\") " Jan 12 13:23:21 crc kubenswrapper[4580]: I0112 13:23:21.973531 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed8ee3a7-b81f-4ff7-9647-2eb207531a43-kube-api-access-9rt7d" 
(OuterVolumeSpecName: "kube-api-access-9rt7d") pod "ed8ee3a7-b81f-4ff7-9647-2eb207531a43" (UID: "ed8ee3a7-b81f-4ff7-9647-2eb207531a43"). InnerVolumeSpecName "kube-api-access-9rt7d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:23:21 crc kubenswrapper[4580]: I0112 13:23:21.974651 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed8ee3a7-b81f-4ff7-9647-2eb207531a43-scripts" (OuterVolumeSpecName: "scripts") pod "ed8ee3a7-b81f-4ff7-9647-2eb207531a43" (UID: "ed8ee3a7-b81f-4ff7-9647-2eb207531a43"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:23:21 crc kubenswrapper[4580]: I0112 13:23:21.987981 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed8ee3a7-b81f-4ff7-9647-2eb207531a43-config-data" (OuterVolumeSpecName: "config-data") pod "ed8ee3a7-b81f-4ff7-9647-2eb207531a43" (UID: "ed8ee3a7-b81f-4ff7-9647-2eb207531a43"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:23:21 crc kubenswrapper[4580]: I0112 13:23:21.989648 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed8ee3a7-b81f-4ff7-9647-2eb207531a43-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ed8ee3a7-b81f-4ff7-9647-2eb207531a43" (UID: "ed8ee3a7-b81f-4ff7-9647-2eb207531a43"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:23:22 crc kubenswrapper[4580]: I0112 13:23:22.066379 4580 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed8ee3a7-b81f-4ff7-9647-2eb207531a43-config-data\") on node \"crc\" DevicePath \"\"" Jan 12 13:23:22 crc kubenswrapper[4580]: I0112 13:23:22.066404 4580 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed8ee3a7-b81f-4ff7-9647-2eb207531a43-scripts\") on node \"crc\" DevicePath \"\"" Jan 12 13:23:22 crc kubenswrapper[4580]: I0112 13:23:22.066414 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rt7d\" (UniqueName: \"kubernetes.io/projected/ed8ee3a7-b81f-4ff7-9647-2eb207531a43-kube-api-access-9rt7d\") on node \"crc\" DevicePath \"\"" Jan 12 13:23:22 crc kubenswrapper[4580]: I0112 13:23:22.066425 4580 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed8ee3a7-b81f-4ff7-9647-2eb207531a43-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 12 13:23:22 crc kubenswrapper[4580]: I0112 13:23:22.502496 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-4vq8g" event={"ID":"ed8ee3a7-b81f-4ff7-9647-2eb207531a43","Type":"ContainerDied","Data":"89d311067498f924511dd2c94aabab652dfafe87a9f6ea03c364f05cf50eaedd"} Jan 12 13:23:22 crc kubenswrapper[4580]: I0112 13:23:22.502554 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-4vq8g" Jan 12 13:23:22 crc kubenswrapper[4580]: I0112 13:23:22.502566 4580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89d311067498f924511dd2c94aabab652dfafe87a9f6ea03c364f05cf50eaedd" Jan 12 13:23:22 crc kubenswrapper[4580]: I0112 13:23:22.586561 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 12 13:23:22 crc kubenswrapper[4580]: E0112 13:23:22.586961 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed8ee3a7-b81f-4ff7-9647-2eb207531a43" containerName="nova-cell0-conductor-db-sync" Jan 12 13:23:22 crc kubenswrapper[4580]: I0112 13:23:22.586977 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed8ee3a7-b81f-4ff7-9647-2eb207531a43" containerName="nova-cell0-conductor-db-sync" Jan 12 13:23:22 crc kubenswrapper[4580]: I0112 13:23:22.587190 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed8ee3a7-b81f-4ff7-9647-2eb207531a43" containerName="nova-cell0-conductor-db-sync" Jan 12 13:23:22 crc kubenswrapper[4580]: I0112 13:23:22.587781 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 12 13:23:22 crc kubenswrapper[4580]: I0112 13:23:22.590340 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 12 13:23:22 crc kubenswrapper[4580]: I0112 13:23:22.592980 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-5bbdg" Jan 12 13:23:22 crc kubenswrapper[4580]: I0112 13:23:22.606028 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 12 13:23:22 crc kubenswrapper[4580]: I0112 13:23:22.782916 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49f4v\" (UniqueName: \"kubernetes.io/projected/621c9246-ea68-42b4-b799-961af70ca4f5-kube-api-access-49f4v\") pod \"nova-cell0-conductor-0\" (UID: \"621c9246-ea68-42b4-b799-961af70ca4f5\") " pod="openstack/nova-cell0-conductor-0" Jan 12 13:23:22 crc kubenswrapper[4580]: I0112 13:23:22.783149 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/621c9246-ea68-42b4-b799-961af70ca4f5-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"621c9246-ea68-42b4-b799-961af70ca4f5\") " pod="openstack/nova-cell0-conductor-0" Jan 12 13:23:22 crc kubenswrapper[4580]: I0112 13:23:22.783295 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/621c9246-ea68-42b4-b799-961af70ca4f5-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"621c9246-ea68-42b4-b799-961af70ca4f5\") " pod="openstack/nova-cell0-conductor-0" Jan 12 13:23:22 crc kubenswrapper[4580]: I0112 13:23:22.884910 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49f4v\" (UniqueName: 
\"kubernetes.io/projected/621c9246-ea68-42b4-b799-961af70ca4f5-kube-api-access-49f4v\") pod \"nova-cell0-conductor-0\" (UID: \"621c9246-ea68-42b4-b799-961af70ca4f5\") " pod="openstack/nova-cell0-conductor-0" Jan 12 13:23:22 crc kubenswrapper[4580]: I0112 13:23:22.884978 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/621c9246-ea68-42b4-b799-961af70ca4f5-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"621c9246-ea68-42b4-b799-961af70ca4f5\") " pod="openstack/nova-cell0-conductor-0" Jan 12 13:23:22 crc kubenswrapper[4580]: I0112 13:23:22.885034 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/621c9246-ea68-42b4-b799-961af70ca4f5-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"621c9246-ea68-42b4-b799-961af70ca4f5\") " pod="openstack/nova-cell0-conductor-0" Jan 12 13:23:22 crc kubenswrapper[4580]: I0112 13:23:22.890126 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/621c9246-ea68-42b4-b799-961af70ca4f5-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"621c9246-ea68-42b4-b799-961af70ca4f5\") " pod="openstack/nova-cell0-conductor-0" Jan 12 13:23:22 crc kubenswrapper[4580]: I0112 13:23:22.891250 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/621c9246-ea68-42b4-b799-961af70ca4f5-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"621c9246-ea68-42b4-b799-961af70ca4f5\") " pod="openstack/nova-cell0-conductor-0" Jan 12 13:23:22 crc kubenswrapper[4580]: I0112 13:23:22.901205 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49f4v\" (UniqueName: \"kubernetes.io/projected/621c9246-ea68-42b4-b799-961af70ca4f5-kube-api-access-49f4v\") pod \"nova-cell0-conductor-0\" (UID: 
\"621c9246-ea68-42b4-b799-961af70ca4f5\") " pod="openstack/nova-cell0-conductor-0" Jan 12 13:23:22 crc kubenswrapper[4580]: I0112 13:23:22.901679 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 12 13:23:23 crc kubenswrapper[4580]: I0112 13:23:23.319958 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 12 13:23:23 crc kubenswrapper[4580]: W0112 13:23:23.327969 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod621c9246_ea68_42b4_b799_961af70ca4f5.slice/crio-552a11e302941ea380b81df414baa6659df3b74cd4b10f57f5a4bb4a3b5dcd4a WatchSource:0}: Error finding container 552a11e302941ea380b81df414baa6659df3b74cd4b10f57f5a4bb4a3b5dcd4a: Status 404 returned error can't find the container with id 552a11e302941ea380b81df414baa6659df3b74cd4b10f57f5a4bb4a3b5dcd4a Jan 12 13:23:23 crc kubenswrapper[4580]: I0112 13:23:23.511822 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"621c9246-ea68-42b4-b799-961af70ca4f5","Type":"ContainerStarted","Data":"5d43baa308693ea480040c281705401945757f3e7f769c588febba2efdc6c400"} Jan 12 13:23:23 crc kubenswrapper[4580]: I0112 13:23:23.511881 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"621c9246-ea68-42b4-b799-961af70ca4f5","Type":"ContainerStarted","Data":"552a11e302941ea380b81df414baa6659df3b74cd4b10f57f5a4bb4a3b5dcd4a"} Jan 12 13:23:23 crc kubenswrapper[4580]: I0112 13:23:23.512033 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 12 13:23:23 crc kubenswrapper[4580]: I0112 13:23:23.526410 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=1.526389703 podStartE2EDuration="1.526389703s" 
podCreationTimestamp="2026-01-12 13:23:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:23:23.525140936 +0000 UTC m=+1002.569359625" watchObservedRunningTime="2026-01-12 13:23:23.526389703 +0000 UTC m=+1002.570608393" Jan 12 13:23:32 crc kubenswrapper[4580]: I0112 13:23:32.927506 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.334871 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-fsx29"] Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.335995 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-fsx29" Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.342475 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.344283 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.345492 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-fsx29"] Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.400790 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2027dbc4-0cd9-405d-8f11-9c57de3d47e6-config-data\") pod \"nova-cell0-cell-mapping-fsx29\" (UID: \"2027dbc4-0cd9-405d-8f11-9c57de3d47e6\") " pod="openstack/nova-cell0-cell-mapping-fsx29" Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.400832 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2027dbc4-0cd9-405d-8f11-9c57de3d47e6-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-fsx29\" (UID: \"2027dbc4-0cd9-405d-8f11-9c57de3d47e6\") " pod="openstack/nova-cell0-cell-mapping-fsx29" Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.400889 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dwzg\" (UniqueName: \"kubernetes.io/projected/2027dbc4-0cd9-405d-8f11-9c57de3d47e6-kube-api-access-5dwzg\") pod \"nova-cell0-cell-mapping-fsx29\" (UID: \"2027dbc4-0cd9-405d-8f11-9c57de3d47e6\") " pod="openstack/nova-cell0-cell-mapping-fsx29" Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.400927 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2027dbc4-0cd9-405d-8f11-9c57de3d47e6-scripts\") pod \"nova-cell0-cell-mapping-fsx29\" (UID: \"2027dbc4-0cd9-405d-8f11-9c57de3d47e6\") " pod="openstack/nova-cell0-cell-mapping-fsx29" Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.462799 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.464007 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.467142 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.472909 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.495944 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.499377 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.501006 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.502478 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b20c518-9be8-47a0-82bd-c2886a86ce70-config-data\") pod \"nova-scheduler-0\" (UID: \"2b20c518-9be8-47a0-82bd-c2886a86ce70\") " pod="openstack/nova-scheduler-0" Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.502578 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2027dbc4-0cd9-405d-8f11-9c57de3d47e6-scripts\") pod \"nova-cell0-cell-mapping-fsx29\" (UID: \"2027dbc4-0cd9-405d-8f11-9c57de3d47e6\") " pod="openstack/nova-cell0-cell-mapping-fsx29" Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.502655 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41c144d2-91b1-4672-84a8-5dc673ac910f-logs\") pod \"nova-api-0\" (UID: \"41c144d2-91b1-4672-84a8-5dc673ac910f\") " pod="openstack/nova-api-0" Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.502688 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b20c518-9be8-47a0-82bd-c2886a86ce70-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2b20c518-9be8-47a0-82bd-c2886a86ce70\") " pod="openstack/nova-scheduler-0" Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.502777 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lhbj\" (UniqueName: \"kubernetes.io/projected/41c144d2-91b1-4672-84a8-5dc673ac910f-kube-api-access-4lhbj\") 
pod \"nova-api-0\" (UID: \"41c144d2-91b1-4672-84a8-5dc673ac910f\") " pod="openstack/nova-api-0" Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.502800 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkhtw\" (UniqueName: \"kubernetes.io/projected/2b20c518-9be8-47a0-82bd-c2886a86ce70-kube-api-access-pkhtw\") pod \"nova-scheduler-0\" (UID: \"2b20c518-9be8-47a0-82bd-c2886a86ce70\") " pod="openstack/nova-scheduler-0" Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.502867 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2027dbc4-0cd9-405d-8f11-9c57de3d47e6-config-data\") pod \"nova-cell0-cell-mapping-fsx29\" (UID: \"2027dbc4-0cd9-405d-8f11-9c57de3d47e6\") " pod="openstack/nova-cell0-cell-mapping-fsx29" Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.502885 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2027dbc4-0cd9-405d-8f11-9c57de3d47e6-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-fsx29\" (UID: \"2027dbc4-0cd9-405d-8f11-9c57de3d47e6\") " pod="openstack/nova-cell0-cell-mapping-fsx29" Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.502929 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41c144d2-91b1-4672-84a8-5dc673ac910f-config-data\") pod \"nova-api-0\" (UID: \"41c144d2-91b1-4672-84a8-5dc673ac910f\") " pod="openstack/nova-api-0" Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.502953 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41c144d2-91b1-4672-84a8-5dc673ac910f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"41c144d2-91b1-4672-84a8-5dc673ac910f\") " 
pod="openstack/nova-api-0" Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.503019 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dwzg\" (UniqueName: \"kubernetes.io/projected/2027dbc4-0cd9-405d-8f11-9c57de3d47e6-kube-api-access-5dwzg\") pod \"nova-cell0-cell-mapping-fsx29\" (UID: \"2027dbc4-0cd9-405d-8f11-9c57de3d47e6\") " pod="openstack/nova-cell0-cell-mapping-fsx29" Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.509387 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2027dbc4-0cd9-405d-8f11-9c57de3d47e6-config-data\") pod \"nova-cell0-cell-mapping-fsx29\" (UID: \"2027dbc4-0cd9-405d-8f11-9c57de3d47e6\") " pod="openstack/nova-cell0-cell-mapping-fsx29" Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.509949 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2027dbc4-0cd9-405d-8f11-9c57de3d47e6-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-fsx29\" (UID: \"2027dbc4-0cd9-405d-8f11-9c57de3d47e6\") " pod="openstack/nova-cell0-cell-mapping-fsx29" Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.531681 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2027dbc4-0cd9-405d-8f11-9c57de3d47e6-scripts\") pod \"nova-cell0-cell-mapping-fsx29\" (UID: \"2027dbc4-0cd9-405d-8f11-9c57de3d47e6\") " pod="openstack/nova-cell0-cell-mapping-fsx29" Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.538550 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dwzg\" (UniqueName: \"kubernetes.io/projected/2027dbc4-0cd9-405d-8f11-9c57de3d47e6-kube-api-access-5dwzg\") pod \"nova-cell0-cell-mapping-fsx29\" (UID: \"2027dbc4-0cd9-405d-8f11-9c57de3d47e6\") " pod="openstack/nova-cell0-cell-mapping-fsx29" Jan 12 13:23:33 crc kubenswrapper[4580]: 
I0112 13:23:33.542141 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.596612 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.597906 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.606229 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41c144d2-91b1-4672-84a8-5dc673ac910f-config-data\") pod \"nova-api-0\" (UID: \"41c144d2-91b1-4672-84a8-5dc673ac910f\") " pod="openstack/nova-api-0" Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.606261 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41c144d2-91b1-4672-84a8-5dc673ac910f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"41c144d2-91b1-4672-84a8-5dc673ac910f\") " pod="openstack/nova-api-0" Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.606308 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b20c518-9be8-47a0-82bd-c2886a86ce70-config-data\") pod \"nova-scheduler-0\" (UID: \"2b20c518-9be8-47a0-82bd-c2886a86ce70\") " pod="openstack/nova-scheduler-0" Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.606390 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41c144d2-91b1-4672-84a8-5dc673ac910f-logs\") pod \"nova-api-0\" (UID: \"41c144d2-91b1-4672-84a8-5dc673ac910f\") " pod="openstack/nova-api-0" Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.606410 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2b20c518-9be8-47a0-82bd-c2886a86ce70-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2b20c518-9be8-47a0-82bd-c2886a86ce70\") " pod="openstack/nova-scheduler-0" Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.606468 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lhbj\" (UniqueName: \"kubernetes.io/projected/41c144d2-91b1-4672-84a8-5dc673ac910f-kube-api-access-4lhbj\") pod \"nova-api-0\" (UID: \"41c144d2-91b1-4672-84a8-5dc673ac910f\") " pod="openstack/nova-api-0" Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.606488 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkhtw\" (UniqueName: \"kubernetes.io/projected/2b20c518-9be8-47a0-82bd-c2886a86ce70-kube-api-access-pkhtw\") pod \"nova-scheduler-0\" (UID: \"2b20c518-9be8-47a0-82bd-c2886a86ce70\") " pod="openstack/nova-scheduler-0" Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.607705 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41c144d2-91b1-4672-84a8-5dc673ac910f-logs\") pod \"nova-api-0\" (UID: \"41c144d2-91b1-4672-84a8-5dc673ac910f\") " pod="openstack/nova-api-0" Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.609319 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.617584 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.618047 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41c144d2-91b1-4672-84a8-5dc673ac910f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"41c144d2-91b1-4672-84a8-5dc673ac910f\") " pod="openstack/nova-api-0" Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.618621 4580 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41c144d2-91b1-4672-84a8-5dc673ac910f-config-data\") pod \"nova-api-0\" (UID: \"41c144d2-91b1-4672-84a8-5dc673ac910f\") " pod="openstack/nova-api-0"
Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.620678 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b20c518-9be8-47a0-82bd-c2886a86ce70-config-data\") pod \"nova-scheduler-0\" (UID: \"2b20c518-9be8-47a0-82bd-c2886a86ce70\") " pod="openstack/nova-scheduler-0"
Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.663161 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lhbj\" (UniqueName: \"kubernetes.io/projected/41c144d2-91b1-4672-84a8-5dc673ac910f-kube-api-access-4lhbj\") pod \"nova-api-0\" (UID: \"41c144d2-91b1-4672-84a8-5dc673ac910f\") " pod="openstack/nova-api-0"
Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.676462 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b20c518-9be8-47a0-82bd-c2886a86ce70-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2b20c518-9be8-47a0-82bd-c2886a86ce70\") " pod="openstack/nova-scheduler-0"
Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.685667 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-fsx29"
Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.727944 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbg77\" (UniqueName: \"kubernetes.io/projected/43f0d1f8-fc13-40e1-a084-c53c05823f8f-kube-api-access-nbg77\") pod \"nova-metadata-0\" (UID: \"43f0d1f8-fc13-40e1-a084-c53c05823f8f\") " pod="openstack/nova-metadata-0"
Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.728120 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43f0d1f8-fc13-40e1-a084-c53c05823f8f-logs\") pod \"nova-metadata-0\" (UID: \"43f0d1f8-fc13-40e1-a084-c53c05823f8f\") " pod="openstack/nova-metadata-0"
Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.728185 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43f0d1f8-fc13-40e1-a084-c53c05823f8f-config-data\") pod \"nova-metadata-0\" (UID: \"43f0d1f8-fc13-40e1-a084-c53c05823f8f\") " pod="openstack/nova-metadata-0"
Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.728222 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43f0d1f8-fc13-40e1-a084-c53c05823f8f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"43f0d1f8-fc13-40e1-a084-c53c05823f8f\") " pod="openstack/nova-metadata-0"
Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.757930 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-647df7b8c5-t89b4"]
Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.772443 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-647df7b8c5-t89b4"]
Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.772561 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-647df7b8c5-t89b4"
Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.782839 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkhtw\" (UniqueName: \"kubernetes.io/projected/2b20c518-9be8-47a0-82bd-c2886a86ce70-kube-api-access-pkhtw\") pod \"nova-scheduler-0\" (UID: \"2b20c518-9be8-47a0-82bd-c2886a86ce70\") " pod="openstack/nova-scheduler-0"
Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.812150 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.825479 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.828965 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.836710 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43f0d1f8-fc13-40e1-a084-c53c05823f8f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"43f0d1f8-fc13-40e1-a084-c53c05823f8f\") " pod="openstack/nova-metadata-0"
Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.836822 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb53c318-7cae-4f3e-8940-bb9760f21707-dns-svc\") pod \"dnsmasq-dns-647df7b8c5-t89b4\" (UID: \"cb53c318-7cae-4f3e-8940-bb9760f21707\") " pod="openstack/dnsmasq-dns-647df7b8c5-t89b4"
Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.836852 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cb53c318-7cae-4f3e-8940-bb9760f21707-dns-swift-storage-0\") pod \"dnsmasq-dns-647df7b8c5-t89b4\" (UID: \"cb53c318-7cae-4f3e-8940-bb9760f21707\") " pod="openstack/dnsmasq-dns-647df7b8c5-t89b4"
Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.836958 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjfl8\" (UniqueName: \"kubernetes.io/projected/cb53c318-7cae-4f3e-8940-bb9760f21707-kube-api-access-xjfl8\") pod \"dnsmasq-dns-647df7b8c5-t89b4\" (UID: \"cb53c318-7cae-4f3e-8940-bb9760f21707\") " pod="openstack/dnsmasq-dns-647df7b8c5-t89b4"
Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.837000 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb53c318-7cae-4f3e-8940-bb9760f21707-ovsdbserver-nb\") pod \"dnsmasq-dns-647df7b8c5-t89b4\" (UID: \"cb53c318-7cae-4f3e-8940-bb9760f21707\") " pod="openstack/dnsmasq-dns-647df7b8c5-t89b4"
Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.837070 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb53c318-7cae-4f3e-8940-bb9760f21707-ovsdbserver-sb\") pod \"dnsmasq-dns-647df7b8c5-t89b4\" (UID: \"cb53c318-7cae-4f3e-8940-bb9760f21707\") " pod="openstack/dnsmasq-dns-647df7b8c5-t89b4"
Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.837095 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbg77\" (UniqueName: \"kubernetes.io/projected/43f0d1f8-fc13-40e1-a084-c53c05823f8f-kube-api-access-nbg77\") pod \"nova-metadata-0\" (UID: \"43f0d1f8-fc13-40e1-a084-c53c05823f8f\") " pod="openstack/nova-metadata-0"
Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.837560 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43f0d1f8-fc13-40e1-a084-c53c05823f8f-logs\") pod \"nova-metadata-0\" (UID: \"43f0d1f8-fc13-40e1-a084-c53c05823f8f\") " pod="openstack/nova-metadata-0"
Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.837602 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb53c318-7cae-4f3e-8940-bb9760f21707-config\") pod \"dnsmasq-dns-647df7b8c5-t89b4\" (UID: \"cb53c318-7cae-4f3e-8940-bb9760f21707\") " pod="openstack/dnsmasq-dns-647df7b8c5-t89b4"
Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.837641 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43f0d1f8-fc13-40e1-a084-c53c05823f8f-config-data\") pod \"nova-metadata-0\" (UID: \"43f0d1f8-fc13-40e1-a084-c53c05823f8f\") " pod="openstack/nova-metadata-0"
Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.838034 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.838238 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43f0d1f8-fc13-40e1-a084-c53c05823f8f-logs\") pod \"nova-metadata-0\" (UID: \"43f0d1f8-fc13-40e1-a084-c53c05823f8f\") " pod="openstack/nova-metadata-0"
Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.848954 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43f0d1f8-fc13-40e1-a084-c53c05823f8f-config-data\") pod \"nova-metadata-0\" (UID: \"43f0d1f8-fc13-40e1-a084-c53c05823f8f\") " pod="openstack/nova-metadata-0"
Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.854881 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43f0d1f8-fc13-40e1-a084-c53c05823f8f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"43f0d1f8-fc13-40e1-a084-c53c05823f8f\") " pod="openstack/nova-metadata-0"
Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.884692 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbg77\" (UniqueName: \"kubernetes.io/projected/43f0d1f8-fc13-40e1-a084-c53c05823f8f-kube-api-access-nbg77\") pod \"nova-metadata-0\" (UID: \"43f0d1f8-fc13-40e1-a084-c53c05823f8f\") " pod="openstack/nova-metadata-0"
Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.893202 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.944383 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjfl8\" (UniqueName: \"kubernetes.io/projected/cb53c318-7cae-4f3e-8940-bb9760f21707-kube-api-access-xjfl8\") pod \"dnsmasq-dns-647df7b8c5-t89b4\" (UID: \"cb53c318-7cae-4f3e-8940-bb9760f21707\") " pod="openstack/dnsmasq-dns-647df7b8c5-t89b4"
Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.944449 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb53c318-7cae-4f3e-8940-bb9760f21707-ovsdbserver-nb\") pod \"dnsmasq-dns-647df7b8c5-t89b4\" (UID: \"cb53c318-7cae-4f3e-8940-bb9760f21707\") " pod="openstack/dnsmasq-dns-647df7b8c5-t89b4"
Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.944518 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb53c318-7cae-4f3e-8940-bb9760f21707-ovsdbserver-sb\") pod \"dnsmasq-dns-647df7b8c5-t89b4\" (UID: \"cb53c318-7cae-4f3e-8940-bb9760f21707\") " pod="openstack/dnsmasq-dns-647df7b8c5-t89b4"
Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.944564 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1aad4093-0475-4373-8949-a803f9ed01c5-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1aad4093-0475-4373-8949-a803f9ed01c5\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.944637 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvsdn\" (UniqueName: \"kubernetes.io/projected/1aad4093-0475-4373-8949-a803f9ed01c5-kube-api-access-lvsdn\") pod \"nova-cell1-novncproxy-0\" (UID: \"1aad4093-0475-4373-8949-a803f9ed01c5\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.944677 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb53c318-7cae-4f3e-8940-bb9760f21707-config\") pod \"dnsmasq-dns-647df7b8c5-t89b4\" (UID: \"cb53c318-7cae-4f3e-8940-bb9760f21707\") " pod="openstack/dnsmasq-dns-647df7b8c5-t89b4"
Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.944788 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb53c318-7cae-4f3e-8940-bb9760f21707-dns-svc\") pod \"dnsmasq-dns-647df7b8c5-t89b4\" (UID: \"cb53c318-7cae-4f3e-8940-bb9760f21707\") " pod="openstack/dnsmasq-dns-647df7b8c5-t89b4"
Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.944808 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cb53c318-7cae-4f3e-8940-bb9760f21707-dns-swift-storage-0\") pod \"dnsmasq-dns-647df7b8c5-t89b4\" (UID: \"cb53c318-7cae-4f3e-8940-bb9760f21707\") " pod="openstack/dnsmasq-dns-647df7b8c5-t89b4"
Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.944869 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1aad4093-0475-4373-8949-a803f9ed01c5-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1aad4093-0475-4373-8949-a803f9ed01c5\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.945645 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb53c318-7cae-4f3e-8940-bb9760f21707-ovsdbserver-sb\") pod \"dnsmasq-dns-647df7b8c5-t89b4\" (UID: \"cb53c318-7cae-4f3e-8940-bb9760f21707\") " pod="openstack/dnsmasq-dns-647df7b8c5-t89b4"
Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.945663 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb53c318-7cae-4f3e-8940-bb9760f21707-config\") pod \"dnsmasq-dns-647df7b8c5-t89b4\" (UID: \"cb53c318-7cae-4f3e-8940-bb9760f21707\") " pod="openstack/dnsmasq-dns-647df7b8c5-t89b4"
Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.946695 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb53c318-7cae-4f3e-8940-bb9760f21707-dns-svc\") pod \"dnsmasq-dns-647df7b8c5-t89b4\" (UID: \"cb53c318-7cae-4f3e-8940-bb9760f21707\") " pod="openstack/dnsmasq-dns-647df7b8c5-t89b4"
Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.946801 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb53c318-7cae-4f3e-8940-bb9760f21707-ovsdbserver-nb\") pod \"dnsmasq-dns-647df7b8c5-t89b4\" (UID: \"cb53c318-7cae-4f3e-8940-bb9760f21707\") " pod="openstack/dnsmasq-dns-647df7b8c5-t89b4"
Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.964250 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cb53c318-7cae-4f3e-8940-bb9760f21707-dns-swift-storage-0\") pod \"dnsmasq-dns-647df7b8c5-t89b4\" (UID: \"cb53c318-7cae-4f3e-8940-bb9760f21707\") " pod="openstack/dnsmasq-dns-647df7b8c5-t89b4"
Jan 12 13:23:33 crc kubenswrapper[4580]: I0112 13:23:33.975724 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjfl8\" (UniqueName: \"kubernetes.io/projected/cb53c318-7cae-4f3e-8940-bb9760f21707-kube-api-access-xjfl8\") pod \"dnsmasq-dns-647df7b8c5-t89b4\" (UID: \"cb53c318-7cae-4f3e-8940-bb9760f21707\") " pod="openstack/dnsmasq-dns-647df7b8c5-t89b4"
Jan 12 13:23:34 crc kubenswrapper[4580]: I0112 13:23:34.047768 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1aad4093-0475-4373-8949-a803f9ed01c5-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1aad4093-0475-4373-8949-a803f9ed01c5\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 12 13:23:34 crc kubenswrapper[4580]: I0112 13:23:34.048094 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvsdn\" (UniqueName: \"kubernetes.io/projected/1aad4093-0475-4373-8949-a803f9ed01c5-kube-api-access-lvsdn\") pod \"nova-cell1-novncproxy-0\" (UID: \"1aad4093-0475-4373-8949-a803f9ed01c5\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 12 13:23:34 crc kubenswrapper[4580]: I0112 13:23:34.048290 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1aad4093-0475-4373-8949-a803f9ed01c5-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1aad4093-0475-4373-8949-a803f9ed01c5\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 12 13:23:34 crc kubenswrapper[4580]: I0112 13:23:34.056197 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1aad4093-0475-4373-8949-a803f9ed01c5-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1aad4093-0475-4373-8949-a803f9ed01c5\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 12 13:23:34 crc kubenswrapper[4580]: I0112 13:23:34.058385 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1aad4093-0475-4373-8949-a803f9ed01c5-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1aad4093-0475-4373-8949-a803f9ed01c5\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 12 13:23:34 crc kubenswrapper[4580]: I0112 13:23:34.058921 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 12 13:23:34 crc kubenswrapper[4580]: I0112 13:23:34.067163 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvsdn\" (UniqueName: \"kubernetes.io/projected/1aad4093-0475-4373-8949-a803f9ed01c5-kube-api-access-lvsdn\") pod \"nova-cell1-novncproxy-0\" (UID: \"1aad4093-0475-4373-8949-a803f9ed01c5\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 12 13:23:34 crc kubenswrapper[4580]: I0112 13:23:34.078962 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 12 13:23:34 crc kubenswrapper[4580]: I0112 13:23:34.120402 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-647df7b8c5-t89b4"
Jan 12 13:23:34 crc kubenswrapper[4580]: I0112 13:23:34.172796 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 12 13:23:34 crc kubenswrapper[4580]: I0112 13:23:34.378810 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-fsx29"]
Jan 12 13:23:34 crc kubenswrapper[4580]: I0112 13:23:34.611689 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 12 13:23:34 crc kubenswrapper[4580]: I0112 13:23:34.632256 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-fsx29" event={"ID":"2027dbc4-0cd9-405d-8f11-9c57de3d47e6","Type":"ContainerStarted","Data":"84298754a644471a5c1aa490dff4d6db7074699de21f30bee275f88b63306e48"}
Jan 12 13:23:34 crc kubenswrapper[4580]: I0112 13:23:34.632310 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-fsx29" event={"ID":"2027dbc4-0cd9-405d-8f11-9c57de3d47e6","Type":"ContainerStarted","Data":"dd69bedf17f7a336c94efd8d2f053153a3b08ed7c0364027dfb9e85e59711270"}
Jan 12 13:23:34 crc kubenswrapper[4580]: I0112 13:23:34.649195 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-fsx29" podStartSLOduration=1.649178798 podStartE2EDuration="1.649178798s" podCreationTimestamp="2026-01-12 13:23:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:23:34.645210899 +0000 UTC m=+1013.689429589" watchObservedRunningTime="2026-01-12 13:23:34.649178798 +0000 UTC m=+1013.693397488"
Jan 12 13:23:34 crc kubenswrapper[4580]: I0112 13:23:34.704900 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-647df7b8c5-t89b4"]
Jan 12 13:23:34 crc kubenswrapper[4580]: W0112 13:23:34.709630 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb53c318_7cae_4f3e_8940_bb9760f21707.slice/crio-23ebba2695cd6b4e674655ad60159af8c9f9afbf81be24a8b1116e131572d490 WatchSource:0}: Error finding container 23ebba2695cd6b4e674655ad60159af8c9f9afbf81be24a8b1116e131572d490: Status 404 returned error can't find the container with id 23ebba2695cd6b4e674655ad60159af8c9f9afbf81be24a8b1116e131572d490
Jan 12 13:23:34 crc kubenswrapper[4580]: I0112 13:23:34.710951 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 12 13:23:34 crc kubenswrapper[4580]: I0112 13:23:34.717128 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 12 13:23:34 crc kubenswrapper[4580]: W0112 13:23:34.737503 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b20c518_9be8_47a0_82bd_c2886a86ce70.slice/crio-22a6db4d78690a560ce0c34cba97ff65bed5a3b25cda4c195286e95ba9bbb752 WatchSource:0}: Error finding container 22a6db4d78690a560ce0c34cba97ff65bed5a3b25cda4c195286e95ba9bbb752: Status 404 returned error can't find the container with id 22a6db4d78690a560ce0c34cba97ff65bed5a3b25cda4c195286e95ba9bbb752
Jan 12 13:23:34 crc kubenswrapper[4580]: I0112 13:23:34.791551 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-kkg8v"]
Jan 12 13:23:34 crc kubenswrapper[4580]: I0112 13:23:34.792614 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-kkg8v"
Jan 12 13:23:34 crc kubenswrapper[4580]: I0112 13:23:34.794364 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts"
Jan 12 13:23:34 crc kubenswrapper[4580]: I0112 13:23:34.794574 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Jan 12 13:23:34 crc kubenswrapper[4580]: I0112 13:23:34.817576 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-kkg8v"]
Jan 12 13:23:34 crc kubenswrapper[4580]: I0112 13:23:34.881960 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 12 13:23:34 crc kubenswrapper[4580]: I0112 13:23:34.883231 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/256ef446-6309-4088-9d38-35a714a34f9a-config-data\") pod \"nova-cell1-conductor-db-sync-kkg8v\" (UID: \"256ef446-6309-4088-9d38-35a714a34f9a\") " pod="openstack/nova-cell1-conductor-db-sync-kkg8v"
Jan 12 13:23:34 crc kubenswrapper[4580]: I0112 13:23:34.883264 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/256ef446-6309-4088-9d38-35a714a34f9a-scripts\") pod \"nova-cell1-conductor-db-sync-kkg8v\" (UID: \"256ef446-6309-4088-9d38-35a714a34f9a\") " pod="openstack/nova-cell1-conductor-db-sync-kkg8v"
Jan 12 13:23:34 crc kubenswrapper[4580]: I0112 13:23:34.883297 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rgwg\" (UniqueName: \"kubernetes.io/projected/256ef446-6309-4088-9d38-35a714a34f9a-kube-api-access-5rgwg\") pod \"nova-cell1-conductor-db-sync-kkg8v\" (UID: \"256ef446-6309-4088-9d38-35a714a34f9a\") " pod="openstack/nova-cell1-conductor-db-sync-kkg8v"
Jan 12 13:23:34 crc kubenswrapper[4580]: I0112 13:23:34.883330 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/256ef446-6309-4088-9d38-35a714a34f9a-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-kkg8v\" (UID: \"256ef446-6309-4088-9d38-35a714a34f9a\") " pod="openstack/nova-cell1-conductor-db-sync-kkg8v"
Jan 12 13:23:34 crc kubenswrapper[4580]: I0112 13:23:34.984450 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/256ef446-6309-4088-9d38-35a714a34f9a-config-data\") pod \"nova-cell1-conductor-db-sync-kkg8v\" (UID: \"256ef446-6309-4088-9d38-35a714a34f9a\") " pod="openstack/nova-cell1-conductor-db-sync-kkg8v"
Jan 12 13:23:34 crc kubenswrapper[4580]: I0112 13:23:34.984499 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/256ef446-6309-4088-9d38-35a714a34f9a-scripts\") pod \"nova-cell1-conductor-db-sync-kkg8v\" (UID: \"256ef446-6309-4088-9d38-35a714a34f9a\") " pod="openstack/nova-cell1-conductor-db-sync-kkg8v"
Jan 12 13:23:34 crc kubenswrapper[4580]: I0112 13:23:34.984538 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rgwg\" (UniqueName: \"kubernetes.io/projected/256ef446-6309-4088-9d38-35a714a34f9a-kube-api-access-5rgwg\") pod \"nova-cell1-conductor-db-sync-kkg8v\" (UID: \"256ef446-6309-4088-9d38-35a714a34f9a\") " pod="openstack/nova-cell1-conductor-db-sync-kkg8v"
Jan 12 13:23:34 crc kubenswrapper[4580]: I0112 13:23:34.984568 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/256ef446-6309-4088-9d38-35a714a34f9a-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-kkg8v\" (UID: \"256ef446-6309-4088-9d38-35a714a34f9a\") " pod="openstack/nova-cell1-conductor-db-sync-kkg8v"
Jan 12 13:23:34 crc kubenswrapper[4580]: I0112 13:23:34.992243 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/256ef446-6309-4088-9d38-35a714a34f9a-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-kkg8v\" (UID: \"256ef446-6309-4088-9d38-35a714a34f9a\") " pod="openstack/nova-cell1-conductor-db-sync-kkg8v"
Jan 12 13:23:34 crc kubenswrapper[4580]: I0112 13:23:34.992335 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/256ef446-6309-4088-9d38-35a714a34f9a-config-data\") pod \"nova-cell1-conductor-db-sync-kkg8v\" (UID: \"256ef446-6309-4088-9d38-35a714a34f9a\") " pod="openstack/nova-cell1-conductor-db-sync-kkg8v"
Jan 12 13:23:34 crc kubenswrapper[4580]: I0112 13:23:34.992711 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/256ef446-6309-4088-9d38-35a714a34f9a-scripts\") pod \"nova-cell1-conductor-db-sync-kkg8v\" (UID: \"256ef446-6309-4088-9d38-35a714a34f9a\") " pod="openstack/nova-cell1-conductor-db-sync-kkg8v"
Jan 12 13:23:35 crc kubenswrapper[4580]: I0112 13:23:35.007692 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rgwg\" (UniqueName: \"kubernetes.io/projected/256ef446-6309-4088-9d38-35a714a34f9a-kube-api-access-5rgwg\") pod \"nova-cell1-conductor-db-sync-kkg8v\" (UID: \"256ef446-6309-4088-9d38-35a714a34f9a\") " pod="openstack/nova-cell1-conductor-db-sync-kkg8v"
Jan 12 13:23:35 crc kubenswrapper[4580]: I0112 13:23:35.142155 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-kkg8v"
Jan 12 13:23:35 crc kubenswrapper[4580]: I0112 13:23:35.580869 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-kkg8v"]
Jan 12 13:23:35 crc kubenswrapper[4580]: W0112 13:23:35.582442 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod256ef446_6309_4088_9d38_35a714a34f9a.slice/crio-7bf3eb6a465284a9203cd485e002a72f398e19861e02ccfec64c4f2cd38509a2 WatchSource:0}: Error finding container 7bf3eb6a465284a9203cd485e002a72f398e19861e02ccfec64c4f2cd38509a2: Status 404 returned error can't find the container with id 7bf3eb6a465284a9203cd485e002a72f398e19861e02ccfec64c4f2cd38509a2
Jan 12 13:23:35 crc kubenswrapper[4580]: I0112 13:23:35.641149 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-kkg8v" event={"ID":"256ef446-6309-4088-9d38-35a714a34f9a","Type":"ContainerStarted","Data":"7bf3eb6a465284a9203cd485e002a72f398e19861e02ccfec64c4f2cd38509a2"}
Jan 12 13:23:35 crc kubenswrapper[4580]: I0112 13:23:35.647129 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1aad4093-0475-4373-8949-a803f9ed01c5","Type":"ContainerStarted","Data":"3f87d81de0f6897564281bd6f5cf925f18dac996cf3a844f7fc3391c9dc4faa0"}
Jan 12 13:23:35 crc kubenswrapper[4580]: I0112 13:23:35.653309 4580 generic.go:334] "Generic (PLEG): container finished" podID="cb53c318-7cae-4f3e-8940-bb9760f21707" containerID="f856c9c76a57e4428f0051665eea0737458fca3ab65e0c086a76fbd80168bda7" exitCode=0
Jan 12 13:23:35 crc kubenswrapper[4580]: I0112 13:23:35.653463 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647df7b8c5-t89b4" event={"ID":"cb53c318-7cae-4f3e-8940-bb9760f21707","Type":"ContainerDied","Data":"f856c9c76a57e4428f0051665eea0737458fca3ab65e0c086a76fbd80168bda7"}
Jan 12 13:23:35 crc kubenswrapper[4580]: I0112 13:23:35.653529 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647df7b8c5-t89b4" event={"ID":"cb53c318-7cae-4f3e-8940-bb9760f21707","Type":"ContainerStarted","Data":"23ebba2695cd6b4e674655ad60159af8c9f9afbf81be24a8b1116e131572d490"}
Jan 12 13:23:35 crc kubenswrapper[4580]: I0112 13:23:35.654552 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"41c144d2-91b1-4672-84a8-5dc673ac910f","Type":"ContainerStarted","Data":"4710f3c01273b7f0f0744ece2dfa2f2079464ac87a68e0ac2e93c2ca915f88f0"}
Jan 12 13:23:35 crc kubenswrapper[4580]: I0112 13:23:35.662461 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2b20c518-9be8-47a0-82bd-c2886a86ce70","Type":"ContainerStarted","Data":"22a6db4d78690a560ce0c34cba97ff65bed5a3b25cda4c195286e95ba9bbb752"}
Jan 12 13:23:35 crc kubenswrapper[4580]: I0112 13:23:35.666367 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"43f0d1f8-fc13-40e1-a084-c53c05823f8f","Type":"ContainerStarted","Data":"88d2739f45fa4a77daca0ea3a9a0a565510440ec0ecf45283cf379ce58c475da"}
Jan 12 13:23:36 crc kubenswrapper[4580]: I0112 13:23:36.680380 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-kkg8v" event={"ID":"256ef446-6309-4088-9d38-35a714a34f9a","Type":"ContainerStarted","Data":"a0e8aaa248322bf579c964ab28970f1fdc4d4f3798e900deee1409b0f18befee"}
Jan 12 13:23:36 crc kubenswrapper[4580]: I0112 13:23:36.683030 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647df7b8c5-t89b4" event={"ID":"cb53c318-7cae-4f3e-8940-bb9760f21707","Type":"ContainerStarted","Data":"13d9780c5ff88578fe9ac3dba0e3f71e80b43bb6d80b8f196e6ec90a77ca1ae3"}
Jan 12 13:23:36 crc kubenswrapper[4580]: I0112 13:23:36.684363 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-647df7b8c5-t89b4"
Jan 12 13:23:36 crc kubenswrapper[4580]: I0112 13:23:36.697721 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-kkg8v" podStartSLOduration=2.697705112 podStartE2EDuration="2.697705112s" podCreationTimestamp="2026-01-12 13:23:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:23:36.69502893 +0000 UTC m=+1015.739247621" watchObservedRunningTime="2026-01-12 13:23:36.697705112 +0000 UTC m=+1015.741923802"
Jan 12 13:23:36 crc kubenswrapper[4580]: I0112 13:23:36.711732 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-647df7b8c5-t89b4" podStartSLOduration=3.711711112 podStartE2EDuration="3.711711112s" podCreationTimestamp="2026-01-12 13:23:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:23:36.711698118 +0000 UTC m=+1015.755916808" watchObservedRunningTime="2026-01-12 13:23:36.711711112 +0000 UTC m=+1015.755929803"
Jan 12 13:23:37 crc kubenswrapper[4580]: I0112 13:23:37.217843 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 12 13:23:37 crc kubenswrapper[4580]: I0112 13:23:37.258367 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 12 13:23:37 crc kubenswrapper[4580]: I0112 13:23:37.774815 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Jan 12 13:23:38 crc kubenswrapper[4580]: I0112 13:23:38.698792 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1aad4093-0475-4373-8949-a803f9ed01c5","Type":"ContainerStarted","Data":"27003b60ffc63239792686b9c1dd69d3f1a4e699afc0b688509bf4ad17e76871"}
Jan 12 13:23:38 crc kubenswrapper[4580]: I0112 13:23:38.698888 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="1aad4093-0475-4373-8949-a803f9ed01c5" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://27003b60ffc63239792686b9c1dd69d3f1a4e699afc0b688509bf4ad17e76871" gracePeriod=30
Jan 12 13:23:38 crc kubenswrapper[4580]: I0112 13:23:38.700686 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"41c144d2-91b1-4672-84a8-5dc673ac910f","Type":"ContainerStarted","Data":"c056a2983242c19261c5b4727a70c1d136dbbff19617d6e789d470951eb6ea48"}
Jan 12 13:23:38 crc kubenswrapper[4580]: I0112 13:23:38.700747 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"41c144d2-91b1-4672-84a8-5dc673ac910f","Type":"ContainerStarted","Data":"dcefd4e249d25243f31e309413125b3b8f17e328541d61f60629030f91ab205e"}
Jan 12 13:23:38 crc kubenswrapper[4580]: I0112 13:23:38.705687 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2b20c518-9be8-47a0-82bd-c2886a86ce70","Type":"ContainerStarted","Data":"df28dedb1b712adbe021e69c89a63caf4df30dc64bafaf14abeaf775607472bf"}
Jan 12 13:23:38 crc kubenswrapper[4580]: I0112 13:23:38.707458 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"43f0d1f8-fc13-40e1-a084-c53c05823f8f","Type":"ContainerStarted","Data":"d42accccfa0b91e9d8e20bd4eaeb201f08cc5b7e618d5faeb8ac63c9a82c53b7"}
Jan 12 13:23:38 crc kubenswrapper[4580]: I0112 13:23:38.707499 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"43f0d1f8-fc13-40e1-a084-c53c05823f8f","Type":"ContainerStarted","Data":"c44df5a92946da65cc38bc005aac580372f3aaa1f64068b8f91db54d71fe754a"}
Jan 12 13:23:38 crc kubenswrapper[4580]: I0112 13:23:38.707666 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="43f0d1f8-fc13-40e1-a084-c53c05823f8f" containerName="nova-metadata-metadata" containerID="cri-o://d42accccfa0b91e9d8e20bd4eaeb201f08cc5b7e618d5faeb8ac63c9a82c53b7" gracePeriod=30
Jan 12 13:23:38 crc kubenswrapper[4580]: I0112 13:23:38.707636 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="43f0d1f8-fc13-40e1-a084-c53c05823f8f" containerName="nova-metadata-log" containerID="cri-o://c44df5a92946da65cc38bc005aac580372f3aaa1f64068b8f91db54d71fe754a" gracePeriod=30
Jan 12 13:23:38 crc kubenswrapper[4580]: I0112 13:23:38.719853 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.577647576 podStartE2EDuration="5.719835963s" podCreationTimestamp="2026-01-12 13:23:33 +0000 UTC" firstStartedPulling="2026-01-12 13:23:34.875225743 +0000 UTC m=+1013.919444433" lastFinishedPulling="2026-01-12 13:23:38.017414139 +0000 UTC m=+1017.061632820" observedRunningTime="2026-01-12 13:23:38.714857063 +0000 UTC m=+1017.759075752" watchObservedRunningTime="2026-01-12 13:23:38.719835963 +0000 UTC m=+1017.764054654"
Jan 12 13:23:38 crc kubenswrapper[4580]: I0112 13:23:38.735900 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.355962239 podStartE2EDuration="5.73588291s" podCreationTimestamp="2026-01-12 13:23:33 +0000 UTC" firstStartedPulling="2026-01-12 13:23:34.639357875 +0000 UTC m=+1013.683576564" lastFinishedPulling="2026-01-12 13:23:38.019278544 +0000 UTC m=+1017.063497235" observedRunningTime="2026-01-12 13:23:38.734305946 +0000 UTC m=+1017.778524636" watchObservedRunningTime="2026-01-12 13:23:38.73588291 +0000 UTC m=+1017.780101600"
Jan 12 13:23:38 crc kubenswrapper[4580]: I0112 13:23:38.765365 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.478515224 podStartE2EDuration="5.765345127s" podCreationTimestamp="2026-01-12 13:23:33 +0000 UTC" firstStartedPulling="2026-01-12 13:23:34.730919837 +0000 UTC m=+1013.775138527" lastFinishedPulling="2026-01-12 13:23:38.017749741 +0000 UTC m=+1017.061968430" observedRunningTime="2026-01-12 13:23:38.75131418 +0000 UTC m=+1017.795532870" watchObservedRunningTime="2026-01-12 13:23:38.765345127 +0000 UTC m=+1017.809563808"
Jan 12 13:23:38 crc kubenswrapper[4580]: I0112 13:23:38.770516 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.494502348 podStartE2EDuration="5.770509466s" podCreationTimestamp="2026-01-12 13:23:33 +0000 UTC" firstStartedPulling="2026-01-12 13:23:34.739220982 +0000 UTC m=+1013.783439673" lastFinishedPulling="2026-01-12 13:23:38.0152281 +0000 UTC m=+1017.059446791" observedRunningTime="2026-01-12 13:23:38.765608934 +0000 UTC m=+1017.809827624" watchObservedRunningTime="2026-01-12 13:23:38.770509466 +0000 UTC m=+1017.814728156"
Jan 12 13:23:39 crc kubenswrapper[4580]: I0112 13:23:39.060633 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 12 13:23:39 crc kubenswrapper[4580]: I0112 13:23:39.061152 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 12 13:23:39 crc kubenswrapper[4580]: I0112 13:23:39.080268 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Jan 12 13:23:39 crc kubenswrapper[4580]: I0112 13:23:39.174302 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Jan 12 13:23:39 crc kubenswrapper[4580]: I0112 13:23:39.222032 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 12 13:23:39 crc kubenswrapper[4580]: I0112 13:23:39.307211 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbg77\" (UniqueName: \"kubernetes.io/projected/43f0d1f8-fc13-40e1-a084-c53c05823f8f-kube-api-access-nbg77\") pod \"43f0d1f8-fc13-40e1-a084-c53c05823f8f\" (UID: \"43f0d1f8-fc13-40e1-a084-c53c05823f8f\") " Jan 12 13:23:39 crc kubenswrapper[4580]: I0112 13:23:39.307256 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43f0d1f8-fc13-40e1-a084-c53c05823f8f-combined-ca-bundle\") pod \"43f0d1f8-fc13-40e1-a084-c53c05823f8f\" (UID: \"43f0d1f8-fc13-40e1-a084-c53c05823f8f\") " Jan 12 13:23:39 crc kubenswrapper[4580]: I0112 13:23:39.307342 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43f0d1f8-fc13-40e1-a084-c53c05823f8f-config-data\") pod \"43f0d1f8-fc13-40e1-a084-c53c05823f8f\" (UID: \"43f0d1f8-fc13-40e1-a084-c53c05823f8f\") " Jan 12 13:23:39 crc kubenswrapper[4580]: I0112 13:23:39.307538 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43f0d1f8-fc13-40e1-a084-c53c05823f8f-logs\") pod \"43f0d1f8-fc13-40e1-a084-c53c05823f8f\" (UID: \"43f0d1f8-fc13-40e1-a084-c53c05823f8f\") " Jan 12 13:23:39 crc kubenswrapper[4580]: I0112 13:23:39.308059 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43f0d1f8-fc13-40e1-a084-c53c05823f8f-logs" (OuterVolumeSpecName: "logs") pod "43f0d1f8-fc13-40e1-a084-c53c05823f8f" (UID: "43f0d1f8-fc13-40e1-a084-c53c05823f8f"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 12 13:23:39 crc kubenswrapper[4580]: I0112 13:23:39.327620 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43f0d1f8-fc13-40e1-a084-c53c05823f8f-kube-api-access-nbg77" (OuterVolumeSpecName: "kube-api-access-nbg77") pod "43f0d1f8-fc13-40e1-a084-c53c05823f8f" (UID: "43f0d1f8-fc13-40e1-a084-c53c05823f8f"). InnerVolumeSpecName "kube-api-access-nbg77". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:23:39 crc kubenswrapper[4580]: I0112 13:23:39.332959 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43f0d1f8-fc13-40e1-a084-c53c05823f8f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "43f0d1f8-fc13-40e1-a084-c53c05823f8f" (UID: "43f0d1f8-fc13-40e1-a084-c53c05823f8f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:23:39 crc kubenswrapper[4580]: I0112 13:23:39.334677 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43f0d1f8-fc13-40e1-a084-c53c05823f8f-config-data" (OuterVolumeSpecName: "config-data") pod "43f0d1f8-fc13-40e1-a084-c53c05823f8f" (UID: "43f0d1f8-fc13-40e1-a084-c53c05823f8f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:23:39 crc kubenswrapper[4580]: I0112 13:23:39.409188 4580 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43f0d1f8-fc13-40e1-a084-c53c05823f8f-logs\") on node \"crc\" DevicePath \"\"" Jan 12 13:23:39 crc kubenswrapper[4580]: I0112 13:23:39.409222 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbg77\" (UniqueName: \"kubernetes.io/projected/43f0d1f8-fc13-40e1-a084-c53c05823f8f-kube-api-access-nbg77\") on node \"crc\" DevicePath \"\"" Jan 12 13:23:39 crc kubenswrapper[4580]: I0112 13:23:39.409234 4580 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43f0d1f8-fc13-40e1-a084-c53c05823f8f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 12 13:23:39 crc kubenswrapper[4580]: I0112 13:23:39.409244 4580 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43f0d1f8-fc13-40e1-a084-c53c05823f8f-config-data\") on node \"crc\" DevicePath \"\"" Jan 12 13:23:39 crc kubenswrapper[4580]: I0112 13:23:39.720534 4580 generic.go:334] "Generic (PLEG): container finished" podID="43f0d1f8-fc13-40e1-a084-c53c05823f8f" containerID="d42accccfa0b91e9d8e20bd4eaeb201f08cc5b7e618d5faeb8ac63c9a82c53b7" exitCode=0 Jan 12 13:23:39 crc kubenswrapper[4580]: I0112 13:23:39.720594 4580 generic.go:334] "Generic (PLEG): container finished" podID="43f0d1f8-fc13-40e1-a084-c53c05823f8f" containerID="c44df5a92946da65cc38bc005aac580372f3aaa1f64068b8f91db54d71fe754a" exitCode=143 Jan 12 13:23:39 crc kubenswrapper[4580]: I0112 13:23:39.722073 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 12 13:23:39 crc kubenswrapper[4580]: I0112 13:23:39.723900 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"43f0d1f8-fc13-40e1-a084-c53c05823f8f","Type":"ContainerDied","Data":"d42accccfa0b91e9d8e20bd4eaeb201f08cc5b7e618d5faeb8ac63c9a82c53b7"} Jan 12 13:23:39 crc kubenswrapper[4580]: I0112 13:23:39.723994 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"43f0d1f8-fc13-40e1-a084-c53c05823f8f","Type":"ContainerDied","Data":"c44df5a92946da65cc38bc005aac580372f3aaa1f64068b8f91db54d71fe754a"} Jan 12 13:23:39 crc kubenswrapper[4580]: I0112 13:23:39.724023 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"43f0d1f8-fc13-40e1-a084-c53c05823f8f","Type":"ContainerDied","Data":"88d2739f45fa4a77daca0ea3a9a0a565510440ec0ecf45283cf379ce58c475da"} Jan 12 13:23:39 crc kubenswrapper[4580]: I0112 13:23:39.724053 4580 scope.go:117] "RemoveContainer" containerID="d42accccfa0b91e9d8e20bd4eaeb201f08cc5b7e618d5faeb8ac63c9a82c53b7" Jan 12 13:23:39 crc kubenswrapper[4580]: I0112 13:23:39.750393 4580 scope.go:117] "RemoveContainer" containerID="c44df5a92946da65cc38bc005aac580372f3aaa1f64068b8f91db54d71fe754a" Jan 12 13:23:39 crc kubenswrapper[4580]: I0112 13:23:39.760982 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 12 13:23:39 crc kubenswrapper[4580]: I0112 13:23:39.768177 4580 scope.go:117] "RemoveContainer" containerID="d42accccfa0b91e9d8e20bd4eaeb201f08cc5b7e618d5faeb8ac63c9a82c53b7" Jan 12 13:23:39 crc kubenswrapper[4580]: E0112 13:23:39.768447 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d42accccfa0b91e9d8e20bd4eaeb201f08cc5b7e618d5faeb8ac63c9a82c53b7\": container with ID starting with d42accccfa0b91e9d8e20bd4eaeb201f08cc5b7e618d5faeb8ac63c9a82c53b7 
not found: ID does not exist" containerID="d42accccfa0b91e9d8e20bd4eaeb201f08cc5b7e618d5faeb8ac63c9a82c53b7" Jan 12 13:23:39 crc kubenswrapper[4580]: I0112 13:23:39.768495 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d42accccfa0b91e9d8e20bd4eaeb201f08cc5b7e618d5faeb8ac63c9a82c53b7"} err="failed to get container status \"d42accccfa0b91e9d8e20bd4eaeb201f08cc5b7e618d5faeb8ac63c9a82c53b7\": rpc error: code = NotFound desc = could not find container \"d42accccfa0b91e9d8e20bd4eaeb201f08cc5b7e618d5faeb8ac63c9a82c53b7\": container with ID starting with d42accccfa0b91e9d8e20bd4eaeb201f08cc5b7e618d5faeb8ac63c9a82c53b7 not found: ID does not exist" Jan 12 13:23:39 crc kubenswrapper[4580]: I0112 13:23:39.768520 4580 scope.go:117] "RemoveContainer" containerID="c44df5a92946da65cc38bc005aac580372f3aaa1f64068b8f91db54d71fe754a" Jan 12 13:23:39 crc kubenswrapper[4580]: E0112 13:23:39.769196 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c44df5a92946da65cc38bc005aac580372f3aaa1f64068b8f91db54d71fe754a\": container with ID starting with c44df5a92946da65cc38bc005aac580372f3aaa1f64068b8f91db54d71fe754a not found: ID does not exist" containerID="c44df5a92946da65cc38bc005aac580372f3aaa1f64068b8f91db54d71fe754a" Jan 12 13:23:39 crc kubenswrapper[4580]: I0112 13:23:39.769225 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c44df5a92946da65cc38bc005aac580372f3aaa1f64068b8f91db54d71fe754a"} err="failed to get container status \"c44df5a92946da65cc38bc005aac580372f3aaa1f64068b8f91db54d71fe754a\": rpc error: code = NotFound desc = could not find container \"c44df5a92946da65cc38bc005aac580372f3aaa1f64068b8f91db54d71fe754a\": container with ID starting with c44df5a92946da65cc38bc005aac580372f3aaa1f64068b8f91db54d71fe754a not found: ID does not exist" Jan 12 13:23:39 crc kubenswrapper[4580]: I0112 
13:23:39.769249 4580 scope.go:117] "RemoveContainer" containerID="d42accccfa0b91e9d8e20bd4eaeb201f08cc5b7e618d5faeb8ac63c9a82c53b7" Jan 12 13:23:39 crc kubenswrapper[4580]: I0112 13:23:39.771981 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d42accccfa0b91e9d8e20bd4eaeb201f08cc5b7e618d5faeb8ac63c9a82c53b7"} err="failed to get container status \"d42accccfa0b91e9d8e20bd4eaeb201f08cc5b7e618d5faeb8ac63c9a82c53b7\": rpc error: code = NotFound desc = could not find container \"d42accccfa0b91e9d8e20bd4eaeb201f08cc5b7e618d5faeb8ac63c9a82c53b7\": container with ID starting with d42accccfa0b91e9d8e20bd4eaeb201f08cc5b7e618d5faeb8ac63c9a82c53b7 not found: ID does not exist" Jan 12 13:23:39 crc kubenswrapper[4580]: I0112 13:23:39.772007 4580 scope.go:117] "RemoveContainer" containerID="c44df5a92946da65cc38bc005aac580372f3aaa1f64068b8f91db54d71fe754a" Jan 12 13:23:39 crc kubenswrapper[4580]: I0112 13:23:39.772441 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c44df5a92946da65cc38bc005aac580372f3aaa1f64068b8f91db54d71fe754a"} err="failed to get container status \"c44df5a92946da65cc38bc005aac580372f3aaa1f64068b8f91db54d71fe754a\": rpc error: code = NotFound desc = could not find container \"c44df5a92946da65cc38bc005aac580372f3aaa1f64068b8f91db54d71fe754a\": container with ID starting with c44df5a92946da65cc38bc005aac580372f3aaa1f64068b8f91db54d71fe754a not found: ID does not exist" Jan 12 13:23:39 crc kubenswrapper[4580]: I0112 13:23:39.775167 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 12 13:23:39 crc kubenswrapper[4580]: I0112 13:23:39.782375 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 12 13:23:39 crc kubenswrapper[4580]: E0112 13:23:39.782835 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43f0d1f8-fc13-40e1-a084-c53c05823f8f" 
containerName="nova-metadata-metadata" Jan 12 13:23:39 crc kubenswrapper[4580]: I0112 13:23:39.782852 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="43f0d1f8-fc13-40e1-a084-c53c05823f8f" containerName="nova-metadata-metadata" Jan 12 13:23:39 crc kubenswrapper[4580]: E0112 13:23:39.782880 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43f0d1f8-fc13-40e1-a084-c53c05823f8f" containerName="nova-metadata-log" Jan 12 13:23:39 crc kubenswrapper[4580]: I0112 13:23:39.782887 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="43f0d1f8-fc13-40e1-a084-c53c05823f8f" containerName="nova-metadata-log" Jan 12 13:23:39 crc kubenswrapper[4580]: I0112 13:23:39.783041 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="43f0d1f8-fc13-40e1-a084-c53c05823f8f" containerName="nova-metadata-metadata" Jan 12 13:23:39 crc kubenswrapper[4580]: I0112 13:23:39.783068 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="43f0d1f8-fc13-40e1-a084-c53c05823f8f" containerName="nova-metadata-log" Jan 12 13:23:39 crc kubenswrapper[4580]: I0112 13:23:39.783939 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 12 13:23:39 crc kubenswrapper[4580]: I0112 13:23:39.789266 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 12 13:23:39 crc kubenswrapper[4580]: I0112 13:23:39.789400 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 12 13:23:39 crc kubenswrapper[4580]: I0112 13:23:39.797178 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 12 13:23:39 crc kubenswrapper[4580]: I0112 13:23:39.919181 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbq58\" (UniqueName: \"kubernetes.io/projected/d4d1ddab-8a18-463e-ba71-d6204d371c2c-kube-api-access-wbq58\") pod \"nova-metadata-0\" (UID: \"d4d1ddab-8a18-463e-ba71-d6204d371c2c\") " pod="openstack/nova-metadata-0" Jan 12 13:23:39 crc kubenswrapper[4580]: I0112 13:23:39.919254 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4d1ddab-8a18-463e-ba71-d6204d371c2c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d4d1ddab-8a18-463e-ba71-d6204d371c2c\") " pod="openstack/nova-metadata-0" Jan 12 13:23:39 crc kubenswrapper[4580]: I0112 13:23:39.919342 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4d1ddab-8a18-463e-ba71-d6204d371c2c-config-data\") pod \"nova-metadata-0\" (UID: \"d4d1ddab-8a18-463e-ba71-d6204d371c2c\") " pod="openstack/nova-metadata-0" Jan 12 13:23:39 crc kubenswrapper[4580]: I0112 13:23:39.919372 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d4d1ddab-8a18-463e-ba71-d6204d371c2c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d4d1ddab-8a18-463e-ba71-d6204d371c2c\") " pod="openstack/nova-metadata-0" Jan 12 13:23:39 crc kubenswrapper[4580]: I0112 13:23:39.919391 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4d1ddab-8a18-463e-ba71-d6204d371c2c-logs\") pod \"nova-metadata-0\" (UID: \"d4d1ddab-8a18-463e-ba71-d6204d371c2c\") " pod="openstack/nova-metadata-0" Jan 12 13:23:40 crc kubenswrapper[4580]: I0112 13:23:40.023616 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbq58\" (UniqueName: \"kubernetes.io/projected/d4d1ddab-8a18-463e-ba71-d6204d371c2c-kube-api-access-wbq58\") pod \"nova-metadata-0\" (UID: \"d4d1ddab-8a18-463e-ba71-d6204d371c2c\") " pod="openstack/nova-metadata-0" Jan 12 13:23:40 crc kubenswrapper[4580]: I0112 13:23:40.023885 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4d1ddab-8a18-463e-ba71-d6204d371c2c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d4d1ddab-8a18-463e-ba71-d6204d371c2c\") " pod="openstack/nova-metadata-0" Jan 12 13:23:40 crc kubenswrapper[4580]: I0112 13:23:40.023947 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4d1ddab-8a18-463e-ba71-d6204d371c2c-config-data\") pod \"nova-metadata-0\" (UID: \"d4d1ddab-8a18-463e-ba71-d6204d371c2c\") " pod="openstack/nova-metadata-0" Jan 12 13:23:40 crc kubenswrapper[4580]: I0112 13:23:40.023986 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4d1ddab-8a18-463e-ba71-d6204d371c2c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d4d1ddab-8a18-463e-ba71-d6204d371c2c\") " 
pod="openstack/nova-metadata-0" Jan 12 13:23:40 crc kubenswrapper[4580]: I0112 13:23:40.024004 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4d1ddab-8a18-463e-ba71-d6204d371c2c-logs\") pod \"nova-metadata-0\" (UID: \"d4d1ddab-8a18-463e-ba71-d6204d371c2c\") " pod="openstack/nova-metadata-0" Jan 12 13:23:40 crc kubenswrapper[4580]: I0112 13:23:40.024864 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4d1ddab-8a18-463e-ba71-d6204d371c2c-logs\") pod \"nova-metadata-0\" (UID: \"d4d1ddab-8a18-463e-ba71-d6204d371c2c\") " pod="openstack/nova-metadata-0" Jan 12 13:23:40 crc kubenswrapper[4580]: I0112 13:23:40.028539 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4d1ddab-8a18-463e-ba71-d6204d371c2c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d4d1ddab-8a18-463e-ba71-d6204d371c2c\") " pod="openstack/nova-metadata-0" Jan 12 13:23:40 crc kubenswrapper[4580]: I0112 13:23:40.032079 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4d1ddab-8a18-463e-ba71-d6204d371c2c-config-data\") pod \"nova-metadata-0\" (UID: \"d4d1ddab-8a18-463e-ba71-d6204d371c2c\") " pod="openstack/nova-metadata-0" Jan 12 13:23:40 crc kubenswrapper[4580]: I0112 13:23:40.037487 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4d1ddab-8a18-463e-ba71-d6204d371c2c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d4d1ddab-8a18-463e-ba71-d6204d371c2c\") " pod="openstack/nova-metadata-0" Jan 12 13:23:40 crc kubenswrapper[4580]: I0112 13:23:40.040335 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbq58\" (UniqueName: 
\"kubernetes.io/projected/d4d1ddab-8a18-463e-ba71-d6204d371c2c-kube-api-access-wbq58\") pod \"nova-metadata-0\" (UID: \"d4d1ddab-8a18-463e-ba71-d6204d371c2c\") " pod="openstack/nova-metadata-0" Jan 12 13:23:40 crc kubenswrapper[4580]: I0112 13:23:40.106607 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 12 13:23:40 crc kubenswrapper[4580]: I0112 13:23:40.593514 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 12 13:23:40 crc kubenswrapper[4580]: W0112 13:23:40.601343 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4d1ddab_8a18_463e_ba71_d6204d371c2c.slice/crio-1aae941f487af0f3c176a66659fe9ba819c72ba181e84c6f0703cafe672b01b2 WatchSource:0}: Error finding container 1aae941f487af0f3c176a66659fe9ba819c72ba181e84c6f0703cafe672b01b2: Status 404 returned error can't find the container with id 1aae941f487af0f3c176a66659fe9ba819c72ba181e84c6f0703cafe672b01b2 Jan 12 13:23:40 crc kubenswrapper[4580]: I0112 13:23:40.739109 4580 generic.go:334] "Generic (PLEG): container finished" podID="256ef446-6309-4088-9d38-35a714a34f9a" containerID="a0e8aaa248322bf579c964ab28970f1fdc4d4f3798e900deee1409b0f18befee" exitCode=0 Jan 12 13:23:40 crc kubenswrapper[4580]: I0112 13:23:40.739442 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-kkg8v" event={"ID":"256ef446-6309-4088-9d38-35a714a34f9a","Type":"ContainerDied","Data":"a0e8aaa248322bf579c964ab28970f1fdc4d4f3798e900deee1409b0f18befee"} Jan 12 13:23:40 crc kubenswrapper[4580]: I0112 13:23:40.741672 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d4d1ddab-8a18-463e-ba71-d6204d371c2c","Type":"ContainerStarted","Data":"1aae941f487af0f3c176a66659fe9ba819c72ba181e84c6f0703cafe672b01b2"} Jan 12 13:23:41 crc kubenswrapper[4580]: I0112 13:23:41.295302 
4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43f0d1f8-fc13-40e1-a084-c53c05823f8f" path="/var/lib/kubelet/pods/43f0d1f8-fc13-40e1-a084-c53c05823f8f/volumes" Jan 12 13:23:41 crc kubenswrapper[4580]: I0112 13:23:41.460195 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 12 13:23:41 crc kubenswrapper[4580]: I0112 13:23:41.460671 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="2add073b-c55e-4910-a310-4ad61f763ed9" containerName="kube-state-metrics" containerID="cri-o://371b63c6fb26bd523423da3db307f5844f7a2fdd13ae32bd440809e9702fab99" gracePeriod=30 Jan 12 13:23:41 crc kubenswrapper[4580]: I0112 13:23:41.750995 4580 generic.go:334] "Generic (PLEG): container finished" podID="2add073b-c55e-4910-a310-4ad61f763ed9" containerID="371b63c6fb26bd523423da3db307f5844f7a2fdd13ae32bd440809e9702fab99" exitCode=2 Jan 12 13:23:41 crc kubenswrapper[4580]: I0112 13:23:41.751069 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2add073b-c55e-4910-a310-4ad61f763ed9","Type":"ContainerDied","Data":"371b63c6fb26bd523423da3db307f5844f7a2fdd13ae32bd440809e9702fab99"} Jan 12 13:23:41 crc kubenswrapper[4580]: I0112 13:23:41.753265 4580 generic.go:334] "Generic (PLEG): container finished" podID="2027dbc4-0cd9-405d-8f11-9c57de3d47e6" containerID="84298754a644471a5c1aa490dff4d6db7074699de21f30bee275f88b63306e48" exitCode=0 Jan 12 13:23:41 crc kubenswrapper[4580]: I0112 13:23:41.753335 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-fsx29" event={"ID":"2027dbc4-0cd9-405d-8f11-9c57de3d47e6","Type":"ContainerDied","Data":"84298754a644471a5c1aa490dff4d6db7074699de21f30bee275f88b63306e48"} Jan 12 13:23:41 crc kubenswrapper[4580]: I0112 13:23:41.756987 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"d4d1ddab-8a18-463e-ba71-d6204d371c2c","Type":"ContainerStarted","Data":"a4d962c05ca4a529eeae6e4bde146f8758b7063fe588d4c194a88b0bc86098b6"} Jan 12 13:23:41 crc kubenswrapper[4580]: I0112 13:23:41.757045 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d4d1ddab-8a18-463e-ba71-d6204d371c2c","Type":"ContainerStarted","Data":"40acbd85375d84c165e23167dd472b971cce7f2851d17886b83f9420e9386626"} Jan 12 13:23:41 crc kubenswrapper[4580]: I0112 13:23:41.785677 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.785660109 podStartE2EDuration="2.785660109s" podCreationTimestamp="2026-01-12 13:23:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:23:41.782779955 +0000 UTC m=+1020.826998645" watchObservedRunningTime="2026-01-12 13:23:41.785660109 +0000 UTC m=+1020.829878799" Jan 12 13:23:41 crc kubenswrapper[4580]: I0112 13:23:41.908253 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 12 13:23:41 crc kubenswrapper[4580]: I0112 13:23:41.969915 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6rmg\" (UniqueName: \"kubernetes.io/projected/2add073b-c55e-4910-a310-4ad61f763ed9-kube-api-access-x6rmg\") pod \"2add073b-c55e-4910-a310-4ad61f763ed9\" (UID: \"2add073b-c55e-4910-a310-4ad61f763ed9\") " Jan 12 13:23:41 crc kubenswrapper[4580]: I0112 13:23:41.975487 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2add073b-c55e-4910-a310-4ad61f763ed9-kube-api-access-x6rmg" (OuterVolumeSpecName: "kube-api-access-x6rmg") pod "2add073b-c55e-4910-a310-4ad61f763ed9" (UID: "2add073b-c55e-4910-a310-4ad61f763ed9"). InnerVolumeSpecName "kube-api-access-x6rmg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:23:42 crc kubenswrapper[4580]: I0112 13:23:42.016196 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-kkg8v" Jan 12 13:23:42 crc kubenswrapper[4580]: I0112 13:23:42.071051 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/256ef446-6309-4088-9d38-35a714a34f9a-scripts\") pod \"256ef446-6309-4088-9d38-35a714a34f9a\" (UID: \"256ef446-6309-4088-9d38-35a714a34f9a\") " Jan 12 13:23:42 crc kubenswrapper[4580]: I0112 13:23:42.071159 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/256ef446-6309-4088-9d38-35a714a34f9a-config-data\") pod \"256ef446-6309-4088-9d38-35a714a34f9a\" (UID: \"256ef446-6309-4088-9d38-35a714a34f9a\") " Jan 12 13:23:42 crc kubenswrapper[4580]: I0112 13:23:42.071216 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rgwg\" (UniqueName: \"kubernetes.io/projected/256ef446-6309-4088-9d38-35a714a34f9a-kube-api-access-5rgwg\") pod \"256ef446-6309-4088-9d38-35a714a34f9a\" (UID: \"256ef446-6309-4088-9d38-35a714a34f9a\") " Jan 12 13:23:42 crc kubenswrapper[4580]: I0112 13:23:42.071297 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/256ef446-6309-4088-9d38-35a714a34f9a-combined-ca-bundle\") pod \"256ef446-6309-4088-9d38-35a714a34f9a\" (UID: \"256ef446-6309-4088-9d38-35a714a34f9a\") " Jan 12 13:23:42 crc kubenswrapper[4580]: I0112 13:23:42.072532 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6rmg\" (UniqueName: \"kubernetes.io/projected/2add073b-c55e-4910-a310-4ad61f763ed9-kube-api-access-x6rmg\") on node \"crc\" DevicePath \"\"" Jan 12 13:23:42 crc kubenswrapper[4580]: I0112 
13:23:42.076424 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/256ef446-6309-4088-9d38-35a714a34f9a-kube-api-access-5rgwg" (OuterVolumeSpecName: "kube-api-access-5rgwg") pod "256ef446-6309-4088-9d38-35a714a34f9a" (UID: "256ef446-6309-4088-9d38-35a714a34f9a"). InnerVolumeSpecName "kube-api-access-5rgwg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:23:42 crc kubenswrapper[4580]: I0112 13:23:42.079489 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/256ef446-6309-4088-9d38-35a714a34f9a-scripts" (OuterVolumeSpecName: "scripts") pod "256ef446-6309-4088-9d38-35a714a34f9a" (UID: "256ef446-6309-4088-9d38-35a714a34f9a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:23:42 crc kubenswrapper[4580]: I0112 13:23:42.108600 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/256ef446-6309-4088-9d38-35a714a34f9a-config-data" (OuterVolumeSpecName: "config-data") pod "256ef446-6309-4088-9d38-35a714a34f9a" (UID: "256ef446-6309-4088-9d38-35a714a34f9a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:23:42 crc kubenswrapper[4580]: I0112 13:23:42.113250 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/256ef446-6309-4088-9d38-35a714a34f9a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "256ef446-6309-4088-9d38-35a714a34f9a" (UID: "256ef446-6309-4088-9d38-35a714a34f9a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:23:42 crc kubenswrapper[4580]: I0112 13:23:42.175034 4580 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/256ef446-6309-4088-9d38-35a714a34f9a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 12 13:23:42 crc kubenswrapper[4580]: I0112 13:23:42.175061 4580 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/256ef446-6309-4088-9d38-35a714a34f9a-scripts\") on node \"crc\" DevicePath \"\"" Jan 12 13:23:42 crc kubenswrapper[4580]: I0112 13:23:42.175070 4580 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/256ef446-6309-4088-9d38-35a714a34f9a-config-data\") on node \"crc\" DevicePath \"\"" Jan 12 13:23:42 crc kubenswrapper[4580]: I0112 13:23:42.175080 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rgwg\" (UniqueName: \"kubernetes.io/projected/256ef446-6309-4088-9d38-35a714a34f9a-kube-api-access-5rgwg\") on node \"crc\" DevicePath \"\"" Jan 12 13:23:42 crc kubenswrapper[4580]: I0112 13:23:42.766705 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 12 13:23:42 crc kubenswrapper[4580]: I0112 13:23:42.766705 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2add073b-c55e-4910-a310-4ad61f763ed9","Type":"ContainerDied","Data":"a1fe86513259d17268ad2a209b35ef779d3bc0432776ca9d0e79249685d24ec0"} Jan 12 13:23:42 crc kubenswrapper[4580]: I0112 13:23:42.766886 4580 scope.go:117] "RemoveContainer" containerID="371b63c6fb26bd523423da3db307f5844f7a2fdd13ae32bd440809e9702fab99" Jan 12 13:23:42 crc kubenswrapper[4580]: I0112 13:23:42.769519 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-kkg8v" event={"ID":"256ef446-6309-4088-9d38-35a714a34f9a","Type":"ContainerDied","Data":"7bf3eb6a465284a9203cd485e002a72f398e19861e02ccfec64c4f2cd38509a2"} Jan 12 13:23:42 crc kubenswrapper[4580]: I0112 13:23:42.769562 4580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7bf3eb6a465284a9203cd485e002a72f398e19861e02ccfec64c4f2cd38509a2" Jan 12 13:23:42 crc kubenswrapper[4580]: I0112 13:23:42.769531 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-kkg8v" Jan 12 13:23:42 crc kubenswrapper[4580]: I0112 13:23:42.812710 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 12 13:23:42 crc kubenswrapper[4580]: I0112 13:23:42.850154 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 12 13:23:42 crc kubenswrapper[4580]: I0112 13:23:42.857623 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 12 13:23:42 crc kubenswrapper[4580]: E0112 13:23:42.858079 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2add073b-c55e-4910-a310-4ad61f763ed9" containerName="kube-state-metrics" Jan 12 13:23:42 crc kubenswrapper[4580]: I0112 13:23:42.858110 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="2add073b-c55e-4910-a310-4ad61f763ed9" containerName="kube-state-metrics" Jan 12 13:23:42 crc kubenswrapper[4580]: E0112 13:23:42.858164 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="256ef446-6309-4088-9d38-35a714a34f9a" containerName="nova-cell1-conductor-db-sync" Jan 12 13:23:42 crc kubenswrapper[4580]: I0112 13:23:42.858172 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="256ef446-6309-4088-9d38-35a714a34f9a" containerName="nova-cell1-conductor-db-sync" Jan 12 13:23:42 crc kubenswrapper[4580]: I0112 13:23:42.858355 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="256ef446-6309-4088-9d38-35a714a34f9a" containerName="nova-cell1-conductor-db-sync" Jan 12 13:23:42 crc kubenswrapper[4580]: I0112 13:23:42.858387 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="2add073b-c55e-4910-a310-4ad61f763ed9" containerName="kube-state-metrics" Jan 12 13:23:42 crc kubenswrapper[4580]: I0112 13:23:42.859162 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 12 13:23:42 crc kubenswrapper[4580]: I0112 13:23:42.866664 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 12 13:23:42 crc kubenswrapper[4580]: I0112 13:23:42.866883 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 12 13:23:42 crc kubenswrapper[4580]: I0112 13:23:42.871523 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 12 13:23:42 crc kubenswrapper[4580]: I0112 13:23:42.889208 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 12 13:23:42 crc kubenswrapper[4580]: I0112 13:23:42.890644 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 12 13:23:42 crc kubenswrapper[4580]: I0112 13:23:42.893377 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 12 13:23:42 crc kubenswrapper[4580]: I0112 13:23:42.894323 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 12 13:23:42 crc kubenswrapper[4580]: I0112 13:23:42.994002 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/4651196b-71ee-434b-bb63-e77f16c744e4-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"4651196b-71ee-434b-bb63-e77f16c744e4\") " pod="openstack/kube-state-metrics-0" Jan 12 13:23:42 crc kubenswrapper[4580]: I0112 13:23:42.994360 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5473becf-161f-49fe-86c0-079d4a9d80dc-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: 
\"5473becf-161f-49fe-86c0-079d4a9d80dc\") " pod="openstack/nova-cell1-conductor-0" Jan 12 13:23:42 crc kubenswrapper[4580]: I0112 13:23:42.994395 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/4651196b-71ee-434b-bb63-e77f16c744e4-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"4651196b-71ee-434b-bb63-e77f16c744e4\") " pod="openstack/kube-state-metrics-0" Jan 12 13:23:42 crc kubenswrapper[4580]: I0112 13:23:42.994456 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6t88j\" (UniqueName: \"kubernetes.io/projected/4651196b-71ee-434b-bb63-e77f16c744e4-kube-api-access-6t88j\") pod \"kube-state-metrics-0\" (UID: \"4651196b-71ee-434b-bb63-e77f16c744e4\") " pod="openstack/kube-state-metrics-0" Jan 12 13:23:42 crc kubenswrapper[4580]: I0112 13:23:42.994571 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5473becf-161f-49fe-86c0-079d4a9d80dc-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"5473becf-161f-49fe-86c0-079d4a9d80dc\") " pod="openstack/nova-cell1-conductor-0" Jan 12 13:23:42 crc kubenswrapper[4580]: I0112 13:23:42.994638 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4651196b-71ee-434b-bb63-e77f16c744e4-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"4651196b-71ee-434b-bb63-e77f16c744e4\") " pod="openstack/kube-state-metrics-0" Jan 12 13:23:42 crc kubenswrapper[4580]: I0112 13:23:42.994749 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2g86x\" (UniqueName: \"kubernetes.io/projected/5473becf-161f-49fe-86c0-079d4a9d80dc-kube-api-access-2g86x\") pod 
\"nova-cell1-conductor-0\" (UID: \"5473becf-161f-49fe-86c0-079d4a9d80dc\") " pod="openstack/nova-cell1-conductor-0" Jan 12 13:23:43 crc kubenswrapper[4580]: I0112 13:23:43.025358 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 12 13:23:43 crc kubenswrapper[4580]: I0112 13:23:43.025673 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="291e23aa-9411-437f-b6f4-153af1cb50e1" containerName="proxy-httpd" containerID="cri-o://8396c0b2265f5e58dbaf0876b1567665e087e08a74aa3b1d728c8cfc8a5413be" gracePeriod=30 Jan 12 13:23:43 crc kubenswrapper[4580]: I0112 13:23:43.025686 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="291e23aa-9411-437f-b6f4-153af1cb50e1" containerName="sg-core" containerID="cri-o://d018fbe05e8acc6b74837bb042a6b725901c89ef267ff742c376c35430602c7e" gracePeriod=30 Jan 12 13:23:43 crc kubenswrapper[4580]: I0112 13:23:43.025717 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="291e23aa-9411-437f-b6f4-153af1cb50e1" containerName="ceilometer-notification-agent" containerID="cri-o://1d1bc6d5ce33679b74173dde51a26640ed59de6b6e5b9b384cebc98ecc61b169" gracePeriod=30 Jan 12 13:23:43 crc kubenswrapper[4580]: I0112 13:23:43.025883 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="291e23aa-9411-437f-b6f4-153af1cb50e1" containerName="ceilometer-central-agent" containerID="cri-o://bdcae0237b0a1f5c47811c4f32c5373ed0b836ddfa608721615e29f04b4178d7" gracePeriod=30 Jan 12 13:23:43 crc kubenswrapper[4580]: I0112 13:23:43.097345 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5473becf-161f-49fe-86c0-079d4a9d80dc-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"5473becf-161f-49fe-86c0-079d4a9d80dc\") " 
pod="openstack/nova-cell1-conductor-0" Jan 12 13:23:43 crc kubenswrapper[4580]: I0112 13:23:43.097414 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4651196b-71ee-434b-bb63-e77f16c744e4-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"4651196b-71ee-434b-bb63-e77f16c744e4\") " pod="openstack/kube-state-metrics-0" Jan 12 13:23:43 crc kubenswrapper[4580]: I0112 13:23:43.097483 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2g86x\" (UniqueName: \"kubernetes.io/projected/5473becf-161f-49fe-86c0-079d4a9d80dc-kube-api-access-2g86x\") pod \"nova-cell1-conductor-0\" (UID: \"5473becf-161f-49fe-86c0-079d4a9d80dc\") " pod="openstack/nova-cell1-conductor-0" Jan 12 13:23:43 crc kubenswrapper[4580]: I0112 13:23:43.097601 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/4651196b-71ee-434b-bb63-e77f16c744e4-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"4651196b-71ee-434b-bb63-e77f16c744e4\") " pod="openstack/kube-state-metrics-0" Jan 12 13:23:43 crc kubenswrapper[4580]: I0112 13:23:43.097664 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5473becf-161f-49fe-86c0-079d4a9d80dc-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"5473becf-161f-49fe-86c0-079d4a9d80dc\") " pod="openstack/nova-cell1-conductor-0" Jan 12 13:23:43 crc kubenswrapper[4580]: I0112 13:23:43.097692 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/4651196b-71ee-434b-bb63-e77f16c744e4-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"4651196b-71ee-434b-bb63-e77f16c744e4\") " pod="openstack/kube-state-metrics-0" Jan 12 
13:23:43 crc kubenswrapper[4580]: I0112 13:23:43.097737 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6t88j\" (UniqueName: \"kubernetes.io/projected/4651196b-71ee-434b-bb63-e77f16c744e4-kube-api-access-6t88j\") pod \"kube-state-metrics-0\" (UID: \"4651196b-71ee-434b-bb63-e77f16c744e4\") " pod="openstack/kube-state-metrics-0" Jan 12 13:23:43 crc kubenswrapper[4580]: I0112 13:23:43.104120 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5473becf-161f-49fe-86c0-079d4a9d80dc-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"5473becf-161f-49fe-86c0-079d4a9d80dc\") " pod="openstack/nova-cell1-conductor-0" Jan 12 13:23:43 crc kubenswrapper[4580]: I0112 13:23:43.104655 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/4651196b-71ee-434b-bb63-e77f16c744e4-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"4651196b-71ee-434b-bb63-e77f16c744e4\") " pod="openstack/kube-state-metrics-0" Jan 12 13:23:43 crc kubenswrapper[4580]: I0112 13:23:43.104696 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/4651196b-71ee-434b-bb63-e77f16c744e4-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"4651196b-71ee-434b-bb63-e77f16c744e4\") " pod="openstack/kube-state-metrics-0" Jan 12 13:23:43 crc kubenswrapper[4580]: I0112 13:23:43.105397 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4651196b-71ee-434b-bb63-e77f16c744e4-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"4651196b-71ee-434b-bb63-e77f16c744e4\") " pod="openstack/kube-state-metrics-0" Jan 12 13:23:43 crc kubenswrapper[4580]: I0112 13:23:43.106637 4580 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5473becf-161f-49fe-86c0-079d4a9d80dc-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"5473becf-161f-49fe-86c0-079d4a9d80dc\") " pod="openstack/nova-cell1-conductor-0" Jan 12 13:23:43 crc kubenswrapper[4580]: I0112 13:23:43.112686 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6t88j\" (UniqueName: \"kubernetes.io/projected/4651196b-71ee-434b-bb63-e77f16c744e4-kube-api-access-6t88j\") pod \"kube-state-metrics-0\" (UID: \"4651196b-71ee-434b-bb63-e77f16c744e4\") " pod="openstack/kube-state-metrics-0" Jan 12 13:23:43 crc kubenswrapper[4580]: I0112 13:23:43.114201 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2g86x\" (UniqueName: \"kubernetes.io/projected/5473becf-161f-49fe-86c0-079d4a9d80dc-kube-api-access-2g86x\") pod \"nova-cell1-conductor-0\" (UID: \"5473becf-161f-49fe-86c0-079d4a9d80dc\") " pod="openstack/nova-cell1-conductor-0" Jan 12 13:23:43 crc kubenswrapper[4580]: I0112 13:23:43.176864 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 12 13:23:43 crc kubenswrapper[4580]: I0112 13:23:43.210690 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 12 13:23:43 crc kubenswrapper[4580]: I0112 13:23:43.277853 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-fsx29" Jan 12 13:23:43 crc kubenswrapper[4580]: I0112 13:23:43.296376 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2add073b-c55e-4910-a310-4ad61f763ed9" path="/var/lib/kubelet/pods/2add073b-c55e-4910-a310-4ad61f763ed9/volumes" Jan 12 13:23:43 crc kubenswrapper[4580]: I0112 13:23:43.411671 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2027dbc4-0cd9-405d-8f11-9c57de3d47e6-combined-ca-bundle\") pod \"2027dbc4-0cd9-405d-8f11-9c57de3d47e6\" (UID: \"2027dbc4-0cd9-405d-8f11-9c57de3d47e6\") " Jan 12 13:23:43 crc kubenswrapper[4580]: I0112 13:23:43.411758 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dwzg\" (UniqueName: \"kubernetes.io/projected/2027dbc4-0cd9-405d-8f11-9c57de3d47e6-kube-api-access-5dwzg\") pod \"2027dbc4-0cd9-405d-8f11-9c57de3d47e6\" (UID: \"2027dbc4-0cd9-405d-8f11-9c57de3d47e6\") " Jan 12 13:23:43 crc kubenswrapper[4580]: I0112 13:23:43.411827 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2027dbc4-0cd9-405d-8f11-9c57de3d47e6-config-data\") pod \"2027dbc4-0cd9-405d-8f11-9c57de3d47e6\" (UID: \"2027dbc4-0cd9-405d-8f11-9c57de3d47e6\") " Jan 12 13:23:43 crc kubenswrapper[4580]: I0112 13:23:43.411994 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2027dbc4-0cd9-405d-8f11-9c57de3d47e6-scripts\") pod \"2027dbc4-0cd9-405d-8f11-9c57de3d47e6\" (UID: \"2027dbc4-0cd9-405d-8f11-9c57de3d47e6\") " Jan 12 13:23:43 crc kubenswrapper[4580]: I0112 13:23:43.417345 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2027dbc4-0cd9-405d-8f11-9c57de3d47e6-kube-api-access-5dwzg" (OuterVolumeSpecName: 
"kube-api-access-5dwzg") pod "2027dbc4-0cd9-405d-8f11-9c57de3d47e6" (UID: "2027dbc4-0cd9-405d-8f11-9c57de3d47e6"). InnerVolumeSpecName "kube-api-access-5dwzg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:23:43 crc kubenswrapper[4580]: I0112 13:23:43.419857 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2027dbc4-0cd9-405d-8f11-9c57de3d47e6-scripts" (OuterVolumeSpecName: "scripts") pod "2027dbc4-0cd9-405d-8f11-9c57de3d47e6" (UID: "2027dbc4-0cd9-405d-8f11-9c57de3d47e6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:23:43 crc kubenswrapper[4580]: I0112 13:23:43.437060 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2027dbc4-0cd9-405d-8f11-9c57de3d47e6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2027dbc4-0cd9-405d-8f11-9c57de3d47e6" (UID: "2027dbc4-0cd9-405d-8f11-9c57de3d47e6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:23:43 crc kubenswrapper[4580]: I0112 13:23:43.439556 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2027dbc4-0cd9-405d-8f11-9c57de3d47e6-config-data" (OuterVolumeSpecName: "config-data") pod "2027dbc4-0cd9-405d-8f11-9c57de3d47e6" (UID: "2027dbc4-0cd9-405d-8f11-9c57de3d47e6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:23:43 crc kubenswrapper[4580]: I0112 13:23:43.515004 4580 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2027dbc4-0cd9-405d-8f11-9c57de3d47e6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 12 13:23:43 crc kubenswrapper[4580]: I0112 13:23:43.515054 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dwzg\" (UniqueName: \"kubernetes.io/projected/2027dbc4-0cd9-405d-8f11-9c57de3d47e6-kube-api-access-5dwzg\") on node \"crc\" DevicePath \"\"" Jan 12 13:23:43 crc kubenswrapper[4580]: I0112 13:23:43.515069 4580 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2027dbc4-0cd9-405d-8f11-9c57de3d47e6-config-data\") on node \"crc\" DevicePath \"\"" Jan 12 13:23:43 crc kubenswrapper[4580]: I0112 13:23:43.515082 4580 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2027dbc4-0cd9-405d-8f11-9c57de3d47e6-scripts\") on node \"crc\" DevicePath \"\"" Jan 12 13:23:43 crc kubenswrapper[4580]: I0112 13:23:43.598443 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 12 13:23:43 crc kubenswrapper[4580]: W0112 13:23:43.598553 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4651196b_71ee_434b_bb63_e77f16c744e4.slice/crio-56b6e1a3ccf37980cee89b154369d5021c6d81efb2e2ffa7ee8c1238c637c696 WatchSource:0}: Error finding container 56b6e1a3ccf37980cee89b154369d5021c6d81efb2e2ffa7ee8c1238c637c696: Status 404 returned error can't find the container with id 56b6e1a3ccf37980cee89b154369d5021c6d81efb2e2ffa7ee8c1238c637c696 Jan 12 13:23:43 crc kubenswrapper[4580]: I0112 13:23:43.681163 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 12 13:23:43 crc 
kubenswrapper[4580]: W0112 13:23:43.706666 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5473becf_161f_49fe_86c0_079d4a9d80dc.slice/crio-47dd6d48107ff2e4baee7271fe2b65edd5376b6f1b53b1340fd17a2752172c9e WatchSource:0}: Error finding container 47dd6d48107ff2e4baee7271fe2b65edd5376b6f1b53b1340fd17a2752172c9e: Status 404 returned error can't find the container with id 47dd6d48107ff2e4baee7271fe2b65edd5376b6f1b53b1340fd17a2752172c9e Jan 12 13:23:43 crc kubenswrapper[4580]: I0112 13:23:43.778333 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"5473becf-161f-49fe-86c0-079d4a9d80dc","Type":"ContainerStarted","Data":"47dd6d48107ff2e4baee7271fe2b65edd5376b6f1b53b1340fd17a2752172c9e"} Jan 12 13:23:43 crc kubenswrapper[4580]: I0112 13:23:43.781726 4580 generic.go:334] "Generic (PLEG): container finished" podID="291e23aa-9411-437f-b6f4-153af1cb50e1" containerID="8396c0b2265f5e58dbaf0876b1567665e087e08a74aa3b1d728c8cfc8a5413be" exitCode=0 Jan 12 13:23:43 crc kubenswrapper[4580]: I0112 13:23:43.781749 4580 generic.go:334] "Generic (PLEG): container finished" podID="291e23aa-9411-437f-b6f4-153af1cb50e1" containerID="d018fbe05e8acc6b74837bb042a6b725901c89ef267ff742c376c35430602c7e" exitCode=2 Jan 12 13:23:43 crc kubenswrapper[4580]: I0112 13:23:43.781757 4580 generic.go:334] "Generic (PLEG): container finished" podID="291e23aa-9411-437f-b6f4-153af1cb50e1" containerID="1d1bc6d5ce33679b74173dde51a26640ed59de6b6e5b9b384cebc98ecc61b169" exitCode=0 Jan 12 13:23:43 crc kubenswrapper[4580]: I0112 13:23:43.781765 4580 generic.go:334] "Generic (PLEG): container finished" podID="291e23aa-9411-437f-b6f4-153af1cb50e1" containerID="bdcae0237b0a1f5c47811c4f32c5373ed0b836ddfa608721615e29f04b4178d7" exitCode=0 Jan 12 13:23:43 crc kubenswrapper[4580]: I0112 13:23:43.781819 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"291e23aa-9411-437f-b6f4-153af1cb50e1","Type":"ContainerDied","Data":"8396c0b2265f5e58dbaf0876b1567665e087e08a74aa3b1d728c8cfc8a5413be"} Jan 12 13:23:43 crc kubenswrapper[4580]: I0112 13:23:43.781865 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"291e23aa-9411-437f-b6f4-153af1cb50e1","Type":"ContainerDied","Data":"d018fbe05e8acc6b74837bb042a6b725901c89ef267ff742c376c35430602c7e"} Jan 12 13:23:43 crc kubenswrapper[4580]: I0112 13:23:43.781879 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"291e23aa-9411-437f-b6f4-153af1cb50e1","Type":"ContainerDied","Data":"1d1bc6d5ce33679b74173dde51a26640ed59de6b6e5b9b384cebc98ecc61b169"} Jan 12 13:23:43 crc kubenswrapper[4580]: I0112 13:23:43.781890 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"291e23aa-9411-437f-b6f4-153af1cb50e1","Type":"ContainerDied","Data":"bdcae0237b0a1f5c47811c4f32c5373ed0b836ddfa608721615e29f04b4178d7"} Jan 12 13:23:43 crc kubenswrapper[4580]: I0112 13:23:43.784505 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4651196b-71ee-434b-bb63-e77f16c744e4","Type":"ContainerStarted","Data":"56b6e1a3ccf37980cee89b154369d5021c6d81efb2e2ffa7ee8c1238c637c696"} Jan 12 13:23:43 crc kubenswrapper[4580]: I0112 13:23:43.787661 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-fsx29" event={"ID":"2027dbc4-0cd9-405d-8f11-9c57de3d47e6","Type":"ContainerDied","Data":"dd69bedf17f7a336c94efd8d2f053153a3b08ed7c0364027dfb9e85e59711270"} Jan 12 13:23:43 crc kubenswrapper[4580]: I0112 13:23:43.787688 4580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd69bedf17f7a336c94efd8d2f053153a3b08ed7c0364027dfb9e85e59711270" Jan 12 13:23:43 crc kubenswrapper[4580]: I0112 13:23:43.787734 4580 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-fsx29" Jan 12 13:23:43 crc kubenswrapper[4580]: I0112 13:23:43.838440 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 12 13:23:43 crc kubenswrapper[4580]: I0112 13:23:43.893579 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 12 13:23:43 crc kubenswrapper[4580]: I0112 13:23:43.893908 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 12 13:23:43 crc kubenswrapper[4580]: I0112 13:23:43.922149 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkzz8\" (UniqueName: \"kubernetes.io/projected/291e23aa-9411-437f-b6f4-153af1cb50e1-kube-api-access-qkzz8\") pod \"291e23aa-9411-437f-b6f4-153af1cb50e1\" (UID: \"291e23aa-9411-437f-b6f4-153af1cb50e1\") " Jan 12 13:23:43 crc kubenswrapper[4580]: I0112 13:23:43.922246 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/291e23aa-9411-437f-b6f4-153af1cb50e1-run-httpd\") pod \"291e23aa-9411-437f-b6f4-153af1cb50e1\" (UID: \"291e23aa-9411-437f-b6f4-153af1cb50e1\") " Jan 12 13:23:43 crc kubenswrapper[4580]: I0112 13:23:43.922372 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/291e23aa-9411-437f-b6f4-153af1cb50e1-scripts\") pod \"291e23aa-9411-437f-b6f4-153af1cb50e1\" (UID: \"291e23aa-9411-437f-b6f4-153af1cb50e1\") " Jan 12 13:23:43 crc kubenswrapper[4580]: I0112 13:23:43.922457 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/291e23aa-9411-437f-b6f4-153af1cb50e1-log-httpd\") pod \"291e23aa-9411-437f-b6f4-153af1cb50e1\" (UID: \"291e23aa-9411-437f-b6f4-153af1cb50e1\") " 
Jan 12 13:23:43 crc kubenswrapper[4580]: I0112 13:23:43.922545 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/291e23aa-9411-437f-b6f4-153af1cb50e1-config-data\") pod \"291e23aa-9411-437f-b6f4-153af1cb50e1\" (UID: \"291e23aa-9411-437f-b6f4-153af1cb50e1\") " Jan 12 13:23:43 crc kubenswrapper[4580]: I0112 13:23:43.922572 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/291e23aa-9411-437f-b6f4-153af1cb50e1-combined-ca-bundle\") pod \"291e23aa-9411-437f-b6f4-153af1cb50e1\" (UID: \"291e23aa-9411-437f-b6f4-153af1cb50e1\") " Jan 12 13:23:43 crc kubenswrapper[4580]: I0112 13:23:43.922590 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/291e23aa-9411-437f-b6f4-153af1cb50e1-sg-core-conf-yaml\") pod \"291e23aa-9411-437f-b6f4-153af1cb50e1\" (UID: \"291e23aa-9411-437f-b6f4-153af1cb50e1\") " Jan 12 13:23:43 crc kubenswrapper[4580]: I0112 13:23:43.928972 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/291e23aa-9411-437f-b6f4-153af1cb50e1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "291e23aa-9411-437f-b6f4-153af1cb50e1" (UID: "291e23aa-9411-437f-b6f4-153af1cb50e1"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 12 13:23:43 crc kubenswrapper[4580]: I0112 13:23:43.930095 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/291e23aa-9411-437f-b6f4-153af1cb50e1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "291e23aa-9411-437f-b6f4-153af1cb50e1" (UID: "291e23aa-9411-437f-b6f4-153af1cb50e1"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 12 13:23:43 crc kubenswrapper[4580]: I0112 13:23:43.933658 4580 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/291e23aa-9411-437f-b6f4-153af1cb50e1-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 12 13:23:43 crc kubenswrapper[4580]: I0112 13:23:43.933697 4580 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/291e23aa-9411-437f-b6f4-153af1cb50e1-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 12 13:23:43 crc kubenswrapper[4580]: I0112 13:23:43.946144 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/291e23aa-9411-437f-b6f4-153af1cb50e1-scripts" (OuterVolumeSpecName: "scripts") pod "291e23aa-9411-437f-b6f4-153af1cb50e1" (UID: "291e23aa-9411-437f-b6f4-153af1cb50e1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:23:43 crc kubenswrapper[4580]: I0112 13:23:43.965912 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 12 13:23:43 crc kubenswrapper[4580]: I0112 13:23:43.978317 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 12 13:23:43 crc kubenswrapper[4580]: I0112 13:23:43.978595 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="2b20c518-9be8-47a0-82bd-c2886a86ce70" containerName="nova-scheduler-scheduler" containerID="cri-o://df28dedb1b712adbe021e69c89a63caf4df30dc64bafaf14abeaf775607472bf" gracePeriod=30 Jan 12 13:23:43 crc kubenswrapper[4580]: I0112 13:23:43.987230 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/291e23aa-9411-437f-b6f4-153af1cb50e1-kube-api-access-qkzz8" (OuterVolumeSpecName: "kube-api-access-qkzz8") pod "291e23aa-9411-437f-b6f4-153af1cb50e1" (UID: 
"291e23aa-9411-437f-b6f4-153af1cb50e1"). InnerVolumeSpecName "kube-api-access-qkzz8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:23:43 crc kubenswrapper[4580]: I0112 13:23:43.988597 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/291e23aa-9411-437f-b6f4-153af1cb50e1-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "291e23aa-9411-437f-b6f4-153af1cb50e1" (UID: "291e23aa-9411-437f-b6f4-153af1cb50e1"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:23:43 crc kubenswrapper[4580]: I0112 13:23:43.989318 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 12 13:23:43 crc kubenswrapper[4580]: I0112 13:23:43.990322 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d4d1ddab-8a18-463e-ba71-d6204d371c2c" containerName="nova-metadata-log" containerID="cri-o://40acbd85375d84c165e23167dd472b971cce7f2851d17886b83f9420e9386626" gracePeriod=30 Jan 12 13:23:43 crc kubenswrapper[4580]: I0112 13:23:43.990824 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d4d1ddab-8a18-463e-ba71-d6204d371c2c" containerName="nova-metadata-metadata" containerID="cri-o://a4d962c05ca4a529eeae6e4bde146f8758b7063fe588d4c194a88b0bc86098b6" gracePeriod=30 Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 13:23:44.035754 4580 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/291e23aa-9411-437f-b6f4-153af1cb50e1-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 13:23:44.035788 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkzz8\" (UniqueName: \"kubernetes.io/projected/291e23aa-9411-437f-b6f4-153af1cb50e1-kube-api-access-qkzz8\") on node \"crc\" DevicePath \"\"" 
Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 13:23:44.035800 4580 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/291e23aa-9411-437f-b6f4-153af1cb50e1-scripts\") on node \"crc\" DevicePath \"\"" Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 13:23:44.065886 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/291e23aa-9411-437f-b6f4-153af1cb50e1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "291e23aa-9411-437f-b6f4-153af1cb50e1" (UID: "291e23aa-9411-437f-b6f4-153af1cb50e1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 13:23:44.090686 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/291e23aa-9411-437f-b6f4-153af1cb50e1-config-data" (OuterVolumeSpecName: "config-data") pod "291e23aa-9411-437f-b6f4-153af1cb50e1" (UID: "291e23aa-9411-437f-b6f4-153af1cb50e1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 13:23:44.126427 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-647df7b8c5-t89b4" Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 13:23:44.137419 4580 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/291e23aa-9411-437f-b6f4-153af1cb50e1-config-data\") on node \"crc\" DevicePath \"\"" Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 13:23:44.137448 4580 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/291e23aa-9411-437f-b6f4-153af1cb50e1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 13:23:44.198425 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75dbb546bf-qwbpt"] Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 13:23:44.198639 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-75dbb546bf-qwbpt" podUID="4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1" containerName="dnsmasq-dns" containerID="cri-o://2924035f06f9bee19fa248ac0f0423e712986ab23693fe3eef84880b0b431603" gracePeriod=10 Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 13:23:44.508631 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 13:23:44.656872 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4d1ddab-8a18-463e-ba71-d6204d371c2c-logs\") pod \"d4d1ddab-8a18-463e-ba71-d6204d371c2c\" (UID: \"d4d1ddab-8a18-463e-ba71-d6204d371c2c\") " Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 13:23:44.657063 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4d1ddab-8a18-463e-ba71-d6204d371c2c-combined-ca-bundle\") pod \"d4d1ddab-8a18-463e-ba71-d6204d371c2c\" (UID: \"d4d1ddab-8a18-463e-ba71-d6204d371c2c\") " Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 13:23:44.657207 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4d1ddab-8a18-463e-ba71-d6204d371c2c-config-data\") pod \"d4d1ddab-8a18-463e-ba71-d6204d371c2c\" (UID: \"d4d1ddab-8a18-463e-ba71-d6204d371c2c\") " Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 13:23:44.657308 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4d1ddab-8a18-463e-ba71-d6204d371c2c-nova-metadata-tls-certs\") pod \"d4d1ddab-8a18-463e-ba71-d6204d371c2c\" (UID: \"d4d1ddab-8a18-463e-ba71-d6204d371c2c\") " Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 13:23:44.657402 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbq58\" (UniqueName: \"kubernetes.io/projected/d4d1ddab-8a18-463e-ba71-d6204d371c2c-kube-api-access-wbq58\") pod \"d4d1ddab-8a18-463e-ba71-d6204d371c2c\" (UID: \"d4d1ddab-8a18-463e-ba71-d6204d371c2c\") " Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 13:23:44.657728 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/d4d1ddab-8a18-463e-ba71-d6204d371c2c-logs" (OuterVolumeSpecName: "logs") pod "d4d1ddab-8a18-463e-ba71-d6204d371c2c" (UID: "d4d1ddab-8a18-463e-ba71-d6204d371c2c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 13:23:44.658222 4580 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4d1ddab-8a18-463e-ba71-d6204d371c2c-logs\") on node \"crc\" DevicePath \"\"" Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 13:23:44.666252 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4d1ddab-8a18-463e-ba71-d6204d371c2c-kube-api-access-wbq58" (OuterVolumeSpecName: "kube-api-access-wbq58") pod "d4d1ddab-8a18-463e-ba71-d6204d371c2c" (UID: "d4d1ddab-8a18-463e-ba71-d6204d371c2c"). InnerVolumeSpecName "kube-api-access-wbq58". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 13:23:44.671200 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75dbb546bf-qwbpt" Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 13:23:44.688056 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4d1ddab-8a18-463e-ba71-d6204d371c2c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d4d1ddab-8a18-463e-ba71-d6204d371c2c" (UID: "d4d1ddab-8a18-463e-ba71-d6204d371c2c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 13:23:44.692783 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4d1ddab-8a18-463e-ba71-d6204d371c2c-config-data" (OuterVolumeSpecName: "config-data") pod "d4d1ddab-8a18-463e-ba71-d6204d371c2c" (UID: "d4d1ddab-8a18-463e-ba71-d6204d371c2c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 13:23:44.724232 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4d1ddab-8a18-463e-ba71-d6204d371c2c-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "d4d1ddab-8a18-463e-ba71-d6204d371c2c" (UID: "d4d1ddab-8a18-463e-ba71-d6204d371c2c"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 13:23:44.759433 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1-dns-swift-storage-0\") pod \"4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1\" (UID: \"4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1\") " Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 13:23:44.759484 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1-ovsdbserver-sb\") pod \"4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1\" (UID: \"4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1\") " Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 13:23:44.759609 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1-dns-svc\") pod \"4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1\" (UID: \"4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1\") " Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 13:23:44.759667 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnz7f\" (UniqueName: \"kubernetes.io/projected/4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1-kube-api-access-wnz7f\") pod \"4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1\" (UID: \"4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1\") " Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 
13:23:44.759849 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1-config\") pod \"4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1\" (UID: \"4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1\") " Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 13:23:44.759929 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1-ovsdbserver-nb\") pod \"4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1\" (UID: \"4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1\") " Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 13:23:44.760635 4580 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4d1ddab-8a18-463e-ba71-d6204d371c2c-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 13:23:44.760655 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbq58\" (UniqueName: \"kubernetes.io/projected/d4d1ddab-8a18-463e-ba71-d6204d371c2c-kube-api-access-wbq58\") on node \"crc\" DevicePath \"\"" Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 13:23:44.760665 4580 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4d1ddab-8a18-463e-ba71-d6204d371c2c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 13:23:44.760674 4580 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4d1ddab-8a18-463e-ba71-d6204d371c2c-config-data\") on node \"crc\" DevicePath \"\"" Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 13:23:44.763279 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1-kube-api-access-wnz7f" (OuterVolumeSpecName: 
"kube-api-access-wnz7f") pod "4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1" (UID: "4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1"). InnerVolumeSpecName "kube-api-access-wnz7f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 13:23:44.828800 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1-config" (OuterVolumeSpecName: "config") pod "4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1" (UID: "4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 13:23:44.837524 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1" (UID: "4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 13:23:44.838714 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4651196b-71ee-434b-bb63-e77f16c744e4","Type":"ContainerStarted","Data":"7734ab4e42164ae7d10e890f563d0d59a64d0caffa550fd76e4481f3c9f42662"} Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 13:23:44.840174 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 13:23:44.845359 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1" (UID: "4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 13:23:44.848769 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"5473becf-161f-49fe-86c0-079d4a9d80dc","Type":"ContainerStarted","Data":"d828803d25c8ea81f0eb813af8993bd699fe3b41661987ba12dc452a572634a4"} Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 13:23:44.849353 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 13:23:44.851294 4580 generic.go:334] "Generic (PLEG): container finished" podID="d4d1ddab-8a18-463e-ba71-d6204d371c2c" containerID="a4d962c05ca4a529eeae6e4bde146f8758b7063fe588d4c194a88b0bc86098b6" exitCode=0 Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 13:23:44.851316 4580 generic.go:334] "Generic (PLEG): container finished" podID="d4d1ddab-8a18-463e-ba71-d6204d371c2c" containerID="40acbd85375d84c165e23167dd472b971cce7f2851d17886b83f9420e9386626" exitCode=143 Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 13:23:44.851348 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d4d1ddab-8a18-463e-ba71-d6204d371c2c","Type":"ContainerDied","Data":"a4d962c05ca4a529eeae6e4bde146f8758b7063fe588d4c194a88b0bc86098b6"} Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 13:23:44.851367 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d4d1ddab-8a18-463e-ba71-d6204d371c2c","Type":"ContainerDied","Data":"40acbd85375d84c165e23167dd472b971cce7f2851d17886b83f9420e9386626"} Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 13:23:44.851376 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d4d1ddab-8a18-463e-ba71-d6204d371c2c","Type":"ContainerDied","Data":"1aae941f487af0f3c176a66659fe9ba819c72ba181e84c6f0703cafe672b01b2"} Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 
13:23:44.851393 4580 scope.go:117] "RemoveContainer" containerID="a4d962c05ca4a529eeae6e4bde146f8758b7063fe588d4c194a88b0bc86098b6" Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 13:23:44.851485 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 13:23:44.865353 4580 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 13:23:44.865529 4580 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 13:23:44.865541 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wnz7f\" (UniqueName: \"kubernetes.io/projected/4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1-kube-api-access-wnz7f\") on node \"crc\" DevicePath \"\"" Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 13:23:44.865550 4580 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1-config\") on node \"crc\" DevicePath \"\"" Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 13:23:44.874735 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1" (UID: "4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 13:23:44.882432 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.614921752 podStartE2EDuration="2.882420955s" podCreationTimestamp="2026-01-12 13:23:42 +0000 UTC" firstStartedPulling="2026-01-12 13:23:43.601193029 +0000 UTC m=+1022.645411719" lastFinishedPulling="2026-01-12 13:23:43.868692232 +0000 UTC m=+1022.912910922" observedRunningTime="2026-01-12 13:23:44.874532694 +0000 UTC m=+1023.918751385" watchObservedRunningTime="2026-01-12 13:23:44.882420955 +0000 UTC m=+1023.926639645" Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 13:23:44.885253 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1" (UID: "4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 13:23:44.890281 4580 generic.go:334] "Generic (PLEG): container finished" podID="4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1" containerID="2924035f06f9bee19fa248ac0f0423e712986ab23693fe3eef84880b0b431603" exitCode=0 Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 13:23:44.890340 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75dbb546bf-qwbpt" event={"ID":"4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1","Type":"ContainerDied","Data":"2924035f06f9bee19fa248ac0f0423e712986ab23693fe3eef84880b0b431603"} Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 13:23:44.890364 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75dbb546bf-qwbpt" event={"ID":"4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1","Type":"ContainerDied","Data":"2dd23b7b45f1fe5070621c7d21c1c1aacd97ba09856cfe301dba35ce465b9f39"} Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 13:23:44.890425 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75dbb546bf-qwbpt" Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 13:23:44.905182 4580 scope.go:117] "RemoveContainer" containerID="40acbd85375d84c165e23167dd472b971cce7f2851d17886b83f9420e9386626" Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 13:23:44.921506 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="41c144d2-91b1-4672-84a8-5dc673ac910f" containerName="nova-api-log" containerID="cri-o://dcefd4e249d25243f31e309413125b3b8f17e328541d61f60629030f91ab205e" gracePeriod=30 Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 13:23:44.921787 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 13:23:44.922169 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"291e23aa-9411-437f-b6f4-153af1cb50e1","Type":"ContainerDied","Data":"3c3eb458baf520a7558055c4e919cc5b4a05bb3dcff076795886ef46272aa2d7"} Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 13:23:44.922218 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="41c144d2-91b1-4672-84a8-5dc673ac910f" containerName="nova-api-api" containerID="cri-o://c056a2983242c19261c5b4727a70c1d136dbbff19617d6e789d470951eb6ea48" gracePeriod=30 Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 13:23:44.926434 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.926414249 podStartE2EDuration="2.926414249s" podCreationTimestamp="2026-01-12 13:23:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:23:44.900936161 +0000 UTC m=+1023.945154852" watchObservedRunningTime="2026-01-12 13:23:44.926414249 +0000 UTC m=+1023.970632940" Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 13:23:44.968392 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 13:23:44.969957 4580 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 13:23:44.969978 4580 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 
13:23:44.975402 4580 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="41c144d2-91b1-4672-84a8-5dc673ac910f" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.186:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 13:23:44.975915 4580 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="41c144d2-91b1-4672-84a8-5dc673ac910f" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.186:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 13:23:44.985170 4580 scope.go:117] "RemoveContainer" containerID="a4d962c05ca4a529eeae6e4bde146f8758b7063fe588d4c194a88b0bc86098b6" Jan 12 13:23:44 crc kubenswrapper[4580]: E0112 13:23:44.996534 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4d962c05ca4a529eeae6e4bde146f8758b7063fe588d4c194a88b0bc86098b6\": container with ID starting with a4d962c05ca4a529eeae6e4bde146f8758b7063fe588d4c194a88b0bc86098b6 not found: ID does not exist" containerID="a4d962c05ca4a529eeae6e4bde146f8758b7063fe588d4c194a88b0bc86098b6" Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 13:23:44.996584 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4d962c05ca4a529eeae6e4bde146f8758b7063fe588d4c194a88b0bc86098b6"} err="failed to get container status \"a4d962c05ca4a529eeae6e4bde146f8758b7063fe588d4c194a88b0bc86098b6\": rpc error: code = NotFound desc = could not find container \"a4d962c05ca4a529eeae6e4bde146f8758b7063fe588d4c194a88b0bc86098b6\": container with ID starting with a4d962c05ca4a529eeae6e4bde146f8758b7063fe588d4c194a88b0bc86098b6 not found: ID does not exist" Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 13:23:44.996608 4580 scope.go:117] 
"RemoveContainer" containerID="40acbd85375d84c165e23167dd472b971cce7f2851d17886b83f9420e9386626" Jan 12 13:23:44 crc kubenswrapper[4580]: E0112 13:23:44.997900 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40acbd85375d84c165e23167dd472b971cce7f2851d17886b83f9420e9386626\": container with ID starting with 40acbd85375d84c165e23167dd472b971cce7f2851d17886b83f9420e9386626 not found: ID does not exist" containerID="40acbd85375d84c165e23167dd472b971cce7f2851d17886b83f9420e9386626" Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 13:23:44.997931 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40acbd85375d84c165e23167dd472b971cce7f2851d17886b83f9420e9386626"} err="failed to get container status \"40acbd85375d84c165e23167dd472b971cce7f2851d17886b83f9420e9386626\": rpc error: code = NotFound desc = could not find container \"40acbd85375d84c165e23167dd472b971cce7f2851d17886b83f9420e9386626\": container with ID starting with 40acbd85375d84c165e23167dd472b971cce7f2851d17886b83f9420e9386626 not found: ID does not exist" Jan 12 13:23:44 crc kubenswrapper[4580]: I0112 13:23:44.997952 4580 scope.go:117] "RemoveContainer" containerID="a4d962c05ca4a529eeae6e4bde146f8758b7063fe588d4c194a88b0bc86098b6" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.006090 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.023160 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4d962c05ca4a529eeae6e4bde146f8758b7063fe588d4c194a88b0bc86098b6"} err="failed to get container status \"a4d962c05ca4a529eeae6e4bde146f8758b7063fe588d4c194a88b0bc86098b6\": rpc error: code = NotFound desc = could not find container \"a4d962c05ca4a529eeae6e4bde146f8758b7063fe588d4c194a88b0bc86098b6\": container with ID starting with 
a4d962c05ca4a529eeae6e4bde146f8758b7063fe588d4c194a88b0bc86098b6 not found: ID does not exist" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.023217 4580 scope.go:117] "RemoveContainer" containerID="40acbd85375d84c165e23167dd472b971cce7f2851d17886b83f9420e9386626" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.032182 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75dbb546bf-qwbpt"] Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.032434 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40acbd85375d84c165e23167dd472b971cce7f2851d17886b83f9420e9386626"} err="failed to get container status \"40acbd85375d84c165e23167dd472b971cce7f2851d17886b83f9420e9386626\": rpc error: code = NotFound desc = could not find container \"40acbd85375d84c165e23167dd472b971cce7f2851d17886b83f9420e9386626\": container with ID starting with 40acbd85375d84c165e23167dd472b971cce7f2851d17886b83f9420e9386626 not found: ID does not exist" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.032477 4580 scope.go:117] "RemoveContainer" containerID="2924035f06f9bee19fa248ac0f0423e712986ab23693fe3eef84880b0b431603" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.052573 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75dbb546bf-qwbpt"] Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.085145 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 12 13:23:45 crc kubenswrapper[4580]: E0112 13:23:45.085594 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4d1ddab-8a18-463e-ba71-d6204d371c2c" containerName="nova-metadata-metadata" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.085609 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4d1ddab-8a18-463e-ba71-d6204d371c2c" containerName="nova-metadata-metadata" Jan 12 13:23:45 crc kubenswrapper[4580]: E0112 13:23:45.085620 
4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="291e23aa-9411-437f-b6f4-153af1cb50e1" containerName="ceilometer-notification-agent" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.085626 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="291e23aa-9411-437f-b6f4-153af1cb50e1" containerName="ceilometer-notification-agent" Jan 12 13:23:45 crc kubenswrapper[4580]: E0112 13:23:45.085638 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="291e23aa-9411-437f-b6f4-153af1cb50e1" containerName="ceilometer-central-agent" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.085644 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="291e23aa-9411-437f-b6f4-153af1cb50e1" containerName="ceilometer-central-agent" Jan 12 13:23:45 crc kubenswrapper[4580]: E0112 13:23:45.085652 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4d1ddab-8a18-463e-ba71-d6204d371c2c" containerName="nova-metadata-log" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.085658 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4d1ddab-8a18-463e-ba71-d6204d371c2c" containerName="nova-metadata-log" Jan 12 13:23:45 crc kubenswrapper[4580]: E0112 13:23:45.085671 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1" containerName="init" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.085676 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1" containerName="init" Jan 12 13:23:45 crc kubenswrapper[4580]: E0112 13:23:45.085691 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="291e23aa-9411-437f-b6f4-153af1cb50e1" containerName="proxy-httpd" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.085697 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="291e23aa-9411-437f-b6f4-153af1cb50e1" containerName="proxy-httpd" Jan 12 13:23:45 crc kubenswrapper[4580]: E0112 
13:23:45.085708 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1" containerName="dnsmasq-dns" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.085714 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1" containerName="dnsmasq-dns" Jan 12 13:23:45 crc kubenswrapper[4580]: E0112 13:23:45.085723 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2027dbc4-0cd9-405d-8f11-9c57de3d47e6" containerName="nova-manage" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.085729 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="2027dbc4-0cd9-405d-8f11-9c57de3d47e6" containerName="nova-manage" Jan 12 13:23:45 crc kubenswrapper[4580]: E0112 13:23:45.085740 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="291e23aa-9411-437f-b6f4-153af1cb50e1" containerName="sg-core" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.085747 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="291e23aa-9411-437f-b6f4-153af1cb50e1" containerName="sg-core" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.085944 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1" containerName="dnsmasq-dns" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.085958 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="2027dbc4-0cd9-405d-8f11-9c57de3d47e6" containerName="nova-manage" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.085966 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="291e23aa-9411-437f-b6f4-153af1cb50e1" containerName="proxy-httpd" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.085977 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4d1ddab-8a18-463e-ba71-d6204d371c2c" containerName="nova-metadata-log" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.085987 4580 
memory_manager.go:354] "RemoveStaleState removing state" podUID="291e23aa-9411-437f-b6f4-153af1cb50e1" containerName="sg-core" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.086001 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4d1ddab-8a18-463e-ba71-d6204d371c2c" containerName="nova-metadata-metadata" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.086024 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="291e23aa-9411-437f-b6f4-153af1cb50e1" containerName="ceilometer-central-agent" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.086034 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="291e23aa-9411-437f-b6f4-153af1cb50e1" containerName="ceilometer-notification-agent" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.087840 4580 scope.go:117] "RemoveContainer" containerID="c472ba78858aa0446fb625f553fbcb36f6a8d4c73c32a7f7ba4b966880e70d04" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.088034 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.092864 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.101601 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.101771 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.132927 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.162682 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.174786 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8937c039-ef58-4e1c-ac6e-719494fe812a-logs\") pod \"nova-metadata-0\" (UID: \"8937c039-ef58-4e1c-ac6e-719494fe812a\") " pod="openstack/nova-metadata-0" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.175072 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8937c039-ef58-4e1c-ac6e-719494fe812a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8937c039-ef58-4e1c-ac6e-719494fe812a\") " pod="openstack/nova-metadata-0" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.175139 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8937c039-ef58-4e1c-ac6e-719494fe812a-config-data\") pod \"nova-metadata-0\" (UID: \"8937c039-ef58-4e1c-ac6e-719494fe812a\") " pod="openstack/nova-metadata-0" Jan 12 13:23:45 
crc kubenswrapper[4580]: I0112 13:23:45.175218 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8937c039-ef58-4e1c-ac6e-719494fe812a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8937c039-ef58-4e1c-ac6e-719494fe812a\") " pod="openstack/nova-metadata-0" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.175244 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ll824\" (UniqueName: \"kubernetes.io/projected/8937c039-ef58-4e1c-ac6e-719494fe812a-kube-api-access-ll824\") pod \"nova-metadata-0\" (UID: \"8937c039-ef58-4e1c-ac6e-719494fe812a\") " pod="openstack/nova-metadata-0" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.179183 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.182309 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.184233 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.184518 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.184635 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.190760 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.193818 4580 scope.go:117] "RemoveContainer" containerID="2924035f06f9bee19fa248ac0f0423e712986ab23693fe3eef84880b0b431603" Jan 12 13:23:45 crc kubenswrapper[4580]: E0112 13:23:45.195826 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2924035f06f9bee19fa248ac0f0423e712986ab23693fe3eef84880b0b431603\": container with ID starting with 2924035f06f9bee19fa248ac0f0423e712986ab23693fe3eef84880b0b431603 not found: ID does not exist" containerID="2924035f06f9bee19fa248ac0f0423e712986ab23693fe3eef84880b0b431603" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.195861 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2924035f06f9bee19fa248ac0f0423e712986ab23693fe3eef84880b0b431603"} err="failed to get container status \"2924035f06f9bee19fa248ac0f0423e712986ab23693fe3eef84880b0b431603\": rpc error: code = NotFound desc = could not find container \"2924035f06f9bee19fa248ac0f0423e712986ab23693fe3eef84880b0b431603\": container with ID starting with 2924035f06f9bee19fa248ac0f0423e712986ab23693fe3eef84880b0b431603 not found: ID does not exist" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 
13:23:45.195881 4580 scope.go:117] "RemoveContainer" containerID="c472ba78858aa0446fb625f553fbcb36f6a8d4c73c32a7f7ba4b966880e70d04" Jan 12 13:23:45 crc kubenswrapper[4580]: E0112 13:23:45.204283 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c472ba78858aa0446fb625f553fbcb36f6a8d4c73c32a7f7ba4b966880e70d04\": container with ID starting with c472ba78858aa0446fb625f553fbcb36f6a8d4c73c32a7f7ba4b966880e70d04 not found: ID does not exist" containerID="c472ba78858aa0446fb625f553fbcb36f6a8d4c73c32a7f7ba4b966880e70d04" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.204344 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c472ba78858aa0446fb625f553fbcb36f6a8d4c73c32a7f7ba4b966880e70d04"} err="failed to get container status \"c472ba78858aa0446fb625f553fbcb36f6a8d4c73c32a7f7ba4b966880e70d04\": rpc error: code = NotFound desc = could not find container \"c472ba78858aa0446fb625f553fbcb36f6a8d4c73c32a7f7ba4b966880e70d04\": container with ID starting with c472ba78858aa0446fb625f553fbcb36f6a8d4c73c32a7f7ba4b966880e70d04 not found: ID does not exist" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.204372 4580 scope.go:117] "RemoveContainer" containerID="8396c0b2265f5e58dbaf0876b1567665e087e08a74aa3b1d728c8cfc8a5413be" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.231286 4580 scope.go:117] "RemoveContainer" containerID="d018fbe05e8acc6b74837bb042a6b725901c89ef267ff742c376c35430602c7e" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.256724 4580 scope.go:117] "RemoveContainer" containerID="1d1bc6d5ce33679b74173dde51a26640ed59de6b6e5b9b384cebc98ecc61b169" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.275863 4580 scope.go:117] "RemoveContainer" containerID="bdcae0237b0a1f5c47811c4f32c5373ed0b836ddfa608721615e29f04b4178d7" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.277460 4580 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8937c039-ef58-4e1c-ac6e-719494fe812a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8937c039-ef58-4e1c-ac6e-719494fe812a\") " pod="openstack/nova-metadata-0" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.277521 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ll824\" (UniqueName: \"kubernetes.io/projected/8937c039-ef58-4e1c-ac6e-719494fe812a-kube-api-access-ll824\") pod \"nova-metadata-0\" (UID: \"8937c039-ef58-4e1c-ac6e-719494fe812a\") " pod="openstack/nova-metadata-0" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.277556 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/591f8715-eb34-4966-80ae-34173d8546fa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"591f8715-eb34-4966-80ae-34173d8546fa\") " pod="openstack/ceilometer-0" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.277609 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntk67\" (UniqueName: \"kubernetes.io/projected/591f8715-eb34-4966-80ae-34173d8546fa-kube-api-access-ntk67\") pod \"ceilometer-0\" (UID: \"591f8715-eb34-4966-80ae-34173d8546fa\") " pod="openstack/ceilometer-0" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.277670 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/591f8715-eb34-4966-80ae-34173d8546fa-scripts\") pod \"ceilometer-0\" (UID: \"591f8715-eb34-4966-80ae-34173d8546fa\") " pod="openstack/ceilometer-0" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.277695 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/591f8715-eb34-4966-80ae-34173d8546fa-run-httpd\") pod \"ceilometer-0\" (UID: \"591f8715-eb34-4966-80ae-34173d8546fa\") " pod="openstack/ceilometer-0" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.277724 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/591f8715-eb34-4966-80ae-34173d8546fa-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"591f8715-eb34-4966-80ae-34173d8546fa\") " pod="openstack/ceilometer-0" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.277757 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/591f8715-eb34-4966-80ae-34173d8546fa-config-data\") pod \"ceilometer-0\" (UID: \"591f8715-eb34-4966-80ae-34173d8546fa\") " pod="openstack/ceilometer-0" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.277791 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8937c039-ef58-4e1c-ac6e-719494fe812a-logs\") pod \"nova-metadata-0\" (UID: \"8937c039-ef58-4e1c-ac6e-719494fe812a\") " pod="openstack/nova-metadata-0" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.277853 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/591f8715-eb34-4966-80ae-34173d8546fa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"591f8715-eb34-4966-80ae-34173d8546fa\") " pod="openstack/ceilometer-0" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.277914 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8937c039-ef58-4e1c-ac6e-719494fe812a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8937c039-ef58-4e1c-ac6e-719494fe812a\") " 
pod="openstack/nova-metadata-0" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.277935 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/591f8715-eb34-4966-80ae-34173d8546fa-log-httpd\") pod \"ceilometer-0\" (UID: \"591f8715-eb34-4966-80ae-34173d8546fa\") " pod="openstack/ceilometer-0" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.277956 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8937c039-ef58-4e1c-ac6e-719494fe812a-config-data\") pod \"nova-metadata-0\" (UID: \"8937c039-ef58-4e1c-ac6e-719494fe812a\") " pod="openstack/nova-metadata-0" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.278688 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8937c039-ef58-4e1c-ac6e-719494fe812a-logs\") pod \"nova-metadata-0\" (UID: \"8937c039-ef58-4e1c-ac6e-719494fe812a\") " pod="openstack/nova-metadata-0" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.284617 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8937c039-ef58-4e1c-ac6e-719494fe812a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8937c039-ef58-4e1c-ac6e-719494fe812a\") " pod="openstack/nova-metadata-0" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.284983 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8937c039-ef58-4e1c-ac6e-719494fe812a-config-data\") pod \"nova-metadata-0\" (UID: \"8937c039-ef58-4e1c-ac6e-719494fe812a\") " pod="openstack/nova-metadata-0" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.285034 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8937c039-ef58-4e1c-ac6e-719494fe812a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8937c039-ef58-4e1c-ac6e-719494fe812a\") " pod="openstack/nova-metadata-0" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.302670 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ll824\" (UniqueName: \"kubernetes.io/projected/8937c039-ef58-4e1c-ac6e-719494fe812a-kube-api-access-ll824\") pod \"nova-metadata-0\" (UID: \"8937c039-ef58-4e1c-ac6e-719494fe812a\") " pod="openstack/nova-metadata-0" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.307759 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="291e23aa-9411-437f-b6f4-153af1cb50e1" path="/var/lib/kubelet/pods/291e23aa-9411-437f-b6f4-153af1cb50e1/volumes" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.308441 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1" path="/var/lib/kubelet/pods/4b8c0c88-3d61-4cd1-9b9e-df2ca83717f1/volumes" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.308971 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4d1ddab-8a18-463e-ba71-d6204d371c2c" path="/var/lib/kubelet/pods/d4d1ddab-8a18-463e-ba71-d6204d371c2c/volumes" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.379485 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/591f8715-eb34-4966-80ae-34173d8546fa-scripts\") pod \"ceilometer-0\" (UID: \"591f8715-eb34-4966-80ae-34173d8546fa\") " pod="openstack/ceilometer-0" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.379547 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/591f8715-eb34-4966-80ae-34173d8546fa-run-httpd\") pod \"ceilometer-0\" (UID: \"591f8715-eb34-4966-80ae-34173d8546fa\") " pod="openstack/ceilometer-0" Jan 
12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.379595 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/591f8715-eb34-4966-80ae-34173d8546fa-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"591f8715-eb34-4966-80ae-34173d8546fa\") " pod="openstack/ceilometer-0" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.379626 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/591f8715-eb34-4966-80ae-34173d8546fa-config-data\") pod \"ceilometer-0\" (UID: \"591f8715-eb34-4966-80ae-34173d8546fa\") " pod="openstack/ceilometer-0" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.379789 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/591f8715-eb34-4966-80ae-34173d8546fa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"591f8715-eb34-4966-80ae-34173d8546fa\") " pod="openstack/ceilometer-0" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.379890 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/591f8715-eb34-4966-80ae-34173d8546fa-log-httpd\") pod \"ceilometer-0\" (UID: \"591f8715-eb34-4966-80ae-34173d8546fa\") " pod="openstack/ceilometer-0" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.379986 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/591f8715-eb34-4966-80ae-34173d8546fa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"591f8715-eb34-4966-80ae-34173d8546fa\") " pod="openstack/ceilometer-0" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.380057 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntk67\" (UniqueName: 
\"kubernetes.io/projected/591f8715-eb34-4966-80ae-34173d8546fa-kube-api-access-ntk67\") pod \"ceilometer-0\" (UID: \"591f8715-eb34-4966-80ae-34173d8546fa\") " pod="openstack/ceilometer-0" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.381041 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/591f8715-eb34-4966-80ae-34173d8546fa-log-httpd\") pod \"ceilometer-0\" (UID: \"591f8715-eb34-4966-80ae-34173d8546fa\") " pod="openstack/ceilometer-0" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.382180 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/591f8715-eb34-4966-80ae-34173d8546fa-run-httpd\") pod \"ceilometer-0\" (UID: \"591f8715-eb34-4966-80ae-34173d8546fa\") " pod="openstack/ceilometer-0" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.384540 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/591f8715-eb34-4966-80ae-34173d8546fa-scripts\") pod \"ceilometer-0\" (UID: \"591f8715-eb34-4966-80ae-34173d8546fa\") " pod="openstack/ceilometer-0" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.385704 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/591f8715-eb34-4966-80ae-34173d8546fa-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"591f8715-eb34-4966-80ae-34173d8546fa\") " pod="openstack/ceilometer-0" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.385999 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/591f8715-eb34-4966-80ae-34173d8546fa-config-data\") pod \"ceilometer-0\" (UID: \"591f8715-eb34-4966-80ae-34173d8546fa\") " pod="openstack/ceilometer-0" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.386546 4580 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/591f8715-eb34-4966-80ae-34173d8546fa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"591f8715-eb34-4966-80ae-34173d8546fa\") " pod="openstack/ceilometer-0" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.389681 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/591f8715-eb34-4966-80ae-34173d8546fa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"591f8715-eb34-4966-80ae-34173d8546fa\") " pod="openstack/ceilometer-0" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.395208 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntk67\" (UniqueName: \"kubernetes.io/projected/591f8715-eb34-4966-80ae-34173d8546fa-kube-api-access-ntk67\") pod \"ceilometer-0\" (UID: \"591f8715-eb34-4966-80ae-34173d8546fa\") " pod="openstack/ceilometer-0" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.513371 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.524507 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.830474 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.889259 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b20c518-9be8-47a0-82bd-c2886a86ce70-config-data\") pod \"2b20c518-9be8-47a0-82bd-c2886a86ce70\" (UID: \"2b20c518-9be8-47a0-82bd-c2886a86ce70\") " Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.889416 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b20c518-9be8-47a0-82bd-c2886a86ce70-combined-ca-bundle\") pod \"2b20c518-9be8-47a0-82bd-c2886a86ce70\" (UID: \"2b20c518-9be8-47a0-82bd-c2886a86ce70\") " Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.889498 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkhtw\" (UniqueName: \"kubernetes.io/projected/2b20c518-9be8-47a0-82bd-c2886a86ce70-kube-api-access-pkhtw\") pod \"2b20c518-9be8-47a0-82bd-c2886a86ce70\" (UID: \"2b20c518-9be8-47a0-82bd-c2886a86ce70\") " Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.893910 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b20c518-9be8-47a0-82bd-c2886a86ce70-kube-api-access-pkhtw" (OuterVolumeSpecName: "kube-api-access-pkhtw") pod "2b20c518-9be8-47a0-82bd-c2886a86ce70" (UID: "2b20c518-9be8-47a0-82bd-c2886a86ce70"). InnerVolumeSpecName "kube-api-access-pkhtw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.925434 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b20c518-9be8-47a0-82bd-c2886a86ce70-config-data" (OuterVolumeSpecName: "config-data") pod "2b20c518-9be8-47a0-82bd-c2886a86ce70" (UID: "2b20c518-9be8-47a0-82bd-c2886a86ce70"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.925829 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b20c518-9be8-47a0-82bd-c2886a86ce70-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2b20c518-9be8-47a0-82bd-c2886a86ce70" (UID: "2b20c518-9be8-47a0-82bd-c2886a86ce70"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.933460 4580 generic.go:334] "Generic (PLEG): container finished" podID="41c144d2-91b1-4672-84a8-5dc673ac910f" containerID="dcefd4e249d25243f31e309413125b3b8f17e328541d61f60629030f91ab205e" exitCode=143 Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.933509 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"41c144d2-91b1-4672-84a8-5dc673ac910f","Type":"ContainerDied","Data":"dcefd4e249d25243f31e309413125b3b8f17e328541d61f60629030f91ab205e"} Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.942938 4580 generic.go:334] "Generic (PLEG): container finished" podID="2b20c518-9be8-47a0-82bd-c2886a86ce70" containerID="df28dedb1b712adbe021e69c89a63caf4df30dc64bafaf14abeaf775607472bf" exitCode=0 Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.942981 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.942992 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2b20c518-9be8-47a0-82bd-c2886a86ce70","Type":"ContainerDied","Data":"df28dedb1b712adbe021e69c89a63caf4df30dc64bafaf14abeaf775607472bf"} Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.943077 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2b20c518-9be8-47a0-82bd-c2886a86ce70","Type":"ContainerDied","Data":"22a6db4d78690a560ce0c34cba97ff65bed5a3b25cda4c195286e95ba9bbb752"} Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.943126 4580 scope.go:117] "RemoveContainer" containerID="df28dedb1b712adbe021e69c89a63caf4df30dc64bafaf14abeaf775607472bf" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.978359 4580 scope.go:117] "RemoveContainer" containerID="df28dedb1b712adbe021e69c89a63caf4df30dc64bafaf14abeaf775607472bf" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.979900 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 12 13:23:45 crc kubenswrapper[4580]: E0112 13:23:45.981515 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df28dedb1b712adbe021e69c89a63caf4df30dc64bafaf14abeaf775607472bf\": container with ID starting with df28dedb1b712adbe021e69c89a63caf4df30dc64bafaf14abeaf775607472bf not found: ID does not exist" containerID="df28dedb1b712adbe021e69c89a63caf4df30dc64bafaf14abeaf775607472bf" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.981548 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df28dedb1b712adbe021e69c89a63caf4df30dc64bafaf14abeaf775607472bf"} err="failed to get container status \"df28dedb1b712adbe021e69c89a63caf4df30dc64bafaf14abeaf775607472bf\": rpc error: code = NotFound 
desc = could not find container \"df28dedb1b712adbe021e69c89a63caf4df30dc64bafaf14abeaf775607472bf\": container with ID starting with df28dedb1b712adbe021e69c89a63caf4df30dc64bafaf14abeaf775607472bf not found: ID does not exist" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.991670 4580 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b20c518-9be8-47a0-82bd-c2886a86ce70-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.991698 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkhtw\" (UniqueName: \"kubernetes.io/projected/2b20c518-9be8-47a0-82bd-c2886a86ce70-kube-api-access-pkhtw\") on node \"crc\" DevicePath \"\"" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.991708 4580 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b20c518-9be8-47a0-82bd-c2886a86ce70-config-data\") on node \"crc\" DevicePath \"\"" Jan 12 13:23:45 crc kubenswrapper[4580]: I0112 13:23:45.992260 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 12 13:23:46 crc kubenswrapper[4580]: I0112 13:23:46.006935 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 12 13:23:46 crc kubenswrapper[4580]: I0112 13:23:46.018239 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 12 13:23:46 crc kubenswrapper[4580]: E0112 13:23:46.018709 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b20c518-9be8-47a0-82bd-c2886a86ce70" containerName="nova-scheduler-scheduler" Jan 12 13:23:46 crc kubenswrapper[4580]: I0112 13:23:46.018726 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b20c518-9be8-47a0-82bd-c2886a86ce70" containerName="nova-scheduler-scheduler" Jan 12 13:23:46 crc kubenswrapper[4580]: I0112 13:23:46.018890 4580 
memory_manager.go:354] "RemoveStaleState removing state" podUID="2b20c518-9be8-47a0-82bd-c2886a86ce70" containerName="nova-scheduler-scheduler" Jan 12 13:23:46 crc kubenswrapper[4580]: I0112 13:23:46.019553 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 12 13:23:46 crc kubenswrapper[4580]: I0112 13:23:46.020040 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 12 13:23:46 crc kubenswrapper[4580]: I0112 13:23:46.022544 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 12 13:23:46 crc kubenswrapper[4580]: I0112 13:23:46.091603 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 12 13:23:46 crc kubenswrapper[4580]: I0112 13:23:46.093143 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93fb417b-6d30-4426-9360-2623f77e99fb-config-data\") pod \"nova-scheduler-0\" (UID: \"93fb417b-6d30-4426-9360-2623f77e99fb\") " pod="openstack/nova-scheduler-0" Jan 12 13:23:46 crc kubenswrapper[4580]: I0112 13:23:46.093218 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk9ln\" (UniqueName: \"kubernetes.io/projected/93fb417b-6d30-4426-9360-2623f77e99fb-kube-api-access-dk9ln\") pod \"nova-scheduler-0\" (UID: \"93fb417b-6d30-4426-9360-2623f77e99fb\") " pod="openstack/nova-scheduler-0" Jan 12 13:23:46 crc kubenswrapper[4580]: I0112 13:23:46.093304 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93fb417b-6d30-4426-9360-2623f77e99fb-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"93fb417b-6d30-4426-9360-2623f77e99fb\") " pod="openstack/nova-scheduler-0" Jan 12 13:23:46 crc kubenswrapper[4580]: W0112 
13:23:46.104754 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod591f8715_eb34_4966_80ae_34173d8546fa.slice/crio-feb2931b2d6a3ac9e0cd9cb6652194e53eccaac85616fa7f73ff2fb2bfc41c5e WatchSource:0}: Error finding container feb2931b2d6a3ac9e0cd9cb6652194e53eccaac85616fa7f73ff2fb2bfc41c5e: Status 404 returned error can't find the container with id feb2931b2d6a3ac9e0cd9cb6652194e53eccaac85616fa7f73ff2fb2bfc41c5e Jan 12 13:23:46 crc kubenswrapper[4580]: I0112 13:23:46.195439 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dk9ln\" (UniqueName: \"kubernetes.io/projected/93fb417b-6d30-4426-9360-2623f77e99fb-kube-api-access-dk9ln\") pod \"nova-scheduler-0\" (UID: \"93fb417b-6d30-4426-9360-2623f77e99fb\") " pod="openstack/nova-scheduler-0" Jan 12 13:23:46 crc kubenswrapper[4580]: I0112 13:23:46.195884 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93fb417b-6d30-4426-9360-2623f77e99fb-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"93fb417b-6d30-4426-9360-2623f77e99fb\") " pod="openstack/nova-scheduler-0" Jan 12 13:23:46 crc kubenswrapper[4580]: I0112 13:23:46.195977 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93fb417b-6d30-4426-9360-2623f77e99fb-config-data\") pod \"nova-scheduler-0\" (UID: \"93fb417b-6d30-4426-9360-2623f77e99fb\") " pod="openstack/nova-scheduler-0" Jan 12 13:23:46 crc kubenswrapper[4580]: I0112 13:23:46.202653 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93fb417b-6d30-4426-9360-2623f77e99fb-config-data\") pod \"nova-scheduler-0\" (UID: \"93fb417b-6d30-4426-9360-2623f77e99fb\") " pod="openstack/nova-scheduler-0" Jan 12 13:23:46 crc kubenswrapper[4580]: I0112 
13:23:46.202663 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93fb417b-6d30-4426-9360-2623f77e99fb-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"93fb417b-6d30-4426-9360-2623f77e99fb\") " pod="openstack/nova-scheduler-0" Jan 12 13:23:46 crc kubenswrapper[4580]: I0112 13:23:46.212991 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dk9ln\" (UniqueName: \"kubernetes.io/projected/93fb417b-6d30-4426-9360-2623f77e99fb-kube-api-access-dk9ln\") pod \"nova-scheduler-0\" (UID: \"93fb417b-6d30-4426-9360-2623f77e99fb\") " pod="openstack/nova-scheduler-0" Jan 12 13:23:46 crc kubenswrapper[4580]: I0112 13:23:46.345207 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 12 13:23:46 crc kubenswrapper[4580]: I0112 13:23:46.747239 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 12 13:23:46 crc kubenswrapper[4580]: I0112 13:23:46.949847 4580 patch_prober.go:28] interesting pod/machine-config-daemon-hdz6l container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 12 13:23:46 crc kubenswrapper[4580]: I0112 13:23:46.950233 4580 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 12 13:23:46 crc kubenswrapper[4580]: I0112 13:23:46.958270 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"93fb417b-6d30-4426-9360-2623f77e99fb","Type":"ContainerStarted","Data":"d2cde6fe998087e525d855ebf4c978e3b2742bd7ac2d825c5ca1d7608bcb7649"} Jan 12 13:23:46 crc kubenswrapper[4580]: I0112 13:23:46.958306 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"93fb417b-6d30-4426-9360-2623f77e99fb","Type":"ContainerStarted","Data":"a947c4f27de8596deeff6893e12b1b6405b561fb6cf9ccc7af7c6f3b62b5b3c6"} Jan 12 13:23:46 crc kubenswrapper[4580]: I0112 13:23:46.963499 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8937c039-ef58-4e1c-ac6e-719494fe812a","Type":"ContainerStarted","Data":"258d8c7ef3c13567f43c0b5a656985856a0e8a5464d2008520117f105296fbe2"} Jan 12 13:23:46 crc kubenswrapper[4580]: I0112 13:23:46.963580 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8937c039-ef58-4e1c-ac6e-719494fe812a","Type":"ContainerStarted","Data":"adcb15e39b50b2f4d811be499fd5edbeffc425e880997968576b35ea0b6b9807"} Jan 12 13:23:46 crc kubenswrapper[4580]: I0112 13:23:46.963597 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8937c039-ef58-4e1c-ac6e-719494fe812a","Type":"ContainerStarted","Data":"025fb2732776de849f4c80162d0736ae655d6095c41d5602ad91844fc29f166f"} Jan 12 13:23:46 crc kubenswrapper[4580]: I0112 13:23:46.964596 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"591f8715-eb34-4966-80ae-34173d8546fa","Type":"ContainerStarted","Data":"feb2931b2d6a3ac9e0cd9cb6652194e53eccaac85616fa7f73ff2fb2bfc41c5e"} Jan 12 13:23:46 crc kubenswrapper[4580]: I0112 13:23:46.992248 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.992226439 podStartE2EDuration="1.992226439s" podCreationTimestamp="2026-01-12 13:23:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:23:46.986251187 +0000 UTC m=+1026.030469877" watchObservedRunningTime="2026-01-12 13:23:46.992226439 +0000 UTC m=+1026.036445130" Jan 12 13:23:47 crc kubenswrapper[4580]: I0112 13:23:47.022606 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.022588248 podStartE2EDuration="3.022588248s" podCreationTimestamp="2026-01-12 13:23:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:23:47.012531021 +0000 UTC m=+1026.056749710" watchObservedRunningTime="2026-01-12 13:23:47.022588248 +0000 UTC m=+1026.066806938" Jan 12 13:23:47 crc kubenswrapper[4580]: I0112 13:23:47.293860 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b20c518-9be8-47a0-82bd-c2886a86ce70" path="/var/lib/kubelet/pods/2b20c518-9be8-47a0-82bd-c2886a86ce70/volumes" Jan 12 13:23:47 crc kubenswrapper[4580]: I0112 13:23:47.975656 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"591f8715-eb34-4966-80ae-34173d8546fa","Type":"ContainerStarted","Data":"705e8be649095a1cc901917b791590bade11bd048cdfc7ac9c22d877af13f1ad"} Jan 12 13:23:48 crc kubenswrapper[4580]: I0112 13:23:48.240389 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 12 13:23:48 crc kubenswrapper[4580]: I0112 13:23:48.988995 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"591f8715-eb34-4966-80ae-34173d8546fa","Type":"ContainerStarted","Data":"fa85fa209793d02491941e50e9d87ae514e1929365dc475b7c5317ba46a3f31b"} Jan 12 13:23:50 crc kubenswrapper[4580]: I0112 13:23:50.000131 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"591f8715-eb34-4966-80ae-34173d8546fa","Type":"ContainerStarted","Data":"2c000bf9ad456e0f3ba735f28d6a47c9e35186bb642f3ff13e42c7f6a08e6f3f"} Jan 12 13:23:50 crc kubenswrapper[4580]: I0112 13:23:50.514160 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 12 13:23:50 crc kubenswrapper[4580]: I0112 13:23:50.516500 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 12 13:23:50 crc kubenswrapper[4580]: I0112 13:23:50.977733 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 12 13:23:51 crc kubenswrapper[4580]: I0112 13:23:51.009256 4580 generic.go:334] "Generic (PLEG): container finished" podID="41c144d2-91b1-4672-84a8-5dc673ac910f" containerID="c056a2983242c19261c5b4727a70c1d136dbbff19617d6e789d470951eb6ea48" exitCode=0 Jan 12 13:23:51 crc kubenswrapper[4580]: I0112 13:23:51.010265 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 12 13:23:51 crc kubenswrapper[4580]: I0112 13:23:51.010640 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"41c144d2-91b1-4672-84a8-5dc673ac910f","Type":"ContainerDied","Data":"c056a2983242c19261c5b4727a70c1d136dbbff19617d6e789d470951eb6ea48"} Jan 12 13:23:51 crc kubenswrapper[4580]: I0112 13:23:51.010670 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"41c144d2-91b1-4672-84a8-5dc673ac910f","Type":"ContainerDied","Data":"4710f3c01273b7f0f0744ece2dfa2f2079464ac87a68e0ac2e93c2ca915f88f0"} Jan 12 13:23:51 crc kubenswrapper[4580]: I0112 13:23:51.010688 4580 scope.go:117] "RemoveContainer" containerID="c056a2983242c19261c5b4727a70c1d136dbbff19617d6e789d470951eb6ea48" Jan 12 13:23:51 crc kubenswrapper[4580]: I0112 13:23:51.034258 4580 scope.go:117] "RemoveContainer" containerID="dcefd4e249d25243f31e309413125b3b8f17e328541d61f60629030f91ab205e" Jan 12 13:23:51 crc kubenswrapper[4580]: I0112 13:23:51.050942 4580 scope.go:117] "RemoveContainer" containerID="c056a2983242c19261c5b4727a70c1d136dbbff19617d6e789d470951eb6ea48" Jan 12 13:23:51 crc kubenswrapper[4580]: E0112 13:23:51.051297 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c056a2983242c19261c5b4727a70c1d136dbbff19617d6e789d470951eb6ea48\": container with ID starting with c056a2983242c19261c5b4727a70c1d136dbbff19617d6e789d470951eb6ea48 not found: ID does not exist" containerID="c056a2983242c19261c5b4727a70c1d136dbbff19617d6e789d470951eb6ea48" Jan 12 13:23:51 crc kubenswrapper[4580]: I0112 13:23:51.051347 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c056a2983242c19261c5b4727a70c1d136dbbff19617d6e789d470951eb6ea48"} err="failed to get container status \"c056a2983242c19261c5b4727a70c1d136dbbff19617d6e789d470951eb6ea48\": rpc error: code = 
NotFound desc = could not find container \"c056a2983242c19261c5b4727a70c1d136dbbff19617d6e789d470951eb6ea48\": container with ID starting with c056a2983242c19261c5b4727a70c1d136dbbff19617d6e789d470951eb6ea48 not found: ID does not exist" Jan 12 13:23:51 crc kubenswrapper[4580]: I0112 13:23:51.051379 4580 scope.go:117] "RemoveContainer" containerID="dcefd4e249d25243f31e309413125b3b8f17e328541d61f60629030f91ab205e" Jan 12 13:23:51 crc kubenswrapper[4580]: E0112 13:23:51.051745 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dcefd4e249d25243f31e309413125b3b8f17e328541d61f60629030f91ab205e\": container with ID starting with dcefd4e249d25243f31e309413125b3b8f17e328541d61f60629030f91ab205e not found: ID does not exist" containerID="dcefd4e249d25243f31e309413125b3b8f17e328541d61f60629030f91ab205e" Jan 12 13:23:51 crc kubenswrapper[4580]: I0112 13:23:51.051776 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dcefd4e249d25243f31e309413125b3b8f17e328541d61f60629030f91ab205e"} err="failed to get container status \"dcefd4e249d25243f31e309413125b3b8f17e328541d61f60629030f91ab205e\": rpc error: code = NotFound desc = could not find container \"dcefd4e249d25243f31e309413125b3b8f17e328541d61f60629030f91ab205e\": container with ID starting with dcefd4e249d25243f31e309413125b3b8f17e328541d61f60629030f91ab205e not found: ID does not exist" Jan 12 13:23:51 crc kubenswrapper[4580]: I0112 13:23:51.122265 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41c144d2-91b1-4672-84a8-5dc673ac910f-combined-ca-bundle\") pod \"41c144d2-91b1-4672-84a8-5dc673ac910f\" (UID: \"41c144d2-91b1-4672-84a8-5dc673ac910f\") " Jan 12 13:23:51 crc kubenswrapper[4580]: I0112 13:23:51.122483 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/41c144d2-91b1-4672-84a8-5dc673ac910f-config-data\") pod \"41c144d2-91b1-4672-84a8-5dc673ac910f\" (UID: \"41c144d2-91b1-4672-84a8-5dc673ac910f\") " Jan 12 13:23:51 crc kubenswrapper[4580]: I0112 13:23:51.122613 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lhbj\" (UniqueName: \"kubernetes.io/projected/41c144d2-91b1-4672-84a8-5dc673ac910f-kube-api-access-4lhbj\") pod \"41c144d2-91b1-4672-84a8-5dc673ac910f\" (UID: \"41c144d2-91b1-4672-84a8-5dc673ac910f\") " Jan 12 13:23:51 crc kubenswrapper[4580]: I0112 13:23:51.122674 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41c144d2-91b1-4672-84a8-5dc673ac910f-logs\") pod \"41c144d2-91b1-4672-84a8-5dc673ac910f\" (UID: \"41c144d2-91b1-4672-84a8-5dc673ac910f\") " Jan 12 13:23:51 crc kubenswrapper[4580]: I0112 13:23:51.123231 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41c144d2-91b1-4672-84a8-5dc673ac910f-logs" (OuterVolumeSpecName: "logs") pod "41c144d2-91b1-4672-84a8-5dc673ac910f" (UID: "41c144d2-91b1-4672-84a8-5dc673ac910f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 12 13:23:51 crc kubenswrapper[4580]: I0112 13:23:51.123556 4580 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41c144d2-91b1-4672-84a8-5dc673ac910f-logs\") on node \"crc\" DevicePath \"\"" Jan 12 13:23:51 crc kubenswrapper[4580]: I0112 13:23:51.131227 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41c144d2-91b1-4672-84a8-5dc673ac910f-kube-api-access-4lhbj" (OuterVolumeSpecName: "kube-api-access-4lhbj") pod "41c144d2-91b1-4672-84a8-5dc673ac910f" (UID: "41c144d2-91b1-4672-84a8-5dc673ac910f"). InnerVolumeSpecName "kube-api-access-4lhbj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:23:51 crc kubenswrapper[4580]: I0112 13:23:51.147453 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41c144d2-91b1-4672-84a8-5dc673ac910f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "41c144d2-91b1-4672-84a8-5dc673ac910f" (UID: "41c144d2-91b1-4672-84a8-5dc673ac910f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:23:51 crc kubenswrapper[4580]: I0112 13:23:51.148567 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41c144d2-91b1-4672-84a8-5dc673ac910f-config-data" (OuterVolumeSpecName: "config-data") pod "41c144d2-91b1-4672-84a8-5dc673ac910f" (UID: "41c144d2-91b1-4672-84a8-5dc673ac910f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:23:51 crc kubenswrapper[4580]: I0112 13:23:51.226235 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4lhbj\" (UniqueName: \"kubernetes.io/projected/41c144d2-91b1-4672-84a8-5dc673ac910f-kube-api-access-4lhbj\") on node \"crc\" DevicePath \"\"" Jan 12 13:23:51 crc kubenswrapper[4580]: I0112 13:23:51.226273 4580 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41c144d2-91b1-4672-84a8-5dc673ac910f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 12 13:23:51 crc kubenswrapper[4580]: I0112 13:23:51.226285 4580 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41c144d2-91b1-4672-84a8-5dc673ac910f-config-data\") on node \"crc\" DevicePath \"\"" Jan 12 13:23:51 crc kubenswrapper[4580]: I0112 13:23:51.345619 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 12 13:23:51 crc kubenswrapper[4580]: I0112 13:23:51.380243 4580 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 12 13:23:51 crc kubenswrapper[4580]: I0112 13:23:51.387272 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 12 13:23:51 crc kubenswrapper[4580]: I0112 13:23:51.392436 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 12 13:23:51 crc kubenswrapper[4580]: E0112 13:23:51.392962 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41c144d2-91b1-4672-84a8-5dc673ac910f" containerName="nova-api-api" Jan 12 13:23:51 crc kubenswrapper[4580]: I0112 13:23:51.393038 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="41c144d2-91b1-4672-84a8-5dc673ac910f" containerName="nova-api-api" Jan 12 13:23:51 crc kubenswrapper[4580]: E0112 13:23:51.393136 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41c144d2-91b1-4672-84a8-5dc673ac910f" containerName="nova-api-log" Jan 12 13:23:51 crc kubenswrapper[4580]: I0112 13:23:51.393205 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="41c144d2-91b1-4672-84a8-5dc673ac910f" containerName="nova-api-log" Jan 12 13:23:51 crc kubenswrapper[4580]: I0112 13:23:51.393471 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="41c144d2-91b1-4672-84a8-5dc673ac910f" containerName="nova-api-log" Jan 12 13:23:51 crc kubenswrapper[4580]: I0112 13:23:51.393547 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="41c144d2-91b1-4672-84a8-5dc673ac910f" containerName="nova-api-api" Jan 12 13:23:51 crc kubenswrapper[4580]: I0112 13:23:51.394712 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 12 13:23:51 crc kubenswrapper[4580]: I0112 13:23:51.396959 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 12 13:23:51 crc kubenswrapper[4580]: I0112 13:23:51.398210 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 12 13:23:51 crc kubenswrapper[4580]: I0112 13:23:51.535784 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf630a28-de8c-48b8-b562-f9a7cefc0a5d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"cf630a28-de8c-48b8-b562-f9a7cefc0a5d\") " pod="openstack/nova-api-0" Jan 12 13:23:51 crc kubenswrapper[4580]: I0112 13:23:51.535894 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf630a28-de8c-48b8-b562-f9a7cefc0a5d-config-data\") pod \"nova-api-0\" (UID: \"cf630a28-de8c-48b8-b562-f9a7cefc0a5d\") " pod="openstack/nova-api-0" Jan 12 13:23:51 crc kubenswrapper[4580]: I0112 13:23:51.536447 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf630a28-de8c-48b8-b562-f9a7cefc0a5d-logs\") pod \"nova-api-0\" (UID: \"cf630a28-de8c-48b8-b562-f9a7cefc0a5d\") " pod="openstack/nova-api-0" Jan 12 13:23:51 crc kubenswrapper[4580]: I0112 13:23:51.536489 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w7lg\" (UniqueName: \"kubernetes.io/projected/cf630a28-de8c-48b8-b562-f9a7cefc0a5d-kube-api-access-2w7lg\") pod \"nova-api-0\" (UID: \"cf630a28-de8c-48b8-b562-f9a7cefc0a5d\") " pod="openstack/nova-api-0" Jan 12 13:23:51 crc kubenswrapper[4580]: I0112 13:23:51.638297 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/cf630a28-de8c-48b8-b562-f9a7cefc0a5d-logs\") pod \"nova-api-0\" (UID: \"cf630a28-de8c-48b8-b562-f9a7cefc0a5d\") " pod="openstack/nova-api-0" Jan 12 13:23:51 crc kubenswrapper[4580]: I0112 13:23:51.638332 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2w7lg\" (UniqueName: \"kubernetes.io/projected/cf630a28-de8c-48b8-b562-f9a7cefc0a5d-kube-api-access-2w7lg\") pod \"nova-api-0\" (UID: \"cf630a28-de8c-48b8-b562-f9a7cefc0a5d\") " pod="openstack/nova-api-0" Jan 12 13:23:51 crc kubenswrapper[4580]: I0112 13:23:51.638397 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf630a28-de8c-48b8-b562-f9a7cefc0a5d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"cf630a28-de8c-48b8-b562-f9a7cefc0a5d\") " pod="openstack/nova-api-0" Jan 12 13:23:51 crc kubenswrapper[4580]: I0112 13:23:51.638423 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf630a28-de8c-48b8-b562-f9a7cefc0a5d-config-data\") pod \"nova-api-0\" (UID: \"cf630a28-de8c-48b8-b562-f9a7cefc0a5d\") " pod="openstack/nova-api-0" Jan 12 13:23:51 crc kubenswrapper[4580]: I0112 13:23:51.638660 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf630a28-de8c-48b8-b562-f9a7cefc0a5d-logs\") pod \"nova-api-0\" (UID: \"cf630a28-de8c-48b8-b562-f9a7cefc0a5d\") " pod="openstack/nova-api-0" Jan 12 13:23:51 crc kubenswrapper[4580]: I0112 13:23:51.643614 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf630a28-de8c-48b8-b562-f9a7cefc0a5d-config-data\") pod \"nova-api-0\" (UID: \"cf630a28-de8c-48b8-b562-f9a7cefc0a5d\") " pod="openstack/nova-api-0" Jan 12 13:23:51 crc kubenswrapper[4580]: I0112 13:23:51.650509 4580 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf630a28-de8c-48b8-b562-f9a7cefc0a5d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"cf630a28-de8c-48b8-b562-f9a7cefc0a5d\") " pod="openstack/nova-api-0" Jan 12 13:23:51 crc kubenswrapper[4580]: I0112 13:23:51.655804 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2w7lg\" (UniqueName: \"kubernetes.io/projected/cf630a28-de8c-48b8-b562-f9a7cefc0a5d-kube-api-access-2w7lg\") pod \"nova-api-0\" (UID: \"cf630a28-de8c-48b8-b562-f9a7cefc0a5d\") " pod="openstack/nova-api-0" Jan 12 13:23:51 crc kubenswrapper[4580]: I0112 13:23:51.709222 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 12 13:23:52 crc kubenswrapper[4580]: I0112 13:23:52.024970 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"591f8715-eb34-4966-80ae-34173d8546fa","Type":"ContainerStarted","Data":"8dab583fa465cbcb2b0a49b497942069c6eafea80fff0b984941b4e88c62a88a"} Jan 12 13:23:52 crc kubenswrapper[4580]: I0112 13:23:52.026011 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 12 13:23:52 crc kubenswrapper[4580]: I0112 13:23:52.050179 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.3039838169999998 podStartE2EDuration="7.050167883s" podCreationTimestamp="2026-01-12 13:23:45 +0000 UTC" firstStartedPulling="2026-01-12 13:23:46.107050218 +0000 UTC m=+1025.151268909" lastFinishedPulling="2026-01-12 13:23:50.853234286 +0000 UTC m=+1029.897452975" observedRunningTime="2026-01-12 13:23:52.044131164 +0000 UTC m=+1031.088349855" watchObservedRunningTime="2026-01-12 13:23:52.050167883 +0000 UTC m=+1031.094386573" Jan 12 13:23:52 crc kubenswrapper[4580]: I0112 13:23:52.145618 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-api-0"] Jan 12 13:23:53 crc kubenswrapper[4580]: I0112 13:23:53.035565 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cf630a28-de8c-48b8-b562-f9a7cefc0a5d","Type":"ContainerStarted","Data":"25af93b595945d9a5c3c495993e9367b98972c9a979f376f19d5fd669c3f2d7f"} Jan 12 13:23:53 crc kubenswrapper[4580]: I0112 13:23:53.035619 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cf630a28-de8c-48b8-b562-f9a7cefc0a5d","Type":"ContainerStarted","Data":"4fe630c30c1648c9801338b179aebd1961c6a27b4e504ae4a09781f6738291d9"} Jan 12 13:23:53 crc kubenswrapper[4580]: I0112 13:23:53.035633 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cf630a28-de8c-48b8-b562-f9a7cefc0a5d","Type":"ContainerStarted","Data":"c0749c6a19d843524b83806aa3ad76bb957f3a38ff7cdae0dad5782e107faff1"} Jan 12 13:23:53 crc kubenswrapper[4580]: I0112 13:23:53.189054 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 12 13:23:53 crc kubenswrapper[4580]: I0112 13:23:53.205496 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.205476668 podStartE2EDuration="2.205476668s" podCreationTimestamp="2026-01-12 13:23:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:23:53.051589349 +0000 UTC m=+1032.095808039" watchObservedRunningTime="2026-01-12 13:23:53.205476668 +0000 UTC m=+1032.249695358" Jan 12 13:23:53 crc kubenswrapper[4580]: I0112 13:23:53.291970 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41c144d2-91b1-4672-84a8-5dc673ac910f" path="/var/lib/kubelet/pods/41c144d2-91b1-4672-84a8-5dc673ac910f/volumes" Jan 12 13:23:55 crc kubenswrapper[4580]: I0112 13:23:55.514160 4580 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 12 13:23:55 crc kubenswrapper[4580]: I0112 13:23:55.514510 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 12 13:23:56 crc kubenswrapper[4580]: I0112 13:23:56.348035 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 12 13:23:56 crc kubenswrapper[4580]: I0112 13:23:56.375008 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 12 13:23:56 crc kubenswrapper[4580]: I0112 13:23:56.533282 4580 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="8937c039-ef58-4e1c-ac6e-719494fe812a" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 12 13:23:56 crc kubenswrapper[4580]: I0112 13:23:56.533296 4580 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="8937c039-ef58-4e1c-ac6e-719494fe812a" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 12 13:23:57 crc kubenswrapper[4580]: I0112 13:23:57.095939 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 12 13:24:01 crc kubenswrapper[4580]: I0112 13:24:01.709427 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 12 13:24:01 crc kubenswrapper[4580]: I0112 13:24:01.710059 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 12 13:24:02 crc kubenswrapper[4580]: I0112 13:24:02.791266 4580 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" 
podUID="cf630a28-de8c-48b8-b562-f9a7cefc0a5d" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.197:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 12 13:24:02 crc kubenswrapper[4580]: I0112 13:24:02.791308 4580 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="cf630a28-de8c-48b8-b562-f9a7cefc0a5d" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.197:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 12 13:24:05 crc kubenswrapper[4580]: I0112 13:24:05.518372 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 12 13:24:05 crc kubenswrapper[4580]: I0112 13:24:05.519421 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 12 13:24:05 crc kubenswrapper[4580]: I0112 13:24:05.528282 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 12 13:24:06 crc kubenswrapper[4580]: I0112 13:24:06.167774 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 12 13:24:09 crc kubenswrapper[4580]: I0112 13:24:09.059898 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 12 13:24:09 crc kubenswrapper[4580]: I0112 13:24:09.120575 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1aad4093-0475-4373-8949-a803f9ed01c5-config-data\") pod \"1aad4093-0475-4373-8949-a803f9ed01c5\" (UID: \"1aad4093-0475-4373-8949-a803f9ed01c5\") " Jan 12 13:24:09 crc kubenswrapper[4580]: I0112 13:24:09.120728 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1aad4093-0475-4373-8949-a803f9ed01c5-combined-ca-bundle\") pod \"1aad4093-0475-4373-8949-a803f9ed01c5\" (UID: \"1aad4093-0475-4373-8949-a803f9ed01c5\") " Jan 12 13:24:09 crc kubenswrapper[4580]: I0112 13:24:09.120759 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvsdn\" (UniqueName: \"kubernetes.io/projected/1aad4093-0475-4373-8949-a803f9ed01c5-kube-api-access-lvsdn\") pod \"1aad4093-0475-4373-8949-a803f9ed01c5\" (UID: \"1aad4093-0475-4373-8949-a803f9ed01c5\") " Jan 12 13:24:09 crc kubenswrapper[4580]: I0112 13:24:09.132385 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1aad4093-0475-4373-8949-a803f9ed01c5-kube-api-access-lvsdn" (OuterVolumeSpecName: "kube-api-access-lvsdn") pod "1aad4093-0475-4373-8949-a803f9ed01c5" (UID: "1aad4093-0475-4373-8949-a803f9ed01c5"). InnerVolumeSpecName "kube-api-access-lvsdn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:24:09 crc kubenswrapper[4580]: I0112 13:24:09.144473 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1aad4093-0475-4373-8949-a803f9ed01c5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1aad4093-0475-4373-8949-a803f9ed01c5" (UID: "1aad4093-0475-4373-8949-a803f9ed01c5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:24:09 crc kubenswrapper[4580]: I0112 13:24:09.150744 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1aad4093-0475-4373-8949-a803f9ed01c5-config-data" (OuterVolumeSpecName: "config-data") pod "1aad4093-0475-4373-8949-a803f9ed01c5" (UID: "1aad4093-0475-4373-8949-a803f9ed01c5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:24:09 crc kubenswrapper[4580]: I0112 13:24:09.190603 4580 generic.go:334] "Generic (PLEG): container finished" podID="1aad4093-0475-4373-8949-a803f9ed01c5" containerID="27003b60ffc63239792686b9c1dd69d3f1a4e699afc0b688509bf4ad17e76871" exitCode=137 Jan 12 13:24:09 crc kubenswrapper[4580]: I0112 13:24:09.190663 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1aad4093-0475-4373-8949-a803f9ed01c5","Type":"ContainerDied","Data":"27003b60ffc63239792686b9c1dd69d3f1a4e699afc0b688509bf4ad17e76871"} Jan 12 13:24:09 crc kubenswrapper[4580]: I0112 13:24:09.190715 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1aad4093-0475-4373-8949-a803f9ed01c5","Type":"ContainerDied","Data":"3f87d81de0f6897564281bd6f5cf925f18dac996cf3a844f7fc3391c9dc4faa0"} Jan 12 13:24:09 crc kubenswrapper[4580]: I0112 13:24:09.190740 4580 scope.go:117] "RemoveContainer" containerID="27003b60ffc63239792686b9c1dd69d3f1a4e699afc0b688509bf4ad17e76871" Jan 12 13:24:09 crc kubenswrapper[4580]: I0112 13:24:09.190744 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 12 13:24:09 crc kubenswrapper[4580]: I0112 13:24:09.214385 4580 scope.go:117] "RemoveContainer" containerID="27003b60ffc63239792686b9c1dd69d3f1a4e699afc0b688509bf4ad17e76871" Jan 12 13:24:09 crc kubenswrapper[4580]: E0112 13:24:09.214700 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27003b60ffc63239792686b9c1dd69d3f1a4e699afc0b688509bf4ad17e76871\": container with ID starting with 27003b60ffc63239792686b9c1dd69d3f1a4e699afc0b688509bf4ad17e76871 not found: ID does not exist" containerID="27003b60ffc63239792686b9c1dd69d3f1a4e699afc0b688509bf4ad17e76871" Jan 12 13:24:09 crc kubenswrapper[4580]: I0112 13:24:09.214740 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27003b60ffc63239792686b9c1dd69d3f1a4e699afc0b688509bf4ad17e76871"} err="failed to get container status \"27003b60ffc63239792686b9c1dd69d3f1a4e699afc0b688509bf4ad17e76871\": rpc error: code = NotFound desc = could not find container \"27003b60ffc63239792686b9c1dd69d3f1a4e699afc0b688509bf4ad17e76871\": container with ID starting with 27003b60ffc63239792686b9c1dd69d3f1a4e699afc0b688509bf4ad17e76871 not found: ID does not exist" Jan 12 13:24:09 crc kubenswrapper[4580]: I0112 13:24:09.223861 4580 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1aad4093-0475-4373-8949-a803f9ed01c5-config-data\") on node \"crc\" DevicePath \"\"" Jan 12 13:24:09 crc kubenswrapper[4580]: I0112 13:24:09.223888 4580 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1aad4093-0475-4373-8949-a803f9ed01c5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 12 13:24:09 crc kubenswrapper[4580]: I0112 13:24:09.223970 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvsdn\" (UniqueName: 
\"kubernetes.io/projected/1aad4093-0475-4373-8949-a803f9ed01c5-kube-api-access-lvsdn\") on node \"crc\" DevicePath \"\"" Jan 12 13:24:09 crc kubenswrapper[4580]: I0112 13:24:09.233033 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 12 13:24:09 crc kubenswrapper[4580]: I0112 13:24:09.242027 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 12 13:24:09 crc kubenswrapper[4580]: I0112 13:24:09.251904 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 12 13:24:09 crc kubenswrapper[4580]: E0112 13:24:09.252362 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1aad4093-0475-4373-8949-a803f9ed01c5" containerName="nova-cell1-novncproxy-novncproxy" Jan 12 13:24:09 crc kubenswrapper[4580]: I0112 13:24:09.252383 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="1aad4093-0475-4373-8949-a803f9ed01c5" containerName="nova-cell1-novncproxy-novncproxy" Jan 12 13:24:09 crc kubenswrapper[4580]: I0112 13:24:09.252576 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="1aad4093-0475-4373-8949-a803f9ed01c5" containerName="nova-cell1-novncproxy-novncproxy" Jan 12 13:24:09 crc kubenswrapper[4580]: I0112 13:24:09.253246 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 12 13:24:09 crc kubenswrapper[4580]: I0112 13:24:09.255151 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 12 13:24:09 crc kubenswrapper[4580]: I0112 13:24:09.255325 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 12 13:24:09 crc kubenswrapper[4580]: I0112 13:24:09.256900 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 12 13:24:09 crc kubenswrapper[4580]: I0112 13:24:09.261187 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 12 13:24:09 crc kubenswrapper[4580]: I0112 13:24:09.290792 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1aad4093-0475-4373-8949-a803f9ed01c5" path="/var/lib/kubelet/pods/1aad4093-0475-4373-8949-a803f9ed01c5/volumes" Jan 12 13:24:09 crc kubenswrapper[4580]: I0112 13:24:09.325926 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8250a29f-e3f4-4a06-bda0-1bcd2cb9bc9f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"8250a29f-e3f4-4a06-bda0-1bcd2cb9bc9f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 12 13:24:09 crc kubenswrapper[4580]: I0112 13:24:09.326038 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8250a29f-e3f4-4a06-bda0-1bcd2cb9bc9f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"8250a29f-e3f4-4a06-bda0-1bcd2cb9bc9f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 12 13:24:09 crc kubenswrapper[4580]: I0112 13:24:09.326172 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/8250a29f-e3f4-4a06-bda0-1bcd2cb9bc9f-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8250a29f-e3f4-4a06-bda0-1bcd2cb9bc9f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 12 13:24:09 crc kubenswrapper[4580]: I0112 13:24:09.327088 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkmms\" (UniqueName: \"kubernetes.io/projected/8250a29f-e3f4-4a06-bda0-1bcd2cb9bc9f-kube-api-access-zkmms\") pod \"nova-cell1-novncproxy-0\" (UID: \"8250a29f-e3f4-4a06-bda0-1bcd2cb9bc9f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 12 13:24:09 crc kubenswrapper[4580]: I0112 13:24:09.327188 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/8250a29f-e3f4-4a06-bda0-1bcd2cb9bc9f-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8250a29f-e3f4-4a06-bda0-1bcd2cb9bc9f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 12 13:24:09 crc kubenswrapper[4580]: I0112 13:24:09.431925 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkmms\" (UniqueName: \"kubernetes.io/projected/8250a29f-e3f4-4a06-bda0-1bcd2cb9bc9f-kube-api-access-zkmms\") pod \"nova-cell1-novncproxy-0\" (UID: \"8250a29f-e3f4-4a06-bda0-1bcd2cb9bc9f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 12 13:24:09 crc kubenswrapper[4580]: I0112 13:24:09.432388 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/8250a29f-e3f4-4a06-bda0-1bcd2cb9bc9f-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8250a29f-e3f4-4a06-bda0-1bcd2cb9bc9f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 12 13:24:09 crc kubenswrapper[4580]: I0112 13:24:09.432468 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8250a29f-e3f4-4a06-bda0-1bcd2cb9bc9f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"8250a29f-e3f4-4a06-bda0-1bcd2cb9bc9f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 12 13:24:09 crc kubenswrapper[4580]: I0112 13:24:09.432591 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8250a29f-e3f4-4a06-bda0-1bcd2cb9bc9f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"8250a29f-e3f4-4a06-bda0-1bcd2cb9bc9f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 12 13:24:09 crc kubenswrapper[4580]: I0112 13:24:09.432651 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/8250a29f-e3f4-4a06-bda0-1bcd2cb9bc9f-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8250a29f-e3f4-4a06-bda0-1bcd2cb9bc9f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 12 13:24:09 crc kubenswrapper[4580]: I0112 13:24:09.437874 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8250a29f-e3f4-4a06-bda0-1bcd2cb9bc9f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"8250a29f-e3f4-4a06-bda0-1bcd2cb9bc9f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 12 13:24:09 crc kubenswrapper[4580]: I0112 13:24:09.437930 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/8250a29f-e3f4-4a06-bda0-1bcd2cb9bc9f-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8250a29f-e3f4-4a06-bda0-1bcd2cb9bc9f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 12 13:24:09 crc kubenswrapper[4580]: I0112 13:24:09.438054 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/8250a29f-e3f4-4a06-bda0-1bcd2cb9bc9f-vencrypt-tls-certs\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"8250a29f-e3f4-4a06-bda0-1bcd2cb9bc9f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 12 13:24:09 crc kubenswrapper[4580]: I0112 13:24:09.438205 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8250a29f-e3f4-4a06-bda0-1bcd2cb9bc9f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"8250a29f-e3f4-4a06-bda0-1bcd2cb9bc9f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 12 13:24:09 crc kubenswrapper[4580]: I0112 13:24:09.445116 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkmms\" (UniqueName: \"kubernetes.io/projected/8250a29f-e3f4-4a06-bda0-1bcd2cb9bc9f-kube-api-access-zkmms\") pod \"nova-cell1-novncproxy-0\" (UID: \"8250a29f-e3f4-4a06-bda0-1bcd2cb9bc9f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 12 13:24:09 crc kubenswrapper[4580]: I0112 13:24:09.567524 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 12 13:24:09 crc kubenswrapper[4580]: I0112 13:24:09.944650 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 12 13:24:10 crc kubenswrapper[4580]: I0112 13:24:10.203816 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"8250a29f-e3f4-4a06-bda0-1bcd2cb9bc9f","Type":"ContainerStarted","Data":"66360907cc556ae1b8265f2e70caab8525c574185731b1f002651fee8fa37483"} Jan 12 13:24:10 crc kubenswrapper[4580]: I0112 13:24:10.204207 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"8250a29f-e3f4-4a06-bda0-1bcd2cb9bc9f","Type":"ContainerStarted","Data":"24d05f272a3c7f425561625192c43a3bf2c0bb3fba24ab40cad81053a32617db"} Jan 12 13:24:10 crc kubenswrapper[4580]: I0112 13:24:10.226258 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" 
podStartSLOduration=1.226238489 podStartE2EDuration="1.226238489s" podCreationTimestamp="2026-01-12 13:24:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:24:10.221311677 +0000 UTC m=+1049.265530367" watchObservedRunningTime="2026-01-12 13:24:10.226238489 +0000 UTC m=+1049.270457179" Jan 12 13:24:11 crc kubenswrapper[4580]: I0112 13:24:11.712538 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 12 13:24:11 crc kubenswrapper[4580]: I0112 13:24:11.714502 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 12 13:24:11 crc kubenswrapper[4580]: I0112 13:24:11.714872 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 12 13:24:11 crc kubenswrapper[4580]: I0112 13:24:11.717287 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 12 13:24:12 crc kubenswrapper[4580]: I0112 13:24:12.220810 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 12 13:24:12 crc kubenswrapper[4580]: I0112 13:24:12.224914 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 12 13:24:12 crc kubenswrapper[4580]: I0112 13:24:12.363148 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-fcd6f8f8f-8cjg6"] Jan 12 13:24:12 crc kubenswrapper[4580]: I0112 13:24:12.364802 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fcd6f8f8f-8cjg6" Jan 12 13:24:12 crc kubenswrapper[4580]: I0112 13:24:12.371400 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fcd6f8f8f-8cjg6"] Jan 12 13:24:12 crc kubenswrapper[4580]: I0112 13:24:12.398668 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4aeeb6e1-3e0f-4a47-a133-ccfca235c552-dns-svc\") pod \"dnsmasq-dns-fcd6f8f8f-8cjg6\" (UID: \"4aeeb6e1-3e0f-4a47-a133-ccfca235c552\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-8cjg6" Jan 12 13:24:12 crc kubenswrapper[4580]: I0112 13:24:12.398701 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4aeeb6e1-3e0f-4a47-a133-ccfca235c552-config\") pod \"dnsmasq-dns-fcd6f8f8f-8cjg6\" (UID: \"4aeeb6e1-3e0f-4a47-a133-ccfca235c552\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-8cjg6" Jan 12 13:24:12 crc kubenswrapper[4580]: I0112 13:24:12.398734 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxmw5\" (UniqueName: \"kubernetes.io/projected/4aeeb6e1-3e0f-4a47-a133-ccfca235c552-kube-api-access-rxmw5\") pod \"dnsmasq-dns-fcd6f8f8f-8cjg6\" (UID: \"4aeeb6e1-3e0f-4a47-a133-ccfca235c552\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-8cjg6" Jan 12 13:24:12 crc kubenswrapper[4580]: I0112 13:24:12.398753 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4aeeb6e1-3e0f-4a47-a133-ccfca235c552-ovsdbserver-sb\") pod \"dnsmasq-dns-fcd6f8f8f-8cjg6\" (UID: \"4aeeb6e1-3e0f-4a47-a133-ccfca235c552\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-8cjg6" Jan 12 13:24:12 crc kubenswrapper[4580]: I0112 13:24:12.398784 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4aeeb6e1-3e0f-4a47-a133-ccfca235c552-ovsdbserver-nb\") pod \"dnsmasq-dns-fcd6f8f8f-8cjg6\" (UID: \"4aeeb6e1-3e0f-4a47-a133-ccfca235c552\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-8cjg6" Jan 12 13:24:12 crc kubenswrapper[4580]: I0112 13:24:12.399093 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4aeeb6e1-3e0f-4a47-a133-ccfca235c552-dns-swift-storage-0\") pod \"dnsmasq-dns-fcd6f8f8f-8cjg6\" (UID: \"4aeeb6e1-3e0f-4a47-a133-ccfca235c552\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-8cjg6" Jan 12 13:24:12 crc kubenswrapper[4580]: I0112 13:24:12.501125 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4aeeb6e1-3e0f-4a47-a133-ccfca235c552-dns-swift-storage-0\") pod \"dnsmasq-dns-fcd6f8f8f-8cjg6\" (UID: \"4aeeb6e1-3e0f-4a47-a133-ccfca235c552\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-8cjg6" Jan 12 13:24:12 crc kubenswrapper[4580]: I0112 13:24:12.501227 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4aeeb6e1-3e0f-4a47-a133-ccfca235c552-dns-svc\") pod \"dnsmasq-dns-fcd6f8f8f-8cjg6\" (UID: \"4aeeb6e1-3e0f-4a47-a133-ccfca235c552\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-8cjg6" Jan 12 13:24:12 crc kubenswrapper[4580]: I0112 13:24:12.501256 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4aeeb6e1-3e0f-4a47-a133-ccfca235c552-config\") pod \"dnsmasq-dns-fcd6f8f8f-8cjg6\" (UID: \"4aeeb6e1-3e0f-4a47-a133-ccfca235c552\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-8cjg6" Jan 12 13:24:12 crc kubenswrapper[4580]: I0112 13:24:12.501290 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxmw5\" (UniqueName: 
\"kubernetes.io/projected/4aeeb6e1-3e0f-4a47-a133-ccfca235c552-kube-api-access-rxmw5\") pod \"dnsmasq-dns-fcd6f8f8f-8cjg6\" (UID: \"4aeeb6e1-3e0f-4a47-a133-ccfca235c552\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-8cjg6" Jan 12 13:24:12 crc kubenswrapper[4580]: I0112 13:24:12.501312 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4aeeb6e1-3e0f-4a47-a133-ccfca235c552-ovsdbserver-sb\") pod \"dnsmasq-dns-fcd6f8f8f-8cjg6\" (UID: \"4aeeb6e1-3e0f-4a47-a133-ccfca235c552\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-8cjg6" Jan 12 13:24:12 crc kubenswrapper[4580]: I0112 13:24:12.501351 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4aeeb6e1-3e0f-4a47-a133-ccfca235c552-ovsdbserver-nb\") pod \"dnsmasq-dns-fcd6f8f8f-8cjg6\" (UID: \"4aeeb6e1-3e0f-4a47-a133-ccfca235c552\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-8cjg6" Jan 12 13:24:12 crc kubenswrapper[4580]: I0112 13:24:12.502474 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4aeeb6e1-3e0f-4a47-a133-ccfca235c552-ovsdbserver-nb\") pod \"dnsmasq-dns-fcd6f8f8f-8cjg6\" (UID: \"4aeeb6e1-3e0f-4a47-a133-ccfca235c552\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-8cjg6" Jan 12 13:24:12 crc kubenswrapper[4580]: I0112 13:24:12.502593 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4aeeb6e1-3e0f-4a47-a133-ccfca235c552-dns-swift-storage-0\") pod \"dnsmasq-dns-fcd6f8f8f-8cjg6\" (UID: \"4aeeb6e1-3e0f-4a47-a133-ccfca235c552\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-8cjg6" Jan 12 13:24:12 crc kubenswrapper[4580]: I0112 13:24:12.502592 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4aeeb6e1-3e0f-4a47-a133-ccfca235c552-config\") pod 
\"dnsmasq-dns-fcd6f8f8f-8cjg6\" (UID: \"4aeeb6e1-3e0f-4a47-a133-ccfca235c552\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-8cjg6" Jan 12 13:24:12 crc kubenswrapper[4580]: I0112 13:24:12.502592 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4aeeb6e1-3e0f-4a47-a133-ccfca235c552-ovsdbserver-sb\") pod \"dnsmasq-dns-fcd6f8f8f-8cjg6\" (UID: \"4aeeb6e1-3e0f-4a47-a133-ccfca235c552\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-8cjg6" Jan 12 13:24:12 crc kubenswrapper[4580]: I0112 13:24:12.502676 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4aeeb6e1-3e0f-4a47-a133-ccfca235c552-dns-svc\") pod \"dnsmasq-dns-fcd6f8f8f-8cjg6\" (UID: \"4aeeb6e1-3e0f-4a47-a133-ccfca235c552\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-8cjg6" Jan 12 13:24:12 crc kubenswrapper[4580]: I0112 13:24:12.534927 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxmw5\" (UniqueName: \"kubernetes.io/projected/4aeeb6e1-3e0f-4a47-a133-ccfca235c552-kube-api-access-rxmw5\") pod \"dnsmasq-dns-fcd6f8f8f-8cjg6\" (UID: \"4aeeb6e1-3e0f-4a47-a133-ccfca235c552\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-8cjg6" Jan 12 13:24:12 crc kubenswrapper[4580]: I0112 13:24:12.694117 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fcd6f8f8f-8cjg6" Jan 12 13:24:13 crc kubenswrapper[4580]: I0112 13:24:13.138587 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fcd6f8f8f-8cjg6"] Jan 12 13:24:13 crc kubenswrapper[4580]: I0112 13:24:13.238704 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcd6f8f8f-8cjg6" event={"ID":"4aeeb6e1-3e0f-4a47-a133-ccfca235c552","Type":"ContainerStarted","Data":"341d2eebd19e54cd89cba85e1c7f1915b631b8f684fc296937e8e510b4f583e2"} Jan 12 13:24:14 crc kubenswrapper[4580]: I0112 13:24:14.080879 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 12 13:24:14 crc kubenswrapper[4580]: I0112 13:24:14.081909 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="591f8715-eb34-4966-80ae-34173d8546fa" containerName="sg-core" containerID="cri-o://2c000bf9ad456e0f3ba735f28d6a47c9e35186bb642f3ff13e42c7f6a08e6f3f" gracePeriod=30 Jan 12 13:24:14 crc kubenswrapper[4580]: I0112 13:24:14.082032 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="591f8715-eb34-4966-80ae-34173d8546fa" containerName="proxy-httpd" containerID="cri-o://8dab583fa465cbcb2b0a49b497942069c6eafea80fff0b984941b4e88c62a88a" gracePeriod=30 Jan 12 13:24:14 crc kubenswrapper[4580]: I0112 13:24:14.082090 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="591f8715-eb34-4966-80ae-34173d8546fa" containerName="ceilometer-notification-agent" containerID="cri-o://fa85fa209793d02491941e50e9d87ae514e1929365dc475b7c5317ba46a3f31b" gracePeriod=30 Jan 12 13:24:14 crc kubenswrapper[4580]: I0112 13:24:14.081868 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="591f8715-eb34-4966-80ae-34173d8546fa" containerName="ceilometer-central-agent" 
containerID="cri-o://705e8be649095a1cc901917b791590bade11bd048cdfc7ac9c22d877af13f1ad" gracePeriod=30 Jan 12 13:24:14 crc kubenswrapper[4580]: I0112 13:24:14.095795 4580 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="591f8715-eb34-4966-80ae-34173d8546fa" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.195:3000/\": EOF" Jan 12 13:24:14 crc kubenswrapper[4580]: I0112 13:24:14.252647 4580 generic.go:334] "Generic (PLEG): container finished" podID="591f8715-eb34-4966-80ae-34173d8546fa" containerID="2c000bf9ad456e0f3ba735f28d6a47c9e35186bb642f3ff13e42c7f6a08e6f3f" exitCode=2 Jan 12 13:24:14 crc kubenswrapper[4580]: I0112 13:24:14.252718 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"591f8715-eb34-4966-80ae-34173d8546fa","Type":"ContainerDied","Data":"2c000bf9ad456e0f3ba735f28d6a47c9e35186bb642f3ff13e42c7f6a08e6f3f"} Jan 12 13:24:14 crc kubenswrapper[4580]: I0112 13:24:14.254129 4580 generic.go:334] "Generic (PLEG): container finished" podID="4aeeb6e1-3e0f-4a47-a133-ccfca235c552" containerID="37960b5f7a5a2578165f5386ffefdbf2eeec6a4afe95a836d83a0705f3249c3f" exitCode=0 Jan 12 13:24:14 crc kubenswrapper[4580]: I0112 13:24:14.255373 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcd6f8f8f-8cjg6" event={"ID":"4aeeb6e1-3e0f-4a47-a133-ccfca235c552","Type":"ContainerDied","Data":"37960b5f7a5a2578165f5386ffefdbf2eeec6a4afe95a836d83a0705f3249c3f"} Jan 12 13:24:14 crc kubenswrapper[4580]: I0112 13:24:14.458147 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 12 13:24:14 crc kubenswrapper[4580]: I0112 13:24:14.568430 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 12 13:24:15 crc kubenswrapper[4580]: I0112 13:24:15.264860 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcd6f8f8f-8cjg6" 
event={"ID":"4aeeb6e1-3e0f-4a47-a133-ccfca235c552","Type":"ContainerStarted","Data":"9abd965d66b669c0dac9e43e6446eaf3e2aacfc368d88dda2e559275cd117740"} Jan 12 13:24:15 crc kubenswrapper[4580]: I0112 13:24:15.265010 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-fcd6f8f8f-8cjg6" Jan 12 13:24:15 crc kubenswrapper[4580]: I0112 13:24:15.267560 4580 generic.go:334] "Generic (PLEG): container finished" podID="591f8715-eb34-4966-80ae-34173d8546fa" containerID="8dab583fa465cbcb2b0a49b497942069c6eafea80fff0b984941b4e88c62a88a" exitCode=0 Jan 12 13:24:15 crc kubenswrapper[4580]: I0112 13:24:15.267592 4580 generic.go:334] "Generic (PLEG): container finished" podID="591f8715-eb34-4966-80ae-34173d8546fa" containerID="705e8be649095a1cc901917b791590bade11bd048cdfc7ac9c22d877af13f1ad" exitCode=0 Jan 12 13:24:15 crc kubenswrapper[4580]: I0112 13:24:15.267624 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"591f8715-eb34-4966-80ae-34173d8546fa","Type":"ContainerDied","Data":"8dab583fa465cbcb2b0a49b497942069c6eafea80fff0b984941b4e88c62a88a"} Jan 12 13:24:15 crc kubenswrapper[4580]: I0112 13:24:15.267670 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"591f8715-eb34-4966-80ae-34173d8546fa","Type":"ContainerDied","Data":"705e8be649095a1cc901917b791590bade11bd048cdfc7ac9c22d877af13f1ad"} Jan 12 13:24:15 crc kubenswrapper[4580]: I0112 13:24:15.267764 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="cf630a28-de8c-48b8-b562-f9a7cefc0a5d" containerName="nova-api-log" containerID="cri-o://4fe630c30c1648c9801338b179aebd1961c6a27b4e504ae4a09781f6738291d9" gracePeriod=30 Jan 12 13:24:15 crc kubenswrapper[4580]: I0112 13:24:15.267845 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="cf630a28-de8c-48b8-b562-f9a7cefc0a5d" 
containerName="nova-api-api" containerID="cri-o://25af93b595945d9a5c3c495993e9367b98972c9a979f376f19d5fd669c3f2d7f" gracePeriod=30 Jan 12 13:24:15 crc kubenswrapper[4580]: I0112 13:24:15.293170 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-fcd6f8f8f-8cjg6" podStartSLOduration=3.2931556459999998 podStartE2EDuration="3.293155646s" podCreationTimestamp="2026-01-12 13:24:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:24:15.290163852 +0000 UTC m=+1054.334382541" watchObservedRunningTime="2026-01-12 13:24:15.293155646 +0000 UTC m=+1054.337374337" Jan 12 13:24:15 crc kubenswrapper[4580]: I0112 13:24:15.525334 4580 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="591f8715-eb34-4966-80ae-34173d8546fa" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.195:3000/\": dial tcp 10.217.0.195:3000: connect: connection refused" Jan 12 13:24:16 crc kubenswrapper[4580]: I0112 13:24:16.275914 4580 generic.go:334] "Generic (PLEG): container finished" podID="cf630a28-de8c-48b8-b562-f9a7cefc0a5d" containerID="4fe630c30c1648c9801338b179aebd1961c6a27b4e504ae4a09781f6738291d9" exitCode=143 Jan 12 13:24:16 crc kubenswrapper[4580]: I0112 13:24:16.275999 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cf630a28-de8c-48b8-b562-f9a7cefc0a5d","Type":"ContainerDied","Data":"4fe630c30c1648c9801338b179aebd1961c6a27b4e504ae4a09781f6738291d9"} Jan 12 13:24:16 crc kubenswrapper[4580]: I0112 13:24:16.949234 4580 patch_prober.go:28] interesting pod/machine-config-daemon-hdz6l container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 12 13:24:16 crc kubenswrapper[4580]: 
I0112 13:24:16.949320 4580 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 12 13:24:16 crc kubenswrapper[4580]: I0112 13:24:16.949394 4580 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" Jan 12 13:24:16 crc kubenswrapper[4580]: I0112 13:24:16.950660 4580 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"62195f179f376ea4916eddf796027fa5a80271672d3171f47fa9237f1c01b2a4"} pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 12 13:24:16 crc kubenswrapper[4580]: I0112 13:24:16.950741 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" containerName="machine-config-daemon" containerID="cri-o://62195f179f376ea4916eddf796027fa5a80271672d3171f47fa9237f1c01b2a4" gracePeriod=600 Jan 12 13:24:17 crc kubenswrapper[4580]: I0112 13:24:17.292938 4580 generic.go:334] "Generic (PLEG): container finished" podID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" containerID="62195f179f376ea4916eddf796027fa5a80271672d3171f47fa9237f1c01b2a4" exitCode=0 Jan 12 13:24:17 crc kubenswrapper[4580]: I0112 13:24:17.293283 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" event={"ID":"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b","Type":"ContainerDied","Data":"62195f179f376ea4916eddf796027fa5a80271672d3171f47fa9237f1c01b2a4"} Jan 12 13:24:17 crc 
kubenswrapper[4580]: I0112 13:24:17.293331 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" event={"ID":"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b","Type":"ContainerStarted","Data":"0804525f520200773e09490adee4c80bb3967d1eb56f3e87d1a77a748cd87b06"} Jan 12 13:24:17 crc kubenswrapper[4580]: I0112 13:24:17.293350 4580 scope.go:117] "RemoveContainer" containerID="7850824be06012c20b6bb245ac92cc464dbe596b5ce9364073d6add3fc0a822e" Jan 12 13:24:18 crc kubenswrapper[4580]: I0112 13:24:18.320180 4580 generic.go:334] "Generic (PLEG): container finished" podID="591f8715-eb34-4966-80ae-34173d8546fa" containerID="fa85fa209793d02491941e50e9d87ae514e1929365dc475b7c5317ba46a3f31b" exitCode=0 Jan 12 13:24:18 crc kubenswrapper[4580]: I0112 13:24:18.320504 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"591f8715-eb34-4966-80ae-34173d8546fa","Type":"ContainerDied","Data":"fa85fa209793d02491941e50e9d87ae514e1929365dc475b7c5317ba46a3f31b"} Jan 12 13:24:18 crc kubenswrapper[4580]: I0112 13:24:18.548589 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 12 13:24:18 crc kubenswrapper[4580]: I0112 13:24:18.658960 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/591f8715-eb34-4966-80ae-34173d8546fa-log-httpd\") pod \"591f8715-eb34-4966-80ae-34173d8546fa\" (UID: \"591f8715-eb34-4966-80ae-34173d8546fa\") " Jan 12 13:24:18 crc kubenswrapper[4580]: I0112 13:24:18.659068 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/591f8715-eb34-4966-80ae-34173d8546fa-scripts\") pod \"591f8715-eb34-4966-80ae-34173d8546fa\" (UID: \"591f8715-eb34-4966-80ae-34173d8546fa\") " Jan 12 13:24:18 crc kubenswrapper[4580]: I0112 13:24:18.659158 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/591f8715-eb34-4966-80ae-34173d8546fa-run-httpd\") pod \"591f8715-eb34-4966-80ae-34173d8546fa\" (UID: \"591f8715-eb34-4966-80ae-34173d8546fa\") " Jan 12 13:24:18 crc kubenswrapper[4580]: I0112 13:24:18.659195 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntk67\" (UniqueName: \"kubernetes.io/projected/591f8715-eb34-4966-80ae-34173d8546fa-kube-api-access-ntk67\") pod \"591f8715-eb34-4966-80ae-34173d8546fa\" (UID: \"591f8715-eb34-4966-80ae-34173d8546fa\") " Jan 12 13:24:18 crc kubenswrapper[4580]: I0112 13:24:18.659223 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/591f8715-eb34-4966-80ae-34173d8546fa-config-data\") pod \"591f8715-eb34-4966-80ae-34173d8546fa\" (UID: \"591f8715-eb34-4966-80ae-34173d8546fa\") " Jan 12 13:24:18 crc kubenswrapper[4580]: I0112 13:24:18.659350 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/591f8715-eb34-4966-80ae-34173d8546fa-combined-ca-bundle\") pod \"591f8715-eb34-4966-80ae-34173d8546fa\" (UID: \"591f8715-eb34-4966-80ae-34173d8546fa\") " Jan 12 13:24:18 crc kubenswrapper[4580]: I0112 13:24:18.659374 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/591f8715-eb34-4966-80ae-34173d8546fa-sg-core-conf-yaml\") pod \"591f8715-eb34-4966-80ae-34173d8546fa\" (UID: \"591f8715-eb34-4966-80ae-34173d8546fa\") " Jan 12 13:24:18 crc kubenswrapper[4580]: I0112 13:24:18.659471 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/591f8715-eb34-4966-80ae-34173d8546fa-ceilometer-tls-certs\") pod \"591f8715-eb34-4966-80ae-34173d8546fa\" (UID: \"591f8715-eb34-4966-80ae-34173d8546fa\") " Jan 12 13:24:18 crc kubenswrapper[4580]: I0112 13:24:18.660295 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/591f8715-eb34-4966-80ae-34173d8546fa-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "591f8715-eb34-4966-80ae-34173d8546fa" (UID: "591f8715-eb34-4966-80ae-34173d8546fa"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 12 13:24:18 crc kubenswrapper[4580]: I0112 13:24:18.660542 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/591f8715-eb34-4966-80ae-34173d8546fa-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "591f8715-eb34-4966-80ae-34173d8546fa" (UID: "591f8715-eb34-4966-80ae-34173d8546fa"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 12 13:24:18 crc kubenswrapper[4580]: I0112 13:24:18.673478 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/591f8715-eb34-4966-80ae-34173d8546fa-scripts" (OuterVolumeSpecName: "scripts") pod "591f8715-eb34-4966-80ae-34173d8546fa" (UID: "591f8715-eb34-4966-80ae-34173d8546fa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:24:18 crc kubenswrapper[4580]: I0112 13:24:18.699278 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/591f8715-eb34-4966-80ae-34173d8546fa-kube-api-access-ntk67" (OuterVolumeSpecName: "kube-api-access-ntk67") pod "591f8715-eb34-4966-80ae-34173d8546fa" (UID: "591f8715-eb34-4966-80ae-34173d8546fa"). InnerVolumeSpecName "kube-api-access-ntk67". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:24:18 crc kubenswrapper[4580]: I0112 13:24:18.704723 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/591f8715-eb34-4966-80ae-34173d8546fa-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "591f8715-eb34-4966-80ae-34173d8546fa" (UID: "591f8715-eb34-4966-80ae-34173d8546fa"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:24:18 crc kubenswrapper[4580]: I0112 13:24:18.726267 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/591f8715-eb34-4966-80ae-34173d8546fa-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "591f8715-eb34-4966-80ae-34173d8546fa" (UID: "591f8715-eb34-4966-80ae-34173d8546fa"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:24:18 crc kubenswrapper[4580]: I0112 13:24:18.760790 4580 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/591f8715-eb34-4966-80ae-34173d8546fa-scripts\") on node \"crc\" DevicePath \"\"" Jan 12 13:24:18 crc kubenswrapper[4580]: I0112 13:24:18.760809 4580 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/591f8715-eb34-4966-80ae-34173d8546fa-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 12 13:24:18 crc kubenswrapper[4580]: I0112 13:24:18.760820 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntk67\" (UniqueName: \"kubernetes.io/projected/591f8715-eb34-4966-80ae-34173d8546fa-kube-api-access-ntk67\") on node \"crc\" DevicePath \"\"" Jan 12 13:24:18 crc kubenswrapper[4580]: I0112 13:24:18.760829 4580 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/591f8715-eb34-4966-80ae-34173d8546fa-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 12 13:24:18 crc kubenswrapper[4580]: I0112 13:24:18.760837 4580 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/591f8715-eb34-4966-80ae-34173d8546fa-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 12 13:24:18 crc kubenswrapper[4580]: I0112 13:24:18.760845 4580 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/591f8715-eb34-4966-80ae-34173d8546fa-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 12 13:24:18 crc kubenswrapper[4580]: I0112 13:24:18.764803 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/591f8715-eb34-4966-80ae-34173d8546fa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "591f8715-eb34-4966-80ae-34173d8546fa" (UID: 
"591f8715-eb34-4966-80ae-34173d8546fa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:24:18 crc kubenswrapper[4580]: I0112 13:24:18.776227 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/591f8715-eb34-4966-80ae-34173d8546fa-config-data" (OuterVolumeSpecName: "config-data") pod "591f8715-eb34-4966-80ae-34173d8546fa" (UID: "591f8715-eb34-4966-80ae-34173d8546fa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:24:18 crc kubenswrapper[4580]: I0112 13:24:18.784088 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 12 13:24:18 crc kubenswrapper[4580]: I0112 13:24:18.862261 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf630a28-de8c-48b8-b562-f9a7cefc0a5d-logs\") pod \"cf630a28-de8c-48b8-b562-f9a7cefc0a5d\" (UID: \"cf630a28-de8c-48b8-b562-f9a7cefc0a5d\") " Jan 12 13:24:18 crc kubenswrapper[4580]: I0112 13:24:18.862704 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf630a28-de8c-48b8-b562-f9a7cefc0a5d-logs" (OuterVolumeSpecName: "logs") pod "cf630a28-de8c-48b8-b562-f9a7cefc0a5d" (UID: "cf630a28-de8c-48b8-b562-f9a7cefc0a5d"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 12 13:24:18 crc kubenswrapper[4580]: I0112 13:24:18.862962 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w7lg\" (UniqueName: \"kubernetes.io/projected/cf630a28-de8c-48b8-b562-f9a7cefc0a5d-kube-api-access-2w7lg\") pod \"cf630a28-de8c-48b8-b562-f9a7cefc0a5d\" (UID: \"cf630a28-de8c-48b8-b562-f9a7cefc0a5d\") " Jan 12 13:24:18 crc kubenswrapper[4580]: I0112 13:24:18.863056 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf630a28-de8c-48b8-b562-f9a7cefc0a5d-combined-ca-bundle\") pod \"cf630a28-de8c-48b8-b562-f9a7cefc0a5d\" (UID: \"cf630a28-de8c-48b8-b562-f9a7cefc0a5d\") " Jan 12 13:24:18 crc kubenswrapper[4580]: I0112 13:24:18.863149 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf630a28-de8c-48b8-b562-f9a7cefc0a5d-config-data\") pod \"cf630a28-de8c-48b8-b562-f9a7cefc0a5d\" (UID: \"cf630a28-de8c-48b8-b562-f9a7cefc0a5d\") " Jan 12 13:24:18 crc kubenswrapper[4580]: I0112 13:24:18.863478 4580 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/591f8715-eb34-4966-80ae-34173d8546fa-config-data\") on node \"crc\" DevicePath \"\"" Jan 12 13:24:18 crc kubenswrapper[4580]: I0112 13:24:18.863498 4580 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf630a28-de8c-48b8-b562-f9a7cefc0a5d-logs\") on node \"crc\" DevicePath \"\"" Jan 12 13:24:18 crc kubenswrapper[4580]: I0112 13:24:18.863507 4580 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/591f8715-eb34-4966-80ae-34173d8546fa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 12 13:24:18 crc kubenswrapper[4580]: I0112 13:24:18.875198 4580 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf630a28-de8c-48b8-b562-f9a7cefc0a5d-kube-api-access-2w7lg" (OuterVolumeSpecName: "kube-api-access-2w7lg") pod "cf630a28-de8c-48b8-b562-f9a7cefc0a5d" (UID: "cf630a28-de8c-48b8-b562-f9a7cefc0a5d"). InnerVolumeSpecName "kube-api-access-2w7lg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:24:18 crc kubenswrapper[4580]: I0112 13:24:18.893971 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf630a28-de8c-48b8-b562-f9a7cefc0a5d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cf630a28-de8c-48b8-b562-f9a7cefc0a5d" (UID: "cf630a28-de8c-48b8-b562-f9a7cefc0a5d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:24:18 crc kubenswrapper[4580]: I0112 13:24:18.896505 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf630a28-de8c-48b8-b562-f9a7cefc0a5d-config-data" (OuterVolumeSpecName: "config-data") pod "cf630a28-de8c-48b8-b562-f9a7cefc0a5d" (UID: "cf630a28-de8c-48b8-b562-f9a7cefc0a5d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:24:18 crc kubenswrapper[4580]: I0112 13:24:18.965756 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w7lg\" (UniqueName: \"kubernetes.io/projected/cf630a28-de8c-48b8-b562-f9a7cefc0a5d-kube-api-access-2w7lg\") on node \"crc\" DevicePath \"\"" Jan 12 13:24:18 crc kubenswrapper[4580]: I0112 13:24:18.965789 4580 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf630a28-de8c-48b8-b562-f9a7cefc0a5d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 12 13:24:18 crc kubenswrapper[4580]: I0112 13:24:18.965805 4580 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf630a28-de8c-48b8-b562-f9a7cefc0a5d-config-data\") on node \"crc\" DevicePath \"\"" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.342544 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"591f8715-eb34-4966-80ae-34173d8546fa","Type":"ContainerDied","Data":"feb2931b2d6a3ac9e0cd9cb6652194e53eccaac85616fa7f73ff2fb2bfc41c5e"} Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.342598 4580 scope.go:117] "RemoveContainer" containerID="8dab583fa465cbcb2b0a49b497942069c6eafea80fff0b984941b4e88c62a88a" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.342691 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.347065 4580 generic.go:334] "Generic (PLEG): container finished" podID="cf630a28-de8c-48b8-b562-f9a7cefc0a5d" containerID="25af93b595945d9a5c3c495993e9367b98972c9a979f376f19d5fd669c3f2d7f" exitCode=0 Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.347091 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cf630a28-de8c-48b8-b562-f9a7cefc0a5d","Type":"ContainerDied","Data":"25af93b595945d9a5c3c495993e9367b98972c9a979f376f19d5fd669c3f2d7f"} Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.347122 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cf630a28-de8c-48b8-b562-f9a7cefc0a5d","Type":"ContainerDied","Data":"c0749c6a19d843524b83806aa3ad76bb957f3a38ff7cdae0dad5782e107faff1"} Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.347160 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.365426 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.379184 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.393273 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.394339 4580 scope.go:117] "RemoveContainer" containerID="2c000bf9ad456e0f3ba735f28d6a47c9e35186bb642f3ff13e42c7f6a08e6f3f" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.399263 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 12 13:24:19 crc kubenswrapper[4580]: E0112 13:24:19.399628 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="591f8715-eb34-4966-80ae-34173d8546fa" containerName="ceilometer-central-agent" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.399649 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="591f8715-eb34-4966-80ae-34173d8546fa" containerName="ceilometer-central-agent" Jan 12 13:24:19 crc kubenswrapper[4580]: E0112 13:24:19.399672 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="591f8715-eb34-4966-80ae-34173d8546fa" containerName="ceilometer-notification-agent" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.399679 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="591f8715-eb34-4966-80ae-34173d8546fa" containerName="ceilometer-notification-agent" Jan 12 13:24:19 crc kubenswrapper[4580]: E0112 13:24:19.399691 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="591f8715-eb34-4966-80ae-34173d8546fa" containerName="sg-core" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.399697 4580 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="591f8715-eb34-4966-80ae-34173d8546fa" containerName="sg-core" Jan 12 13:24:19 crc kubenswrapper[4580]: E0112 13:24:19.399710 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="591f8715-eb34-4966-80ae-34173d8546fa" containerName="proxy-httpd" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.399716 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="591f8715-eb34-4966-80ae-34173d8546fa" containerName="proxy-httpd" Jan 12 13:24:19 crc kubenswrapper[4580]: E0112 13:24:19.399729 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf630a28-de8c-48b8-b562-f9a7cefc0a5d" containerName="nova-api-api" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.399734 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf630a28-de8c-48b8-b562-f9a7cefc0a5d" containerName="nova-api-api" Jan 12 13:24:19 crc kubenswrapper[4580]: E0112 13:24:19.399748 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf630a28-de8c-48b8-b562-f9a7cefc0a5d" containerName="nova-api-log" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.399754 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf630a28-de8c-48b8-b562-f9a7cefc0a5d" containerName="nova-api-log" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.399936 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf630a28-de8c-48b8-b562-f9a7cefc0a5d" containerName="nova-api-api" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.399947 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="591f8715-eb34-4966-80ae-34173d8546fa" containerName="ceilometer-notification-agent" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.399959 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="591f8715-eb34-4966-80ae-34173d8546fa" containerName="ceilometer-central-agent" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.399968 4580 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="591f8715-eb34-4966-80ae-34173d8546fa" containerName="proxy-httpd" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.399980 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf630a28-de8c-48b8-b562-f9a7cefc0a5d" containerName="nova-api-log" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.399989 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="591f8715-eb34-4966-80ae-34173d8546fa" containerName="sg-core" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.403727 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.411912 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.412447 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.412678 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.413333 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.418646 4580 scope.go:117] "RemoveContainer" containerID="fa85fa209793d02491941e50e9d87ae514e1929365dc475b7c5317ba46a3f31b" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.454284 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.463146 4580 scope.go:117] "RemoveContainer" containerID="705e8be649095a1cc901917b791590bade11bd048cdfc7ac9c22d877af13f1ad" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.476469 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.478156 
4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8f9e6c7-d6cd-496f-b009-6bb336d25ebe-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d8f9e6c7-d6cd-496f-b009-6bb336d25ebe\") " pod="openstack/ceilometer-0" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.478203 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8f9e6c7-d6cd-496f-b009-6bb336d25ebe-config-data\") pod \"ceilometer-0\" (UID: \"d8f9e6c7-d6cd-496f-b009-6bb336d25ebe\") " pod="openstack/ceilometer-0" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.478224 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgv8g\" (UniqueName: \"kubernetes.io/projected/d8f9e6c7-d6cd-496f-b009-6bb336d25ebe-kube-api-access-vgv8g\") pod \"ceilometer-0\" (UID: \"d8f9e6c7-d6cd-496f-b009-6bb336d25ebe\") " pod="openstack/ceilometer-0" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.478578 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8f9e6c7-d6cd-496f-b009-6bb336d25ebe-run-httpd\") pod \"ceilometer-0\" (UID: \"d8f9e6c7-d6cd-496f-b009-6bb336d25ebe\") " pod="openstack/ceilometer-0" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.478632 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d8f9e6c7-d6cd-496f-b009-6bb336d25ebe-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d8f9e6c7-d6cd-496f-b009-6bb336d25ebe\") " pod="openstack/ceilometer-0" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.478652 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/d8f9e6c7-d6cd-496f-b009-6bb336d25ebe-scripts\") pod \"ceilometer-0\" (UID: \"d8f9e6c7-d6cd-496f-b009-6bb336d25ebe\") " pod="openstack/ceilometer-0" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.478777 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.478937 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8f9e6c7-d6cd-496f-b009-6bb336d25ebe-log-httpd\") pod \"ceilometer-0\" (UID: \"d8f9e6c7-d6cd-496f-b009-6bb336d25ebe\") " pod="openstack/ceilometer-0" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.478964 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8f9e6c7-d6cd-496f-b009-6bb336d25ebe-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d8f9e6c7-d6cd-496f-b009-6bb336d25ebe\") " pod="openstack/ceilometer-0" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.481180 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.481403 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.481718 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.497883 4580 scope.go:117] "RemoveContainer" containerID="25af93b595945d9a5c3c495993e9367b98972c9a979f376f19d5fd669c3f2d7f" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.501284 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.515313 4580 
scope.go:117] "RemoveContainer" containerID="4fe630c30c1648c9801338b179aebd1961c6a27b4e504ae4a09781f6738291d9" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.533439 4580 scope.go:117] "RemoveContainer" containerID="25af93b595945d9a5c3c495993e9367b98972c9a979f376f19d5fd669c3f2d7f" Jan 12 13:24:19 crc kubenswrapper[4580]: E0112 13:24:19.533764 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25af93b595945d9a5c3c495993e9367b98972c9a979f376f19d5fd669c3f2d7f\": container with ID starting with 25af93b595945d9a5c3c495993e9367b98972c9a979f376f19d5fd669c3f2d7f not found: ID does not exist" containerID="25af93b595945d9a5c3c495993e9367b98972c9a979f376f19d5fd669c3f2d7f" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.533801 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25af93b595945d9a5c3c495993e9367b98972c9a979f376f19d5fd669c3f2d7f"} err="failed to get container status \"25af93b595945d9a5c3c495993e9367b98972c9a979f376f19d5fd669c3f2d7f\": rpc error: code = NotFound desc = could not find container \"25af93b595945d9a5c3c495993e9367b98972c9a979f376f19d5fd669c3f2d7f\": container with ID starting with 25af93b595945d9a5c3c495993e9367b98972c9a979f376f19d5fd669c3f2d7f not found: ID does not exist" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.533826 4580 scope.go:117] "RemoveContainer" containerID="4fe630c30c1648c9801338b179aebd1961c6a27b4e504ae4a09781f6738291d9" Jan 12 13:24:19 crc kubenswrapper[4580]: E0112 13:24:19.534207 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fe630c30c1648c9801338b179aebd1961c6a27b4e504ae4a09781f6738291d9\": container with ID starting with 4fe630c30c1648c9801338b179aebd1961c6a27b4e504ae4a09781f6738291d9 not found: ID does not exist" containerID="4fe630c30c1648c9801338b179aebd1961c6a27b4e504ae4a09781f6738291d9" Jan 12 
13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.534227 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fe630c30c1648c9801338b179aebd1961c6a27b4e504ae4a09781f6738291d9"} err="failed to get container status \"4fe630c30c1648c9801338b179aebd1961c6a27b4e504ae4a09781f6738291d9\": rpc error: code = NotFound desc = could not find container \"4fe630c30c1648c9801338b179aebd1961c6a27b4e504ae4a09781f6738291d9\": container with ID starting with 4fe630c30c1648c9801338b179aebd1961c6a27b4e504ae4a09781f6738291d9 not found: ID does not exist" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.568503 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.584790 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8f9e6c7-d6cd-496f-b009-6bb336d25ebe-log-httpd\") pod \"ceilometer-0\" (UID: \"d8f9e6c7-d6cd-496f-b009-6bb336d25ebe\") " pod="openstack/ceilometer-0" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.584826 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8f9e6c7-d6cd-496f-b009-6bb336d25ebe-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d8f9e6c7-d6cd-496f-b009-6bb336d25ebe\") " pod="openstack/ceilometer-0" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.584891 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcf2b\" (UniqueName: \"kubernetes.io/projected/3485254f-60d1-42d1-90c5-655467ee5378-kube-api-access-vcf2b\") pod \"nova-api-0\" (UID: \"3485254f-60d1-42d1-90c5-655467ee5378\") " pod="openstack/nova-api-0" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.584926 4580 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3485254f-60d1-42d1-90c5-655467ee5378-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3485254f-60d1-42d1-90c5-655467ee5378\") " pod="openstack/nova-api-0" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.584974 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8f9e6c7-d6cd-496f-b009-6bb336d25ebe-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d8f9e6c7-d6cd-496f-b009-6bb336d25ebe\") " pod="openstack/ceilometer-0" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.584998 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8f9e6c7-d6cd-496f-b009-6bb336d25ebe-config-data\") pod \"ceilometer-0\" (UID: \"d8f9e6c7-d6cd-496f-b009-6bb336d25ebe\") " pod="openstack/ceilometer-0" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.585032 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3485254f-60d1-42d1-90c5-655467ee5378-public-tls-certs\") pod \"nova-api-0\" (UID: \"3485254f-60d1-42d1-90c5-655467ee5378\") " pod="openstack/nova-api-0" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.585052 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgv8g\" (UniqueName: \"kubernetes.io/projected/d8f9e6c7-d6cd-496f-b009-6bb336d25ebe-kube-api-access-vgv8g\") pod \"ceilometer-0\" (UID: \"d8f9e6c7-d6cd-496f-b009-6bb336d25ebe\") " pod="openstack/ceilometer-0" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.585072 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3485254f-60d1-42d1-90c5-655467ee5378-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3485254f-60d1-42d1-90c5-655467ee5378\") " pod="openstack/nova-api-0" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.585092 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3485254f-60d1-42d1-90c5-655467ee5378-config-data\") pod \"nova-api-0\" (UID: \"3485254f-60d1-42d1-90c5-655467ee5378\") " pod="openstack/nova-api-0" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.585358 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8f9e6c7-d6cd-496f-b009-6bb336d25ebe-run-httpd\") pod \"ceilometer-0\" (UID: \"d8f9e6c7-d6cd-496f-b009-6bb336d25ebe\") " pod="openstack/ceilometer-0" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.585454 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8f9e6c7-d6cd-496f-b009-6bb336d25ebe-log-httpd\") pod \"ceilometer-0\" (UID: \"d8f9e6c7-d6cd-496f-b009-6bb336d25ebe\") " pod="openstack/ceilometer-0" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.585854 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8f9e6c7-d6cd-496f-b009-6bb336d25ebe-run-httpd\") pod \"ceilometer-0\" (UID: \"d8f9e6c7-d6cd-496f-b009-6bb336d25ebe\") " pod="openstack/ceilometer-0" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.586614 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d8f9e6c7-d6cd-496f-b009-6bb336d25ebe-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d8f9e6c7-d6cd-496f-b009-6bb336d25ebe\") " pod="openstack/ceilometer-0" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.586715 4580 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8f9e6c7-d6cd-496f-b009-6bb336d25ebe-scripts\") pod \"ceilometer-0\" (UID: \"d8f9e6c7-d6cd-496f-b009-6bb336d25ebe\") " pod="openstack/ceilometer-0" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.586878 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3485254f-60d1-42d1-90c5-655467ee5378-logs\") pod \"nova-api-0\" (UID: \"3485254f-60d1-42d1-90c5-655467ee5378\") " pod="openstack/nova-api-0" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.592042 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8f9e6c7-d6cd-496f-b009-6bb336d25ebe-config-data\") pod \"ceilometer-0\" (UID: \"d8f9e6c7-d6cd-496f-b009-6bb336d25ebe\") " pod="openstack/ceilometer-0" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.592634 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8f9e6c7-d6cd-496f-b009-6bb336d25ebe-scripts\") pod \"ceilometer-0\" (UID: \"d8f9e6c7-d6cd-496f-b009-6bb336d25ebe\") " pod="openstack/ceilometer-0" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.592724 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.597797 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8f9e6c7-d6cd-496f-b009-6bb336d25ebe-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d8f9e6c7-d6cd-496f-b009-6bb336d25ebe\") " pod="openstack/ceilometer-0" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.600332 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" 
(UniqueName: \"kubernetes.io/secret/d8f9e6c7-d6cd-496f-b009-6bb336d25ebe-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d8f9e6c7-d6cd-496f-b009-6bb336d25ebe\") " pod="openstack/ceilometer-0" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.602494 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgv8g\" (UniqueName: \"kubernetes.io/projected/d8f9e6c7-d6cd-496f-b009-6bb336d25ebe-kube-api-access-vgv8g\") pod \"ceilometer-0\" (UID: \"d8f9e6c7-d6cd-496f-b009-6bb336d25ebe\") " pod="openstack/ceilometer-0" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.610829 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8f9e6c7-d6cd-496f-b009-6bb336d25ebe-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d8f9e6c7-d6cd-496f-b009-6bb336d25ebe\") " pod="openstack/ceilometer-0" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.687959 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcf2b\" (UniqueName: \"kubernetes.io/projected/3485254f-60d1-42d1-90c5-655467ee5378-kube-api-access-vcf2b\") pod \"nova-api-0\" (UID: \"3485254f-60d1-42d1-90c5-655467ee5378\") " pod="openstack/nova-api-0" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.688041 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3485254f-60d1-42d1-90c5-655467ee5378-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3485254f-60d1-42d1-90c5-655467ee5378\") " pod="openstack/nova-api-0" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.688082 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3485254f-60d1-42d1-90c5-655467ee5378-public-tls-certs\") pod \"nova-api-0\" (UID: \"3485254f-60d1-42d1-90c5-655467ee5378\") " pod="openstack/nova-api-0" Jan 12 
13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.688159 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3485254f-60d1-42d1-90c5-655467ee5378-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3485254f-60d1-42d1-90c5-655467ee5378\") " pod="openstack/nova-api-0" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.688184 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3485254f-60d1-42d1-90c5-655467ee5378-config-data\") pod \"nova-api-0\" (UID: \"3485254f-60d1-42d1-90c5-655467ee5378\") " pod="openstack/nova-api-0" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.688290 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3485254f-60d1-42d1-90c5-655467ee5378-logs\") pod \"nova-api-0\" (UID: \"3485254f-60d1-42d1-90c5-655467ee5378\") " pod="openstack/nova-api-0" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.688818 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3485254f-60d1-42d1-90c5-655467ee5378-logs\") pod \"nova-api-0\" (UID: \"3485254f-60d1-42d1-90c5-655467ee5378\") " pod="openstack/nova-api-0" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.691815 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3485254f-60d1-42d1-90c5-655467ee5378-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3485254f-60d1-42d1-90c5-655467ee5378\") " pod="openstack/nova-api-0" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.692643 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3485254f-60d1-42d1-90c5-655467ee5378-combined-ca-bundle\") pod \"nova-api-0\" (UID: 
\"3485254f-60d1-42d1-90c5-655467ee5378\") " pod="openstack/nova-api-0" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.692807 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3485254f-60d1-42d1-90c5-655467ee5378-public-tls-certs\") pod \"nova-api-0\" (UID: \"3485254f-60d1-42d1-90c5-655467ee5378\") " pod="openstack/nova-api-0" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.692999 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3485254f-60d1-42d1-90c5-655467ee5378-config-data\") pod \"nova-api-0\" (UID: \"3485254f-60d1-42d1-90c5-655467ee5378\") " pod="openstack/nova-api-0" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.703976 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcf2b\" (UniqueName: \"kubernetes.io/projected/3485254f-60d1-42d1-90c5-655467ee5378-kube-api-access-vcf2b\") pod \"nova-api-0\" (UID: \"3485254f-60d1-42d1-90c5-655467ee5378\") " pod="openstack/nova-api-0" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.737650 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 12 13:24:19 crc kubenswrapper[4580]: I0112 13:24:19.801549 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 12 13:24:20 crc kubenswrapper[4580]: W0112 13:24:20.164884 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd8f9e6c7_d6cd_496f_b009_6bb336d25ebe.slice/crio-4d7b82ab4ac91b6ab631e33f302e0a78f5369af24a80e7c37e5e0147f9b1f146 WatchSource:0}: Error finding container 4d7b82ab4ac91b6ab631e33f302e0a78f5369af24a80e7c37e5e0147f9b1f146: Status 404 returned error can't find the container with id 4d7b82ab4ac91b6ab631e33f302e0a78f5369af24a80e7c37e5e0147f9b1f146 Jan 12 13:24:20 crc kubenswrapper[4580]: I0112 13:24:20.166127 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 12 13:24:20 crc kubenswrapper[4580]: W0112 13:24:20.269154 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3485254f_60d1_42d1_90c5_655467ee5378.slice/crio-c0234d64e79b21f0a0c1a635c8183e6cc22b921855cbb6d56991332f401f4c1c WatchSource:0}: Error finding container c0234d64e79b21f0a0c1a635c8183e6cc22b921855cbb6d56991332f401f4c1c: Status 404 returned error can't find the container with id c0234d64e79b21f0a0c1a635c8183e6cc22b921855cbb6d56991332f401f4c1c Jan 12 13:24:20 crc kubenswrapper[4580]: I0112 13:24:20.271400 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 12 13:24:20 crc kubenswrapper[4580]: I0112 13:24:20.358235 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d8f9e6c7-d6cd-496f-b009-6bb336d25ebe","Type":"ContainerStarted","Data":"4d7b82ab4ac91b6ab631e33f302e0a78f5369af24a80e7c37e5e0147f9b1f146"} Jan 12 13:24:20 crc kubenswrapper[4580]: I0112 13:24:20.364429 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"3485254f-60d1-42d1-90c5-655467ee5378","Type":"ContainerStarted","Data":"c0234d64e79b21f0a0c1a635c8183e6cc22b921855cbb6d56991332f401f4c1c"} Jan 12 13:24:20 crc kubenswrapper[4580]: I0112 13:24:20.388213 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 12 13:24:20 crc kubenswrapper[4580]: I0112 13:24:20.564479 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-l68ww"] Jan 12 13:24:20 crc kubenswrapper[4580]: I0112 13:24:20.566016 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-l68ww" Jan 12 13:24:20 crc kubenswrapper[4580]: I0112 13:24:20.569550 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 12 13:24:20 crc kubenswrapper[4580]: I0112 13:24:20.569775 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 12 13:24:20 crc kubenswrapper[4580]: I0112 13:24:20.579062 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-l68ww"] Jan 12 13:24:20 crc kubenswrapper[4580]: I0112 13:24:20.611383 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvqsn\" (UniqueName: \"kubernetes.io/projected/f59d6742-fed6-4732-9cde-29dc74e47db0-kube-api-access-lvqsn\") pod \"nova-cell1-cell-mapping-l68ww\" (UID: \"f59d6742-fed6-4732-9cde-29dc74e47db0\") " pod="openstack/nova-cell1-cell-mapping-l68ww" Jan 12 13:24:20 crc kubenswrapper[4580]: I0112 13:24:20.611470 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f59d6742-fed6-4732-9cde-29dc74e47db0-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-l68ww\" (UID: \"f59d6742-fed6-4732-9cde-29dc74e47db0\") " 
pod="openstack/nova-cell1-cell-mapping-l68ww" Jan 12 13:24:20 crc kubenswrapper[4580]: I0112 13:24:20.611692 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f59d6742-fed6-4732-9cde-29dc74e47db0-config-data\") pod \"nova-cell1-cell-mapping-l68ww\" (UID: \"f59d6742-fed6-4732-9cde-29dc74e47db0\") " pod="openstack/nova-cell1-cell-mapping-l68ww" Jan 12 13:24:20 crc kubenswrapper[4580]: I0112 13:24:20.611759 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f59d6742-fed6-4732-9cde-29dc74e47db0-scripts\") pod \"nova-cell1-cell-mapping-l68ww\" (UID: \"f59d6742-fed6-4732-9cde-29dc74e47db0\") " pod="openstack/nova-cell1-cell-mapping-l68ww" Jan 12 13:24:20 crc kubenswrapper[4580]: I0112 13:24:20.713865 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f59d6742-fed6-4732-9cde-29dc74e47db0-scripts\") pod \"nova-cell1-cell-mapping-l68ww\" (UID: \"f59d6742-fed6-4732-9cde-29dc74e47db0\") " pod="openstack/nova-cell1-cell-mapping-l68ww" Jan 12 13:24:20 crc kubenswrapper[4580]: I0112 13:24:20.713959 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvqsn\" (UniqueName: \"kubernetes.io/projected/f59d6742-fed6-4732-9cde-29dc74e47db0-kube-api-access-lvqsn\") pod \"nova-cell1-cell-mapping-l68ww\" (UID: \"f59d6742-fed6-4732-9cde-29dc74e47db0\") " pod="openstack/nova-cell1-cell-mapping-l68ww" Jan 12 13:24:20 crc kubenswrapper[4580]: I0112 13:24:20.714003 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f59d6742-fed6-4732-9cde-29dc74e47db0-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-l68ww\" (UID: \"f59d6742-fed6-4732-9cde-29dc74e47db0\") " 
pod="openstack/nova-cell1-cell-mapping-l68ww" Jan 12 13:24:20 crc kubenswrapper[4580]: I0112 13:24:20.714096 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f59d6742-fed6-4732-9cde-29dc74e47db0-config-data\") pod \"nova-cell1-cell-mapping-l68ww\" (UID: \"f59d6742-fed6-4732-9cde-29dc74e47db0\") " pod="openstack/nova-cell1-cell-mapping-l68ww" Jan 12 13:24:20 crc kubenswrapper[4580]: I0112 13:24:20.718194 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f59d6742-fed6-4732-9cde-29dc74e47db0-config-data\") pod \"nova-cell1-cell-mapping-l68ww\" (UID: \"f59d6742-fed6-4732-9cde-29dc74e47db0\") " pod="openstack/nova-cell1-cell-mapping-l68ww" Jan 12 13:24:20 crc kubenswrapper[4580]: I0112 13:24:20.718419 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f59d6742-fed6-4732-9cde-29dc74e47db0-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-l68ww\" (UID: \"f59d6742-fed6-4732-9cde-29dc74e47db0\") " pod="openstack/nova-cell1-cell-mapping-l68ww" Jan 12 13:24:20 crc kubenswrapper[4580]: I0112 13:24:20.718553 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f59d6742-fed6-4732-9cde-29dc74e47db0-scripts\") pod \"nova-cell1-cell-mapping-l68ww\" (UID: \"f59d6742-fed6-4732-9cde-29dc74e47db0\") " pod="openstack/nova-cell1-cell-mapping-l68ww" Jan 12 13:24:20 crc kubenswrapper[4580]: I0112 13:24:20.727296 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvqsn\" (UniqueName: \"kubernetes.io/projected/f59d6742-fed6-4732-9cde-29dc74e47db0-kube-api-access-lvqsn\") pod \"nova-cell1-cell-mapping-l68ww\" (UID: \"f59d6742-fed6-4732-9cde-29dc74e47db0\") " pod="openstack/nova-cell1-cell-mapping-l68ww" Jan 12 13:24:20 crc kubenswrapper[4580]: I0112 
13:24:20.901037 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-l68ww" Jan 12 13:24:21 crc kubenswrapper[4580]: I0112 13:24:21.291438 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="591f8715-eb34-4966-80ae-34173d8546fa" path="/var/lib/kubelet/pods/591f8715-eb34-4966-80ae-34173d8546fa/volumes" Jan 12 13:24:21 crc kubenswrapper[4580]: I0112 13:24:21.292634 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf630a28-de8c-48b8-b562-f9a7cefc0a5d" path="/var/lib/kubelet/pods/cf630a28-de8c-48b8-b562-f9a7cefc0a5d/volumes" Jan 12 13:24:21 crc kubenswrapper[4580]: I0112 13:24:21.313088 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-l68ww"] Jan 12 13:24:21 crc kubenswrapper[4580]: W0112 13:24:21.323499 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf59d6742_fed6_4732_9cde_29dc74e47db0.slice/crio-1d0efc039ef6082ba90cf147257334cc227655423ab3bde644c7315ae503c0d9 WatchSource:0}: Error finding container 1d0efc039ef6082ba90cf147257334cc227655423ab3bde644c7315ae503c0d9: Status 404 returned error can't find the container with id 1d0efc039ef6082ba90cf147257334cc227655423ab3bde644c7315ae503c0d9 Jan 12 13:24:21 crc kubenswrapper[4580]: I0112 13:24:21.375040 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d8f9e6c7-d6cd-496f-b009-6bb336d25ebe","Type":"ContainerStarted","Data":"0ce0cce6b30b8867025ef9b0fe54cec01f98a2903b94af85a0286ad07b187549"} Jan 12 13:24:21 crc kubenswrapper[4580]: I0112 13:24:21.376070 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-l68ww" event={"ID":"f59d6742-fed6-4732-9cde-29dc74e47db0","Type":"ContainerStarted","Data":"1d0efc039ef6082ba90cf147257334cc227655423ab3bde644c7315ae503c0d9"} Jan 12 13:24:21 crc kubenswrapper[4580]: 
I0112 13:24:21.378267 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3485254f-60d1-42d1-90c5-655467ee5378","Type":"ContainerStarted","Data":"44a9ba1cb937eea66f9aa499ae831de6421b4c497ee27b517d087d6b74096ff3"} Jan 12 13:24:21 crc kubenswrapper[4580]: I0112 13:24:21.378293 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3485254f-60d1-42d1-90c5-655467ee5378","Type":"ContainerStarted","Data":"cf7bcace12f614e5ee0b6f77f22a9d5ad49524befff936efc45141f9bfca421e"} Jan 12 13:24:22 crc kubenswrapper[4580]: I0112 13:24:22.389474 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d8f9e6c7-d6cd-496f-b009-6bb336d25ebe","Type":"ContainerStarted","Data":"ef2339222c2ff5916d6425f362709f40ace922521204d48e2049c8ab5e6c7e16"} Jan 12 13:24:22 crc kubenswrapper[4580]: I0112 13:24:22.393283 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-l68ww" event={"ID":"f59d6742-fed6-4732-9cde-29dc74e47db0","Type":"ContainerStarted","Data":"8f7295b81ba9cf2ace0c7d22ce6150d4bf072b15a2eeaff67c2fa09828517fcc"} Jan 12 13:24:22 crc kubenswrapper[4580]: I0112 13:24:22.411154 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-l68ww" podStartSLOduration=2.411128091 podStartE2EDuration="2.411128091s" podCreationTimestamp="2026-01-12 13:24:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:24:22.408277663 +0000 UTC m=+1061.452496353" watchObservedRunningTime="2026-01-12 13:24:22.411128091 +0000 UTC m=+1061.455346782" Jan 12 13:24:22 crc kubenswrapper[4580]: I0112 13:24:22.426992 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.426969633 podStartE2EDuration="3.426969633s" 
podCreationTimestamp="2026-01-12 13:24:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:24:21.400325821 +0000 UTC m=+1060.444544511" watchObservedRunningTime="2026-01-12 13:24:22.426969633 +0000 UTC m=+1061.471188313" Jan 12 13:24:22 crc kubenswrapper[4580]: I0112 13:24:22.696486 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-fcd6f8f8f-8cjg6" Jan 12 13:24:22 crc kubenswrapper[4580]: I0112 13:24:22.751870 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-647df7b8c5-t89b4"] Jan 12 13:24:22 crc kubenswrapper[4580]: I0112 13:24:22.752087 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-647df7b8c5-t89b4" podUID="cb53c318-7cae-4f3e-8940-bb9760f21707" containerName="dnsmasq-dns" containerID="cri-o://13d9780c5ff88578fe9ac3dba0e3f71e80b43bb6d80b8f196e6ec90a77ca1ae3" gracePeriod=10 Jan 12 13:24:23 crc kubenswrapper[4580]: I0112 13:24:23.154001 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-647df7b8c5-t89b4" Jan 12 13:24:23 crc kubenswrapper[4580]: I0112 13:24:23.267742 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb53c318-7cae-4f3e-8940-bb9760f21707-config\") pod \"cb53c318-7cae-4f3e-8940-bb9760f21707\" (UID: \"cb53c318-7cae-4f3e-8940-bb9760f21707\") " Jan 12 13:24:23 crc kubenswrapper[4580]: I0112 13:24:23.267815 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjfl8\" (UniqueName: \"kubernetes.io/projected/cb53c318-7cae-4f3e-8940-bb9760f21707-kube-api-access-xjfl8\") pod \"cb53c318-7cae-4f3e-8940-bb9760f21707\" (UID: \"cb53c318-7cae-4f3e-8940-bb9760f21707\") " Jan 12 13:24:23 crc kubenswrapper[4580]: I0112 13:24:23.267884 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cb53c318-7cae-4f3e-8940-bb9760f21707-dns-swift-storage-0\") pod \"cb53c318-7cae-4f3e-8940-bb9760f21707\" (UID: \"cb53c318-7cae-4f3e-8940-bb9760f21707\") " Jan 12 13:24:23 crc kubenswrapper[4580]: I0112 13:24:23.267930 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb53c318-7cae-4f3e-8940-bb9760f21707-ovsdbserver-nb\") pod \"cb53c318-7cae-4f3e-8940-bb9760f21707\" (UID: \"cb53c318-7cae-4f3e-8940-bb9760f21707\") " Jan 12 13:24:23 crc kubenswrapper[4580]: I0112 13:24:23.268044 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb53c318-7cae-4f3e-8940-bb9760f21707-ovsdbserver-sb\") pod \"cb53c318-7cae-4f3e-8940-bb9760f21707\" (UID: \"cb53c318-7cae-4f3e-8940-bb9760f21707\") " Jan 12 13:24:23 crc kubenswrapper[4580]: I0112 13:24:23.268083 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb53c318-7cae-4f3e-8940-bb9760f21707-dns-svc\") pod \"cb53c318-7cae-4f3e-8940-bb9760f21707\" (UID: \"cb53c318-7cae-4f3e-8940-bb9760f21707\") " Jan 12 13:24:23 crc kubenswrapper[4580]: I0112 13:24:23.273467 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb53c318-7cae-4f3e-8940-bb9760f21707-kube-api-access-xjfl8" (OuterVolumeSpecName: "kube-api-access-xjfl8") pod "cb53c318-7cae-4f3e-8940-bb9760f21707" (UID: "cb53c318-7cae-4f3e-8940-bb9760f21707"). InnerVolumeSpecName "kube-api-access-xjfl8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:24:23 crc kubenswrapper[4580]: I0112 13:24:23.307908 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb53c318-7cae-4f3e-8940-bb9760f21707-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cb53c318-7cae-4f3e-8940-bb9760f21707" (UID: "cb53c318-7cae-4f3e-8940-bb9760f21707"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:24:23 crc kubenswrapper[4580]: I0112 13:24:23.309756 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb53c318-7cae-4f3e-8940-bb9760f21707-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "cb53c318-7cae-4f3e-8940-bb9760f21707" (UID: "cb53c318-7cae-4f3e-8940-bb9760f21707"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:24:23 crc kubenswrapper[4580]: I0112 13:24:23.313438 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb53c318-7cae-4f3e-8940-bb9760f21707-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cb53c318-7cae-4f3e-8940-bb9760f21707" (UID: "cb53c318-7cae-4f3e-8940-bb9760f21707"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:24:23 crc kubenswrapper[4580]: I0112 13:24:23.319010 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb53c318-7cae-4f3e-8940-bb9760f21707-config" (OuterVolumeSpecName: "config") pod "cb53c318-7cae-4f3e-8940-bb9760f21707" (UID: "cb53c318-7cae-4f3e-8940-bb9760f21707"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:24:23 crc kubenswrapper[4580]: I0112 13:24:23.324096 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb53c318-7cae-4f3e-8940-bb9760f21707-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cb53c318-7cae-4f3e-8940-bb9760f21707" (UID: "cb53c318-7cae-4f3e-8940-bb9760f21707"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:24:23 crc kubenswrapper[4580]: I0112 13:24:23.370623 4580 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cb53c318-7cae-4f3e-8940-bb9760f21707-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 12 13:24:23 crc kubenswrapper[4580]: I0112 13:24:23.370652 4580 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb53c318-7cae-4f3e-8940-bb9760f21707-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 12 13:24:23 crc kubenswrapper[4580]: I0112 13:24:23.370662 4580 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb53c318-7cae-4f3e-8940-bb9760f21707-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 12 13:24:23 crc kubenswrapper[4580]: I0112 13:24:23.370674 4580 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb53c318-7cae-4f3e-8940-bb9760f21707-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 12 13:24:23 
crc kubenswrapper[4580]: I0112 13:24:23.370685 4580 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb53c318-7cae-4f3e-8940-bb9760f21707-config\") on node \"crc\" DevicePath \"\"" Jan 12 13:24:23 crc kubenswrapper[4580]: I0112 13:24:23.370693 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjfl8\" (UniqueName: \"kubernetes.io/projected/cb53c318-7cae-4f3e-8940-bb9760f21707-kube-api-access-xjfl8\") on node \"crc\" DevicePath \"\"" Jan 12 13:24:23 crc kubenswrapper[4580]: I0112 13:24:23.404748 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d8f9e6c7-d6cd-496f-b009-6bb336d25ebe","Type":"ContainerStarted","Data":"9ba15ba313d2013321643de2479698532053c0545599e760eb6f5873d62aeae3"} Jan 12 13:24:23 crc kubenswrapper[4580]: I0112 13:24:23.406978 4580 generic.go:334] "Generic (PLEG): container finished" podID="cb53c318-7cae-4f3e-8940-bb9760f21707" containerID="13d9780c5ff88578fe9ac3dba0e3f71e80b43bb6d80b8f196e6ec90a77ca1ae3" exitCode=0 Jan 12 13:24:23 crc kubenswrapper[4580]: I0112 13:24:23.407137 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-647df7b8c5-t89b4" Jan 12 13:24:23 crc kubenswrapper[4580]: I0112 13:24:23.407136 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647df7b8c5-t89b4" event={"ID":"cb53c318-7cae-4f3e-8940-bb9760f21707","Type":"ContainerDied","Data":"13d9780c5ff88578fe9ac3dba0e3f71e80b43bb6d80b8f196e6ec90a77ca1ae3"} Jan 12 13:24:23 crc kubenswrapper[4580]: I0112 13:24:23.407359 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647df7b8c5-t89b4" event={"ID":"cb53c318-7cae-4f3e-8940-bb9760f21707","Type":"ContainerDied","Data":"23ebba2695cd6b4e674655ad60159af8c9f9afbf81be24a8b1116e131572d490"} Jan 12 13:24:23 crc kubenswrapper[4580]: I0112 13:24:23.407401 4580 scope.go:117] "RemoveContainer" containerID="13d9780c5ff88578fe9ac3dba0e3f71e80b43bb6d80b8f196e6ec90a77ca1ae3" Jan 12 13:24:23 crc kubenswrapper[4580]: I0112 13:24:23.434335 4580 scope.go:117] "RemoveContainer" containerID="f856c9c76a57e4428f0051665eea0737458fca3ab65e0c086a76fbd80168bda7" Jan 12 13:24:23 crc kubenswrapper[4580]: I0112 13:24:23.449750 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-647df7b8c5-t89b4"] Jan 12 13:24:23 crc kubenswrapper[4580]: I0112 13:24:23.456546 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-647df7b8c5-t89b4"] Jan 12 13:24:23 crc kubenswrapper[4580]: I0112 13:24:23.460664 4580 scope.go:117] "RemoveContainer" containerID="13d9780c5ff88578fe9ac3dba0e3f71e80b43bb6d80b8f196e6ec90a77ca1ae3" Jan 12 13:24:23 crc kubenswrapper[4580]: E0112 13:24:23.461078 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13d9780c5ff88578fe9ac3dba0e3f71e80b43bb6d80b8f196e6ec90a77ca1ae3\": container with ID starting with 13d9780c5ff88578fe9ac3dba0e3f71e80b43bb6d80b8f196e6ec90a77ca1ae3 not found: ID does not exist" 
containerID="13d9780c5ff88578fe9ac3dba0e3f71e80b43bb6d80b8f196e6ec90a77ca1ae3" Jan 12 13:24:23 crc kubenswrapper[4580]: I0112 13:24:23.461129 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13d9780c5ff88578fe9ac3dba0e3f71e80b43bb6d80b8f196e6ec90a77ca1ae3"} err="failed to get container status \"13d9780c5ff88578fe9ac3dba0e3f71e80b43bb6d80b8f196e6ec90a77ca1ae3\": rpc error: code = NotFound desc = could not find container \"13d9780c5ff88578fe9ac3dba0e3f71e80b43bb6d80b8f196e6ec90a77ca1ae3\": container with ID starting with 13d9780c5ff88578fe9ac3dba0e3f71e80b43bb6d80b8f196e6ec90a77ca1ae3 not found: ID does not exist" Jan 12 13:24:23 crc kubenswrapper[4580]: I0112 13:24:23.461171 4580 scope.go:117] "RemoveContainer" containerID="f856c9c76a57e4428f0051665eea0737458fca3ab65e0c086a76fbd80168bda7" Jan 12 13:24:23 crc kubenswrapper[4580]: E0112 13:24:23.461523 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f856c9c76a57e4428f0051665eea0737458fca3ab65e0c086a76fbd80168bda7\": container with ID starting with f856c9c76a57e4428f0051665eea0737458fca3ab65e0c086a76fbd80168bda7 not found: ID does not exist" containerID="f856c9c76a57e4428f0051665eea0737458fca3ab65e0c086a76fbd80168bda7" Jan 12 13:24:23 crc kubenswrapper[4580]: I0112 13:24:23.461577 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f856c9c76a57e4428f0051665eea0737458fca3ab65e0c086a76fbd80168bda7"} err="failed to get container status \"f856c9c76a57e4428f0051665eea0737458fca3ab65e0c086a76fbd80168bda7\": rpc error: code = NotFound desc = could not find container \"f856c9c76a57e4428f0051665eea0737458fca3ab65e0c086a76fbd80168bda7\": container with ID starting with f856c9c76a57e4428f0051665eea0737458fca3ab65e0c086a76fbd80168bda7 not found: ID does not exist" Jan 12 13:24:24 crc kubenswrapper[4580]: I0112 13:24:24.418552 4580 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d8f9e6c7-d6cd-496f-b009-6bb336d25ebe","Type":"ContainerStarted","Data":"5a6f233407bc36aa1d6c47e1c70c8eaf08f22d963ce7a1826972e17e9bbee9de"} Jan 12 13:24:24 crc kubenswrapper[4580]: I0112 13:24:24.420312 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 12 13:24:24 crc kubenswrapper[4580]: I0112 13:24:24.438217 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.5567807839999999 podStartE2EDuration="5.438203038s" podCreationTimestamp="2026-01-12 13:24:19 +0000 UTC" firstStartedPulling="2026-01-12 13:24:20.167526447 +0000 UTC m=+1059.211745138" lastFinishedPulling="2026-01-12 13:24:24.048948702 +0000 UTC m=+1063.093167392" observedRunningTime="2026-01-12 13:24:24.437045662 +0000 UTC m=+1063.481264352" watchObservedRunningTime="2026-01-12 13:24:24.438203038 +0000 UTC m=+1063.482421728" Jan 12 13:24:25 crc kubenswrapper[4580]: I0112 13:24:25.340667 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb53c318-7cae-4f3e-8940-bb9760f21707" path="/var/lib/kubelet/pods/cb53c318-7cae-4f3e-8940-bb9760f21707/volumes" Jan 12 13:24:26 crc kubenswrapper[4580]: I0112 13:24:26.439199 4580 generic.go:334] "Generic (PLEG): container finished" podID="f59d6742-fed6-4732-9cde-29dc74e47db0" containerID="8f7295b81ba9cf2ace0c7d22ce6150d4bf072b15a2eeaff67c2fa09828517fcc" exitCode=0 Jan 12 13:24:26 crc kubenswrapper[4580]: I0112 13:24:26.439373 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-l68ww" event={"ID":"f59d6742-fed6-4732-9cde-29dc74e47db0","Type":"ContainerDied","Data":"8f7295b81ba9cf2ace0c7d22ce6150d4bf072b15a2eeaff67c2fa09828517fcc"} Jan 12 13:24:27 crc kubenswrapper[4580]: I0112 13:24:27.762159 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-l68ww" Jan 12 13:24:27 crc kubenswrapper[4580]: I0112 13:24:27.879852 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvqsn\" (UniqueName: \"kubernetes.io/projected/f59d6742-fed6-4732-9cde-29dc74e47db0-kube-api-access-lvqsn\") pod \"f59d6742-fed6-4732-9cde-29dc74e47db0\" (UID: \"f59d6742-fed6-4732-9cde-29dc74e47db0\") " Jan 12 13:24:27 crc kubenswrapper[4580]: I0112 13:24:27.879990 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f59d6742-fed6-4732-9cde-29dc74e47db0-config-data\") pod \"f59d6742-fed6-4732-9cde-29dc74e47db0\" (UID: \"f59d6742-fed6-4732-9cde-29dc74e47db0\") " Jan 12 13:24:27 crc kubenswrapper[4580]: I0112 13:24:27.880202 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f59d6742-fed6-4732-9cde-29dc74e47db0-scripts\") pod \"f59d6742-fed6-4732-9cde-29dc74e47db0\" (UID: \"f59d6742-fed6-4732-9cde-29dc74e47db0\") " Jan 12 13:24:27 crc kubenswrapper[4580]: I0112 13:24:27.881013 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f59d6742-fed6-4732-9cde-29dc74e47db0-combined-ca-bundle\") pod \"f59d6742-fed6-4732-9cde-29dc74e47db0\" (UID: \"f59d6742-fed6-4732-9cde-29dc74e47db0\") " Jan 12 13:24:27 crc kubenswrapper[4580]: I0112 13:24:27.886461 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f59d6742-fed6-4732-9cde-29dc74e47db0-kube-api-access-lvqsn" (OuterVolumeSpecName: "kube-api-access-lvqsn") pod "f59d6742-fed6-4732-9cde-29dc74e47db0" (UID: "f59d6742-fed6-4732-9cde-29dc74e47db0"). InnerVolumeSpecName "kube-api-access-lvqsn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:24:27 crc kubenswrapper[4580]: I0112 13:24:27.886893 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f59d6742-fed6-4732-9cde-29dc74e47db0-scripts" (OuterVolumeSpecName: "scripts") pod "f59d6742-fed6-4732-9cde-29dc74e47db0" (UID: "f59d6742-fed6-4732-9cde-29dc74e47db0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:24:27 crc kubenswrapper[4580]: I0112 13:24:27.906431 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f59d6742-fed6-4732-9cde-29dc74e47db0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f59d6742-fed6-4732-9cde-29dc74e47db0" (UID: "f59d6742-fed6-4732-9cde-29dc74e47db0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:24:27 crc kubenswrapper[4580]: I0112 13:24:27.907013 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f59d6742-fed6-4732-9cde-29dc74e47db0-config-data" (OuterVolumeSpecName: "config-data") pod "f59d6742-fed6-4732-9cde-29dc74e47db0" (UID: "f59d6742-fed6-4732-9cde-29dc74e47db0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:24:27 crc kubenswrapper[4580]: I0112 13:24:27.983169 4580 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f59d6742-fed6-4732-9cde-29dc74e47db0-scripts\") on node \"crc\" DevicePath \"\"" Jan 12 13:24:27 crc kubenswrapper[4580]: I0112 13:24:27.983213 4580 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f59d6742-fed6-4732-9cde-29dc74e47db0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 12 13:24:27 crc kubenswrapper[4580]: I0112 13:24:27.983227 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvqsn\" (UniqueName: \"kubernetes.io/projected/f59d6742-fed6-4732-9cde-29dc74e47db0-kube-api-access-lvqsn\") on node \"crc\" DevicePath \"\"" Jan 12 13:24:27 crc kubenswrapper[4580]: I0112 13:24:27.983235 4580 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f59d6742-fed6-4732-9cde-29dc74e47db0-config-data\") on node \"crc\" DevicePath \"\"" Jan 12 13:24:28 crc kubenswrapper[4580]: I0112 13:24:28.461336 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-l68ww" event={"ID":"f59d6742-fed6-4732-9cde-29dc74e47db0","Type":"ContainerDied","Data":"1d0efc039ef6082ba90cf147257334cc227655423ab3bde644c7315ae503c0d9"} Jan 12 13:24:28 crc kubenswrapper[4580]: I0112 13:24:28.461381 4580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d0efc039ef6082ba90cf147257334cc227655423ab3bde644c7315ae503c0d9" Jan 12 13:24:28 crc kubenswrapper[4580]: I0112 13:24:28.461457 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-l68ww" Jan 12 13:24:28 crc kubenswrapper[4580]: I0112 13:24:28.633374 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 12 13:24:28 crc kubenswrapper[4580]: I0112 13:24:28.633658 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3485254f-60d1-42d1-90c5-655467ee5378" containerName="nova-api-log" containerID="cri-o://cf7bcace12f614e5ee0b6f77f22a9d5ad49524befff936efc45141f9bfca421e" gracePeriod=30 Jan 12 13:24:28 crc kubenswrapper[4580]: I0112 13:24:28.633830 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3485254f-60d1-42d1-90c5-655467ee5378" containerName="nova-api-api" containerID="cri-o://44a9ba1cb937eea66f9aa499ae831de6421b4c497ee27b517d087d6b74096ff3" gracePeriod=30 Jan 12 13:24:28 crc kubenswrapper[4580]: I0112 13:24:28.642463 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 12 13:24:28 crc kubenswrapper[4580]: I0112 13:24:28.642689 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="93fb417b-6d30-4426-9360-2623f77e99fb" containerName="nova-scheduler-scheduler" containerID="cri-o://d2cde6fe998087e525d855ebf4c978e3b2742bd7ac2d825c5ca1d7608bcb7649" gracePeriod=30 Jan 12 13:24:28 crc kubenswrapper[4580]: I0112 13:24:28.653449 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 12 13:24:28 crc kubenswrapper[4580]: I0112 13:24:28.653778 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="8937c039-ef58-4e1c-ac6e-719494fe812a" containerName="nova-metadata-metadata" containerID="cri-o://258d8c7ef3c13567f43c0b5a656985856a0e8a5464d2008520117f105296fbe2" gracePeriod=30 Jan 12 13:24:28 crc kubenswrapper[4580]: I0112 13:24:28.653696 4580 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="8937c039-ef58-4e1c-ac6e-719494fe812a" containerName="nova-metadata-log" containerID="cri-o://adcb15e39b50b2f4d811be499fd5edbeffc425e880997968576b35ea0b6b9807" gracePeriod=30 Jan 12 13:24:29 crc kubenswrapper[4580]: I0112 13:24:29.162321 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 12 13:24:29 crc kubenswrapper[4580]: I0112 13:24:29.226930 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3485254f-60d1-42d1-90c5-655467ee5378-combined-ca-bundle\") pod \"3485254f-60d1-42d1-90c5-655467ee5378\" (UID: \"3485254f-60d1-42d1-90c5-655467ee5378\") " Jan 12 13:24:29 crc kubenswrapper[4580]: I0112 13:24:29.227155 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3485254f-60d1-42d1-90c5-655467ee5378-internal-tls-certs\") pod \"3485254f-60d1-42d1-90c5-655467ee5378\" (UID: \"3485254f-60d1-42d1-90c5-655467ee5378\") " Jan 12 13:24:29 crc kubenswrapper[4580]: I0112 13:24:29.227224 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3485254f-60d1-42d1-90c5-655467ee5378-public-tls-certs\") pod \"3485254f-60d1-42d1-90c5-655467ee5378\" (UID: \"3485254f-60d1-42d1-90c5-655467ee5378\") " Jan 12 13:24:29 crc kubenswrapper[4580]: I0112 13:24:29.227284 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3485254f-60d1-42d1-90c5-655467ee5378-config-data\") pod \"3485254f-60d1-42d1-90c5-655467ee5378\" (UID: \"3485254f-60d1-42d1-90c5-655467ee5378\") " Jan 12 13:24:29 crc kubenswrapper[4580]: I0112 13:24:29.227361 4580 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3485254f-60d1-42d1-90c5-655467ee5378-logs\") pod \"3485254f-60d1-42d1-90c5-655467ee5378\" (UID: \"3485254f-60d1-42d1-90c5-655467ee5378\") " Jan 12 13:24:29 crc kubenswrapper[4580]: I0112 13:24:29.227415 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcf2b\" (UniqueName: \"kubernetes.io/projected/3485254f-60d1-42d1-90c5-655467ee5378-kube-api-access-vcf2b\") pod \"3485254f-60d1-42d1-90c5-655467ee5378\" (UID: \"3485254f-60d1-42d1-90c5-655467ee5378\") " Jan 12 13:24:29 crc kubenswrapper[4580]: I0112 13:24:29.230838 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3485254f-60d1-42d1-90c5-655467ee5378-logs" (OuterVolumeSpecName: "logs") pod "3485254f-60d1-42d1-90c5-655467ee5378" (UID: "3485254f-60d1-42d1-90c5-655467ee5378"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 12 13:24:29 crc kubenswrapper[4580]: I0112 13:24:29.231515 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3485254f-60d1-42d1-90c5-655467ee5378-kube-api-access-vcf2b" (OuterVolumeSpecName: "kube-api-access-vcf2b") pod "3485254f-60d1-42d1-90c5-655467ee5378" (UID: "3485254f-60d1-42d1-90c5-655467ee5378"). InnerVolumeSpecName "kube-api-access-vcf2b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:24:29 crc kubenswrapper[4580]: I0112 13:24:29.250193 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3485254f-60d1-42d1-90c5-655467ee5378-config-data" (OuterVolumeSpecName: "config-data") pod "3485254f-60d1-42d1-90c5-655467ee5378" (UID: "3485254f-60d1-42d1-90c5-655467ee5378"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:24:29 crc kubenswrapper[4580]: I0112 13:24:29.251684 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3485254f-60d1-42d1-90c5-655467ee5378-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3485254f-60d1-42d1-90c5-655467ee5378" (UID: "3485254f-60d1-42d1-90c5-655467ee5378"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:24:29 crc kubenswrapper[4580]: I0112 13:24:29.267351 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3485254f-60d1-42d1-90c5-655467ee5378-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3485254f-60d1-42d1-90c5-655467ee5378" (UID: "3485254f-60d1-42d1-90c5-655467ee5378"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:24:29 crc kubenswrapper[4580]: I0112 13:24:29.273489 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3485254f-60d1-42d1-90c5-655467ee5378-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3485254f-60d1-42d1-90c5-655467ee5378" (UID: "3485254f-60d1-42d1-90c5-655467ee5378"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:24:29 crc kubenswrapper[4580]: I0112 13:24:29.329011 4580 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3485254f-60d1-42d1-90c5-655467ee5378-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 12 13:24:29 crc kubenswrapper[4580]: I0112 13:24:29.329050 4580 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3485254f-60d1-42d1-90c5-655467ee5378-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 12 13:24:29 crc kubenswrapper[4580]: I0112 13:24:29.329060 4580 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3485254f-60d1-42d1-90c5-655467ee5378-config-data\") on node \"crc\" DevicePath \"\"" Jan 12 13:24:29 crc kubenswrapper[4580]: I0112 13:24:29.329068 4580 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3485254f-60d1-42d1-90c5-655467ee5378-logs\") on node \"crc\" DevicePath \"\"" Jan 12 13:24:29 crc kubenswrapper[4580]: I0112 13:24:29.329077 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcf2b\" (UniqueName: \"kubernetes.io/projected/3485254f-60d1-42d1-90c5-655467ee5378-kube-api-access-vcf2b\") on node \"crc\" DevicePath \"\"" Jan 12 13:24:29 crc kubenswrapper[4580]: I0112 13:24:29.329087 4580 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3485254f-60d1-42d1-90c5-655467ee5378-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 12 13:24:29 crc kubenswrapper[4580]: I0112 13:24:29.471638 4580 generic.go:334] "Generic (PLEG): container finished" podID="3485254f-60d1-42d1-90c5-655467ee5378" containerID="44a9ba1cb937eea66f9aa499ae831de6421b4c497ee27b517d087d6b74096ff3" exitCode=0 Jan 12 13:24:29 crc kubenswrapper[4580]: I0112 13:24:29.471672 4580 generic.go:334] 
"Generic (PLEG): container finished" podID="3485254f-60d1-42d1-90c5-655467ee5378" containerID="cf7bcace12f614e5ee0b6f77f22a9d5ad49524befff936efc45141f9bfca421e" exitCode=143 Jan 12 13:24:29 crc kubenswrapper[4580]: I0112 13:24:29.471704 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 12 13:24:29 crc kubenswrapper[4580]: I0112 13:24:29.471721 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3485254f-60d1-42d1-90c5-655467ee5378","Type":"ContainerDied","Data":"44a9ba1cb937eea66f9aa499ae831de6421b4c497ee27b517d087d6b74096ff3"} Jan 12 13:24:29 crc kubenswrapper[4580]: I0112 13:24:29.471750 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3485254f-60d1-42d1-90c5-655467ee5378","Type":"ContainerDied","Data":"cf7bcace12f614e5ee0b6f77f22a9d5ad49524befff936efc45141f9bfca421e"} Jan 12 13:24:29 crc kubenswrapper[4580]: I0112 13:24:29.471760 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3485254f-60d1-42d1-90c5-655467ee5378","Type":"ContainerDied","Data":"c0234d64e79b21f0a0c1a635c8183e6cc22b921855cbb6d56991332f401f4c1c"} Jan 12 13:24:29 crc kubenswrapper[4580]: I0112 13:24:29.471777 4580 scope.go:117] "RemoveContainer" containerID="44a9ba1cb937eea66f9aa499ae831de6421b4c497ee27b517d087d6b74096ff3" Jan 12 13:24:29 crc kubenswrapper[4580]: I0112 13:24:29.474355 4580 generic.go:334] "Generic (PLEG): container finished" podID="8937c039-ef58-4e1c-ac6e-719494fe812a" containerID="adcb15e39b50b2f4d811be499fd5edbeffc425e880997968576b35ea0b6b9807" exitCode=143 Jan 12 13:24:29 crc kubenswrapper[4580]: I0112 13:24:29.474373 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8937c039-ef58-4e1c-ac6e-719494fe812a","Type":"ContainerDied","Data":"adcb15e39b50b2f4d811be499fd5edbeffc425e880997968576b35ea0b6b9807"} Jan 12 13:24:29 crc 
kubenswrapper[4580]: I0112 13:24:29.493042 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 12 13:24:29 crc kubenswrapper[4580]: I0112 13:24:29.505957 4580 scope.go:117] "RemoveContainer" containerID="cf7bcace12f614e5ee0b6f77f22a9d5ad49524befff936efc45141f9bfca421e" Jan 12 13:24:29 crc kubenswrapper[4580]: I0112 13:24:29.514198 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 12 13:24:29 crc kubenswrapper[4580]: I0112 13:24:29.522983 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 12 13:24:29 crc kubenswrapper[4580]: E0112 13:24:29.523500 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb53c318-7cae-4f3e-8940-bb9760f21707" containerName="init" Jan 12 13:24:29 crc kubenswrapper[4580]: I0112 13:24:29.523522 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb53c318-7cae-4f3e-8940-bb9760f21707" containerName="init" Jan 12 13:24:29 crc kubenswrapper[4580]: E0112 13:24:29.523533 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb53c318-7cae-4f3e-8940-bb9760f21707" containerName="dnsmasq-dns" Jan 12 13:24:29 crc kubenswrapper[4580]: I0112 13:24:29.523541 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb53c318-7cae-4f3e-8940-bb9760f21707" containerName="dnsmasq-dns" Jan 12 13:24:29 crc kubenswrapper[4580]: E0112 13:24:29.523566 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3485254f-60d1-42d1-90c5-655467ee5378" containerName="nova-api-log" Jan 12 13:24:29 crc kubenswrapper[4580]: I0112 13:24:29.523572 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="3485254f-60d1-42d1-90c5-655467ee5378" containerName="nova-api-log" Jan 12 13:24:29 crc kubenswrapper[4580]: E0112 13:24:29.523584 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f59d6742-fed6-4732-9cde-29dc74e47db0" containerName="nova-manage" Jan 12 13:24:29 crc kubenswrapper[4580]: I0112 
13:24:29.523590 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="f59d6742-fed6-4732-9cde-29dc74e47db0" containerName="nova-manage" Jan 12 13:24:29 crc kubenswrapper[4580]: E0112 13:24:29.523601 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3485254f-60d1-42d1-90c5-655467ee5378" containerName="nova-api-api" Jan 12 13:24:29 crc kubenswrapper[4580]: I0112 13:24:29.523609 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="3485254f-60d1-42d1-90c5-655467ee5378" containerName="nova-api-api" Jan 12 13:24:29 crc kubenswrapper[4580]: I0112 13:24:29.523780 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="f59d6742-fed6-4732-9cde-29dc74e47db0" containerName="nova-manage" Jan 12 13:24:29 crc kubenswrapper[4580]: I0112 13:24:29.523790 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="3485254f-60d1-42d1-90c5-655467ee5378" containerName="nova-api-log" Jan 12 13:24:29 crc kubenswrapper[4580]: I0112 13:24:29.523799 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="3485254f-60d1-42d1-90c5-655467ee5378" containerName="nova-api-api" Jan 12 13:24:29 crc kubenswrapper[4580]: I0112 13:24:29.523808 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb53c318-7cae-4f3e-8940-bb9760f21707" containerName="dnsmasq-dns" Jan 12 13:24:29 crc kubenswrapper[4580]: I0112 13:24:29.524711 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 12 13:24:29 crc kubenswrapper[4580]: I0112 13:24:29.531379 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 12 13:24:29 crc kubenswrapper[4580]: I0112 13:24:29.542223 4580 scope.go:117] "RemoveContainer" containerID="44a9ba1cb937eea66f9aa499ae831de6421b4c497ee27b517d087d6b74096ff3" Jan 12 13:24:29 crc kubenswrapper[4580]: I0112 13:24:29.542581 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 12 13:24:29 crc kubenswrapper[4580]: I0112 13:24:29.543044 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 12 13:24:29 crc kubenswrapper[4580]: I0112 13:24:29.543527 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 12 13:24:29 crc kubenswrapper[4580]: E0112 13:24:29.546431 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44a9ba1cb937eea66f9aa499ae831de6421b4c497ee27b517d087d6b74096ff3\": container with ID starting with 44a9ba1cb937eea66f9aa499ae831de6421b4c497ee27b517d087d6b74096ff3 not found: ID does not exist" containerID="44a9ba1cb937eea66f9aa499ae831de6421b4c497ee27b517d087d6b74096ff3" Jan 12 13:24:29 crc kubenswrapper[4580]: I0112 13:24:29.546487 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44a9ba1cb937eea66f9aa499ae831de6421b4c497ee27b517d087d6b74096ff3"} err="failed to get container status \"44a9ba1cb937eea66f9aa499ae831de6421b4c497ee27b517d087d6b74096ff3\": rpc error: code = NotFound desc = could not find container \"44a9ba1cb937eea66f9aa499ae831de6421b4c497ee27b517d087d6b74096ff3\": container with ID starting with 44a9ba1cb937eea66f9aa499ae831de6421b4c497ee27b517d087d6b74096ff3 not found: ID does not exist" Jan 12 13:24:29 crc kubenswrapper[4580]: I0112 13:24:29.546519 4580 
scope.go:117] "RemoveContainer" containerID="cf7bcace12f614e5ee0b6f77f22a9d5ad49524befff936efc45141f9bfca421e" Jan 12 13:24:29 crc kubenswrapper[4580]: E0112 13:24:29.546823 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf7bcace12f614e5ee0b6f77f22a9d5ad49524befff936efc45141f9bfca421e\": container with ID starting with cf7bcace12f614e5ee0b6f77f22a9d5ad49524befff936efc45141f9bfca421e not found: ID does not exist" containerID="cf7bcace12f614e5ee0b6f77f22a9d5ad49524befff936efc45141f9bfca421e" Jan 12 13:24:29 crc kubenswrapper[4580]: I0112 13:24:29.546855 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf7bcace12f614e5ee0b6f77f22a9d5ad49524befff936efc45141f9bfca421e"} err="failed to get container status \"cf7bcace12f614e5ee0b6f77f22a9d5ad49524befff936efc45141f9bfca421e\": rpc error: code = NotFound desc = could not find container \"cf7bcace12f614e5ee0b6f77f22a9d5ad49524befff936efc45141f9bfca421e\": container with ID starting with cf7bcace12f614e5ee0b6f77f22a9d5ad49524befff936efc45141f9bfca421e not found: ID does not exist" Jan 12 13:24:29 crc kubenswrapper[4580]: I0112 13:24:29.546878 4580 scope.go:117] "RemoveContainer" containerID="44a9ba1cb937eea66f9aa499ae831de6421b4c497ee27b517d087d6b74096ff3" Jan 12 13:24:29 crc kubenswrapper[4580]: I0112 13:24:29.547133 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44a9ba1cb937eea66f9aa499ae831de6421b4c497ee27b517d087d6b74096ff3"} err="failed to get container status \"44a9ba1cb937eea66f9aa499ae831de6421b4c497ee27b517d087d6b74096ff3\": rpc error: code = NotFound desc = could not find container \"44a9ba1cb937eea66f9aa499ae831de6421b4c497ee27b517d087d6b74096ff3\": container with ID starting with 44a9ba1cb937eea66f9aa499ae831de6421b4c497ee27b517d087d6b74096ff3 not found: ID does not exist" Jan 12 13:24:29 crc kubenswrapper[4580]: I0112 
13:24:29.547151 4580 scope.go:117] "RemoveContainer" containerID="cf7bcace12f614e5ee0b6f77f22a9d5ad49524befff936efc45141f9bfca421e" Jan 12 13:24:29 crc kubenswrapper[4580]: I0112 13:24:29.547335 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf7bcace12f614e5ee0b6f77f22a9d5ad49524befff936efc45141f9bfca421e"} err="failed to get container status \"cf7bcace12f614e5ee0b6f77f22a9d5ad49524befff936efc45141f9bfca421e\": rpc error: code = NotFound desc = could not find container \"cf7bcace12f614e5ee0b6f77f22a9d5ad49524befff936efc45141f9bfca421e\": container with ID starting with cf7bcace12f614e5ee0b6f77f22a9d5ad49524befff936efc45141f9bfca421e not found: ID does not exist" Jan 12 13:24:29 crc kubenswrapper[4580]: I0112 13:24:29.635341 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftw9j\" (UniqueName: \"kubernetes.io/projected/af33dae1-afd6-4b08-a507-64373650c025-kube-api-access-ftw9j\") pod \"nova-api-0\" (UID: \"af33dae1-afd6-4b08-a507-64373650c025\") " pod="openstack/nova-api-0" Jan 12 13:24:29 crc kubenswrapper[4580]: I0112 13:24:29.635407 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/af33dae1-afd6-4b08-a507-64373650c025-internal-tls-certs\") pod \"nova-api-0\" (UID: \"af33dae1-afd6-4b08-a507-64373650c025\") " pod="openstack/nova-api-0" Jan 12 13:24:29 crc kubenswrapper[4580]: I0112 13:24:29.635473 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af33dae1-afd6-4b08-a507-64373650c025-logs\") pod \"nova-api-0\" (UID: \"af33dae1-afd6-4b08-a507-64373650c025\") " pod="openstack/nova-api-0" Jan 12 13:24:29 crc kubenswrapper[4580]: I0112 13:24:29.635568 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af33dae1-afd6-4b08-a507-64373650c025-config-data\") pod \"nova-api-0\" (UID: \"af33dae1-afd6-4b08-a507-64373650c025\") " pod="openstack/nova-api-0" Jan 12 13:24:29 crc kubenswrapper[4580]: I0112 13:24:29.635668 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af33dae1-afd6-4b08-a507-64373650c025-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"af33dae1-afd6-4b08-a507-64373650c025\") " pod="openstack/nova-api-0" Jan 12 13:24:29 crc kubenswrapper[4580]: I0112 13:24:29.635724 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/af33dae1-afd6-4b08-a507-64373650c025-public-tls-certs\") pod \"nova-api-0\" (UID: \"af33dae1-afd6-4b08-a507-64373650c025\") " pod="openstack/nova-api-0" Jan 12 13:24:29 crc kubenswrapper[4580]: I0112 13:24:29.738194 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af33dae1-afd6-4b08-a507-64373650c025-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"af33dae1-afd6-4b08-a507-64373650c025\") " pod="openstack/nova-api-0" Jan 12 13:24:29 crc kubenswrapper[4580]: I0112 13:24:29.738303 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/af33dae1-afd6-4b08-a507-64373650c025-public-tls-certs\") pod \"nova-api-0\" (UID: \"af33dae1-afd6-4b08-a507-64373650c025\") " pod="openstack/nova-api-0" Jan 12 13:24:29 crc kubenswrapper[4580]: I0112 13:24:29.739275 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftw9j\" (UniqueName: \"kubernetes.io/projected/af33dae1-afd6-4b08-a507-64373650c025-kube-api-access-ftw9j\") pod \"nova-api-0\" (UID: 
\"af33dae1-afd6-4b08-a507-64373650c025\") " pod="openstack/nova-api-0" Jan 12 13:24:29 crc kubenswrapper[4580]: I0112 13:24:29.739381 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/af33dae1-afd6-4b08-a507-64373650c025-internal-tls-certs\") pod \"nova-api-0\" (UID: \"af33dae1-afd6-4b08-a507-64373650c025\") " pod="openstack/nova-api-0" Jan 12 13:24:29 crc kubenswrapper[4580]: I0112 13:24:29.739464 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af33dae1-afd6-4b08-a507-64373650c025-logs\") pod \"nova-api-0\" (UID: \"af33dae1-afd6-4b08-a507-64373650c025\") " pod="openstack/nova-api-0" Jan 12 13:24:29 crc kubenswrapper[4580]: I0112 13:24:29.739511 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af33dae1-afd6-4b08-a507-64373650c025-config-data\") pod \"nova-api-0\" (UID: \"af33dae1-afd6-4b08-a507-64373650c025\") " pod="openstack/nova-api-0" Jan 12 13:24:29 crc kubenswrapper[4580]: I0112 13:24:29.739963 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af33dae1-afd6-4b08-a507-64373650c025-logs\") pod \"nova-api-0\" (UID: \"af33dae1-afd6-4b08-a507-64373650c025\") " pod="openstack/nova-api-0" Jan 12 13:24:29 crc kubenswrapper[4580]: I0112 13:24:29.745298 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af33dae1-afd6-4b08-a507-64373650c025-config-data\") pod \"nova-api-0\" (UID: \"af33dae1-afd6-4b08-a507-64373650c025\") " pod="openstack/nova-api-0" Jan 12 13:24:29 crc kubenswrapper[4580]: I0112 13:24:29.745719 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/af33dae1-afd6-4b08-a507-64373650c025-internal-tls-certs\") pod \"nova-api-0\" (UID: \"af33dae1-afd6-4b08-a507-64373650c025\") " pod="openstack/nova-api-0" Jan 12 13:24:29 crc kubenswrapper[4580]: I0112 13:24:29.745868 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/af33dae1-afd6-4b08-a507-64373650c025-public-tls-certs\") pod \"nova-api-0\" (UID: \"af33dae1-afd6-4b08-a507-64373650c025\") " pod="openstack/nova-api-0" Jan 12 13:24:29 crc kubenswrapper[4580]: I0112 13:24:29.746050 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af33dae1-afd6-4b08-a507-64373650c025-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"af33dae1-afd6-4b08-a507-64373650c025\") " pod="openstack/nova-api-0" Jan 12 13:24:29 crc kubenswrapper[4580]: I0112 13:24:29.757826 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftw9j\" (UniqueName: \"kubernetes.io/projected/af33dae1-afd6-4b08-a507-64373650c025-kube-api-access-ftw9j\") pod \"nova-api-0\" (UID: \"af33dae1-afd6-4b08-a507-64373650c025\") " pod="openstack/nova-api-0" Jan 12 13:24:29 crc kubenswrapper[4580]: I0112 13:24:29.860617 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 12 13:24:30 crc kubenswrapper[4580]: I0112 13:24:30.197830 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 12 13:24:30 crc kubenswrapper[4580]: I0112 13:24:30.247526 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93fb417b-6d30-4426-9360-2623f77e99fb-config-data\") pod \"93fb417b-6d30-4426-9360-2623f77e99fb\" (UID: \"93fb417b-6d30-4426-9360-2623f77e99fb\") " Jan 12 13:24:30 crc kubenswrapper[4580]: I0112 13:24:30.247832 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dk9ln\" (UniqueName: \"kubernetes.io/projected/93fb417b-6d30-4426-9360-2623f77e99fb-kube-api-access-dk9ln\") pod \"93fb417b-6d30-4426-9360-2623f77e99fb\" (UID: \"93fb417b-6d30-4426-9360-2623f77e99fb\") " Jan 12 13:24:30 crc kubenswrapper[4580]: I0112 13:24:30.247906 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93fb417b-6d30-4426-9360-2623f77e99fb-combined-ca-bundle\") pod \"93fb417b-6d30-4426-9360-2623f77e99fb\" (UID: \"93fb417b-6d30-4426-9360-2623f77e99fb\") " Jan 12 13:24:30 crc kubenswrapper[4580]: I0112 13:24:30.251513 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93fb417b-6d30-4426-9360-2623f77e99fb-kube-api-access-dk9ln" (OuterVolumeSpecName: "kube-api-access-dk9ln") pod "93fb417b-6d30-4426-9360-2623f77e99fb" (UID: "93fb417b-6d30-4426-9360-2623f77e99fb"). InnerVolumeSpecName "kube-api-access-dk9ln". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:24:30 crc kubenswrapper[4580]: I0112 13:24:30.269081 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93fb417b-6d30-4426-9360-2623f77e99fb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "93fb417b-6d30-4426-9360-2623f77e99fb" (UID: "93fb417b-6d30-4426-9360-2623f77e99fb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:24:30 crc kubenswrapper[4580]: I0112 13:24:30.270425 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93fb417b-6d30-4426-9360-2623f77e99fb-config-data" (OuterVolumeSpecName: "config-data") pod "93fb417b-6d30-4426-9360-2623f77e99fb" (UID: "93fb417b-6d30-4426-9360-2623f77e99fb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:24:30 crc kubenswrapper[4580]: I0112 13:24:30.340421 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 12 13:24:30 crc kubenswrapper[4580]: W0112 13:24:30.342628 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaf33dae1_afd6_4b08_a507_64373650c025.slice/crio-108e28537b46560a0906606f5afc10c4968b517cf1b45302b72496e578a6f17f WatchSource:0}: Error finding container 108e28537b46560a0906606f5afc10c4968b517cf1b45302b72496e578a6f17f: Status 404 returned error can't find the container with id 108e28537b46560a0906606f5afc10c4968b517cf1b45302b72496e578a6f17f Jan 12 13:24:30 crc kubenswrapper[4580]: I0112 13:24:30.351898 4580 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93fb417b-6d30-4426-9360-2623f77e99fb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 12 13:24:30 crc kubenswrapper[4580]: I0112 13:24:30.351926 4580 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93fb417b-6d30-4426-9360-2623f77e99fb-config-data\") on node \"crc\" DevicePath \"\"" Jan 12 13:24:30 crc kubenswrapper[4580]: I0112 13:24:30.351941 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dk9ln\" (UniqueName: \"kubernetes.io/projected/93fb417b-6d30-4426-9360-2623f77e99fb-kube-api-access-dk9ln\") on node \"crc\" DevicePath \"\"" Jan 12 13:24:30 
crc kubenswrapper[4580]: I0112 13:24:30.490626 4580 generic.go:334] "Generic (PLEG): container finished" podID="93fb417b-6d30-4426-9360-2623f77e99fb" containerID="d2cde6fe998087e525d855ebf4c978e3b2742bd7ac2d825c5ca1d7608bcb7649" exitCode=0 Jan 12 13:24:30 crc kubenswrapper[4580]: I0112 13:24:30.490972 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 12 13:24:30 crc kubenswrapper[4580]: I0112 13:24:30.491265 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"93fb417b-6d30-4426-9360-2623f77e99fb","Type":"ContainerDied","Data":"d2cde6fe998087e525d855ebf4c978e3b2742bd7ac2d825c5ca1d7608bcb7649"} Jan 12 13:24:30 crc kubenswrapper[4580]: I0112 13:24:30.491325 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"93fb417b-6d30-4426-9360-2623f77e99fb","Type":"ContainerDied","Data":"a947c4f27de8596deeff6893e12b1b6405b561fb6cf9ccc7af7c6f3b62b5b3c6"} Jan 12 13:24:30 crc kubenswrapper[4580]: I0112 13:24:30.491346 4580 scope.go:117] "RemoveContainer" containerID="d2cde6fe998087e525d855ebf4c978e3b2742bd7ac2d825c5ca1d7608bcb7649" Jan 12 13:24:30 crc kubenswrapper[4580]: I0112 13:24:30.496992 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"af33dae1-afd6-4b08-a507-64373650c025","Type":"ContainerStarted","Data":"f946489ecedde4a49e54bba0f6074643ae21066e84843b2b8a83886121b518e7"} Jan 12 13:24:30 crc kubenswrapper[4580]: I0112 13:24:30.497070 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"af33dae1-afd6-4b08-a507-64373650c025","Type":"ContainerStarted","Data":"108e28537b46560a0906606f5afc10c4968b517cf1b45302b72496e578a6f17f"} Jan 12 13:24:30 crc kubenswrapper[4580]: I0112 13:24:30.550881 4580 scope.go:117] "RemoveContainer" containerID="d2cde6fe998087e525d855ebf4c978e3b2742bd7ac2d825c5ca1d7608bcb7649" Jan 12 13:24:30 crc 
kubenswrapper[4580]: I0112 13:24:30.551535 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 12 13:24:30 crc kubenswrapper[4580]: E0112 13:24:30.552452 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2cde6fe998087e525d855ebf4c978e3b2742bd7ac2d825c5ca1d7608bcb7649\": container with ID starting with d2cde6fe998087e525d855ebf4c978e3b2742bd7ac2d825c5ca1d7608bcb7649 not found: ID does not exist" containerID="d2cde6fe998087e525d855ebf4c978e3b2742bd7ac2d825c5ca1d7608bcb7649" Jan 12 13:24:30 crc kubenswrapper[4580]: I0112 13:24:30.552503 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2cde6fe998087e525d855ebf4c978e3b2742bd7ac2d825c5ca1d7608bcb7649"} err="failed to get container status \"d2cde6fe998087e525d855ebf4c978e3b2742bd7ac2d825c5ca1d7608bcb7649\": rpc error: code = NotFound desc = could not find container \"d2cde6fe998087e525d855ebf4c978e3b2742bd7ac2d825c5ca1d7608bcb7649\": container with ID starting with d2cde6fe998087e525d855ebf4c978e3b2742bd7ac2d825c5ca1d7608bcb7649 not found: ID does not exist" Jan 12 13:24:30 crc kubenswrapper[4580]: I0112 13:24:30.565048 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 12 13:24:30 crc kubenswrapper[4580]: I0112 13:24:30.572164 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 12 13:24:30 crc kubenswrapper[4580]: E0112 13:24:30.572643 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93fb417b-6d30-4426-9360-2623f77e99fb" containerName="nova-scheduler-scheduler" Jan 12 13:24:30 crc kubenswrapper[4580]: I0112 13:24:30.572662 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="93fb417b-6d30-4426-9360-2623f77e99fb" containerName="nova-scheduler-scheduler" Jan 12 13:24:30 crc kubenswrapper[4580]: I0112 13:24:30.572888 4580 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="93fb417b-6d30-4426-9360-2623f77e99fb" containerName="nova-scheduler-scheduler" Jan 12 13:24:30 crc kubenswrapper[4580]: I0112 13:24:30.573800 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 12 13:24:30 crc kubenswrapper[4580]: I0112 13:24:30.576717 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 12 13:24:30 crc kubenswrapper[4580]: I0112 13:24:30.581739 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 12 13:24:30 crc kubenswrapper[4580]: I0112 13:24:30.662892 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l66tr\" (UniqueName: \"kubernetes.io/projected/2a896fc4-1b0f-4186-a168-437fd8a099ea-kube-api-access-l66tr\") pod \"nova-scheduler-0\" (UID: \"2a896fc4-1b0f-4186-a168-437fd8a099ea\") " pod="openstack/nova-scheduler-0" Jan 12 13:24:30 crc kubenswrapper[4580]: I0112 13:24:30.663129 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a896fc4-1b0f-4186-a168-437fd8a099ea-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2a896fc4-1b0f-4186-a168-437fd8a099ea\") " pod="openstack/nova-scheduler-0" Jan 12 13:24:30 crc kubenswrapper[4580]: I0112 13:24:30.663242 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a896fc4-1b0f-4186-a168-437fd8a099ea-config-data\") pod \"nova-scheduler-0\" (UID: \"2a896fc4-1b0f-4186-a168-437fd8a099ea\") " pod="openstack/nova-scheduler-0" Jan 12 13:24:30 crc kubenswrapper[4580]: I0112 13:24:30.765471 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2a896fc4-1b0f-4186-a168-437fd8a099ea-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2a896fc4-1b0f-4186-a168-437fd8a099ea\") " pod="openstack/nova-scheduler-0" Jan 12 13:24:30 crc kubenswrapper[4580]: I0112 13:24:30.765549 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a896fc4-1b0f-4186-a168-437fd8a099ea-config-data\") pod \"nova-scheduler-0\" (UID: \"2a896fc4-1b0f-4186-a168-437fd8a099ea\") " pod="openstack/nova-scheduler-0" Jan 12 13:24:30 crc kubenswrapper[4580]: I0112 13:24:30.765718 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l66tr\" (UniqueName: \"kubernetes.io/projected/2a896fc4-1b0f-4186-a168-437fd8a099ea-kube-api-access-l66tr\") pod \"nova-scheduler-0\" (UID: \"2a896fc4-1b0f-4186-a168-437fd8a099ea\") " pod="openstack/nova-scheduler-0" Jan 12 13:24:30 crc kubenswrapper[4580]: I0112 13:24:30.769539 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a896fc4-1b0f-4186-a168-437fd8a099ea-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2a896fc4-1b0f-4186-a168-437fd8a099ea\") " pod="openstack/nova-scheduler-0" Jan 12 13:24:30 crc kubenswrapper[4580]: I0112 13:24:30.769853 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a896fc4-1b0f-4186-a168-437fd8a099ea-config-data\") pod \"nova-scheduler-0\" (UID: \"2a896fc4-1b0f-4186-a168-437fd8a099ea\") " pod="openstack/nova-scheduler-0" Jan 12 13:24:30 crc kubenswrapper[4580]: I0112 13:24:30.781001 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l66tr\" (UniqueName: \"kubernetes.io/projected/2a896fc4-1b0f-4186-a168-437fd8a099ea-kube-api-access-l66tr\") pod \"nova-scheduler-0\" (UID: \"2a896fc4-1b0f-4186-a168-437fd8a099ea\") " 
pod="openstack/nova-scheduler-0" Jan 12 13:24:30 crc kubenswrapper[4580]: I0112 13:24:30.893157 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 12 13:24:31 crc kubenswrapper[4580]: I0112 13:24:31.293009 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3485254f-60d1-42d1-90c5-655467ee5378" path="/var/lib/kubelet/pods/3485254f-60d1-42d1-90c5-655467ee5378/volumes" Jan 12 13:24:31 crc kubenswrapper[4580]: I0112 13:24:31.294331 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93fb417b-6d30-4426-9360-2623f77e99fb" path="/var/lib/kubelet/pods/93fb417b-6d30-4426-9360-2623f77e99fb/volumes" Jan 12 13:24:31 crc kubenswrapper[4580]: I0112 13:24:31.295251 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 12 13:24:31 crc kubenswrapper[4580]: W0112 13:24:31.298135 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a896fc4_1b0f_4186_a168_437fd8a099ea.slice/crio-fe9252030effb18362c0ae8e7ce9367b878513805b7d86fe3525d7dfa2ceee3e WatchSource:0}: Error finding container fe9252030effb18362c0ae8e7ce9367b878513805b7d86fe3525d7dfa2ceee3e: Status 404 returned error can't find the container with id fe9252030effb18362c0ae8e7ce9367b878513805b7d86fe3525d7dfa2ceee3e Jan 12 13:24:31 crc kubenswrapper[4580]: I0112 13:24:31.511158 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"af33dae1-afd6-4b08-a507-64373650c025","Type":"ContainerStarted","Data":"9aa1e50d1838fc4cd373fe1f63ac4b410c89c54ecf14931c3d6c938f9ad70e85"} Jan 12 13:24:31 crc kubenswrapper[4580]: I0112 13:24:31.516566 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2a896fc4-1b0f-4186-a168-437fd8a099ea","Type":"ContainerStarted","Data":"94e0fced04fa852143455b8fe175795ab79a07ed36c0a7305027c5b9c0e28021"} 
Jan 12 13:24:31 crc kubenswrapper[4580]: I0112 13:24:31.516632 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2a896fc4-1b0f-4186-a168-437fd8a099ea","Type":"ContainerStarted","Data":"fe9252030effb18362c0ae8e7ce9367b878513805b7d86fe3525d7dfa2ceee3e"} Jan 12 13:24:31 crc kubenswrapper[4580]: I0112 13:24:31.528963 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.528946695 podStartE2EDuration="2.528946695s" podCreationTimestamp="2026-01-12 13:24:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:24:31.526253091 +0000 UTC m=+1070.570471781" watchObservedRunningTime="2026-01-12 13:24:31.528946695 +0000 UTC m=+1070.573165386" Jan 12 13:24:31 crc kubenswrapper[4580]: I0112 13:24:31.544153 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.5441401780000001 podStartE2EDuration="1.544140178s" podCreationTimestamp="2026-01-12 13:24:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:24:31.540041443 +0000 UTC m=+1070.584260133" watchObservedRunningTime="2026-01-12 13:24:31.544140178 +0000 UTC m=+1070.588358869" Jan 12 13:24:31 crc kubenswrapper[4580]: I0112 13:24:31.796753 4580 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="8937c039-ef58-4e1c-ac6e-719494fe812a" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": read tcp 10.217.0.2:50362->10.217.0.194:8775: read: connection reset by peer" Jan 12 13:24:31 crc kubenswrapper[4580]: I0112 13:24:31.796771 4580 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="8937c039-ef58-4e1c-ac6e-719494fe812a" 
containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": read tcp 10.217.0.2:50376->10.217.0.194:8775: read: connection reset by peer" Jan 12 13:24:32 crc kubenswrapper[4580]: I0112 13:24:32.201739 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 12 13:24:32 crc kubenswrapper[4580]: I0112 13:24:32.301667 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8937c039-ef58-4e1c-ac6e-719494fe812a-nova-metadata-tls-certs\") pod \"8937c039-ef58-4e1c-ac6e-719494fe812a\" (UID: \"8937c039-ef58-4e1c-ac6e-719494fe812a\") " Jan 12 13:24:32 crc kubenswrapper[4580]: I0112 13:24:32.301737 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ll824\" (UniqueName: \"kubernetes.io/projected/8937c039-ef58-4e1c-ac6e-719494fe812a-kube-api-access-ll824\") pod \"8937c039-ef58-4e1c-ac6e-719494fe812a\" (UID: \"8937c039-ef58-4e1c-ac6e-719494fe812a\") " Jan 12 13:24:32 crc kubenswrapper[4580]: I0112 13:24:32.301776 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8937c039-ef58-4e1c-ac6e-719494fe812a-logs\") pod \"8937c039-ef58-4e1c-ac6e-719494fe812a\" (UID: \"8937c039-ef58-4e1c-ac6e-719494fe812a\") " Jan 12 13:24:32 crc kubenswrapper[4580]: I0112 13:24:32.301812 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8937c039-ef58-4e1c-ac6e-719494fe812a-combined-ca-bundle\") pod \"8937c039-ef58-4e1c-ac6e-719494fe812a\" (UID: \"8937c039-ef58-4e1c-ac6e-719494fe812a\") " Jan 12 13:24:32 crc kubenswrapper[4580]: I0112 13:24:32.302125 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8937c039-ef58-4e1c-ac6e-719494fe812a-config-data\") pod \"8937c039-ef58-4e1c-ac6e-719494fe812a\" (UID: \"8937c039-ef58-4e1c-ac6e-719494fe812a\") " Jan 12 13:24:32 crc kubenswrapper[4580]: I0112 13:24:32.302275 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8937c039-ef58-4e1c-ac6e-719494fe812a-logs" (OuterVolumeSpecName: "logs") pod "8937c039-ef58-4e1c-ac6e-719494fe812a" (UID: "8937c039-ef58-4e1c-ac6e-719494fe812a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 12 13:24:32 crc kubenswrapper[4580]: I0112 13:24:32.302870 4580 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8937c039-ef58-4e1c-ac6e-719494fe812a-logs\") on node \"crc\" DevicePath \"\"" Jan 12 13:24:32 crc kubenswrapper[4580]: I0112 13:24:32.306386 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8937c039-ef58-4e1c-ac6e-719494fe812a-kube-api-access-ll824" (OuterVolumeSpecName: "kube-api-access-ll824") pod "8937c039-ef58-4e1c-ac6e-719494fe812a" (UID: "8937c039-ef58-4e1c-ac6e-719494fe812a"). InnerVolumeSpecName "kube-api-access-ll824". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:24:32 crc kubenswrapper[4580]: I0112 13:24:32.337955 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8937c039-ef58-4e1c-ac6e-719494fe812a-config-data" (OuterVolumeSpecName: "config-data") pod "8937c039-ef58-4e1c-ac6e-719494fe812a" (UID: "8937c039-ef58-4e1c-ac6e-719494fe812a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:24:32 crc kubenswrapper[4580]: I0112 13:24:32.339029 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8937c039-ef58-4e1c-ac6e-719494fe812a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8937c039-ef58-4e1c-ac6e-719494fe812a" (UID: "8937c039-ef58-4e1c-ac6e-719494fe812a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:24:32 crc kubenswrapper[4580]: I0112 13:24:32.363063 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8937c039-ef58-4e1c-ac6e-719494fe812a-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "8937c039-ef58-4e1c-ac6e-719494fe812a" (UID: "8937c039-ef58-4e1c-ac6e-719494fe812a"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:24:32 crc kubenswrapper[4580]: I0112 13:24:32.405332 4580 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8937c039-ef58-4e1c-ac6e-719494fe812a-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 12 13:24:32 crc kubenswrapper[4580]: I0112 13:24:32.405360 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ll824\" (UniqueName: \"kubernetes.io/projected/8937c039-ef58-4e1c-ac6e-719494fe812a-kube-api-access-ll824\") on node \"crc\" DevicePath \"\"" Jan 12 13:24:32 crc kubenswrapper[4580]: I0112 13:24:32.405370 4580 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8937c039-ef58-4e1c-ac6e-719494fe812a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 12 13:24:32 crc kubenswrapper[4580]: I0112 13:24:32.405379 4580 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8937c039-ef58-4e1c-ac6e-719494fe812a-config-data\") on node \"crc\" DevicePath \"\"" Jan 12 13:24:32 crc kubenswrapper[4580]: I0112 13:24:32.526924 4580 generic.go:334] "Generic (PLEG): container finished" podID="8937c039-ef58-4e1c-ac6e-719494fe812a" containerID="258d8c7ef3c13567f43c0b5a656985856a0e8a5464d2008520117f105296fbe2" exitCode=0 Jan 12 13:24:32 crc kubenswrapper[4580]: I0112 13:24:32.526999 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 12 13:24:32 crc kubenswrapper[4580]: I0112 13:24:32.527030 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8937c039-ef58-4e1c-ac6e-719494fe812a","Type":"ContainerDied","Data":"258d8c7ef3c13567f43c0b5a656985856a0e8a5464d2008520117f105296fbe2"} Jan 12 13:24:32 crc kubenswrapper[4580]: I0112 13:24:32.527087 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8937c039-ef58-4e1c-ac6e-719494fe812a","Type":"ContainerDied","Data":"025fb2732776de849f4c80162d0736ae655d6095c41d5602ad91844fc29f166f"} Jan 12 13:24:32 crc kubenswrapper[4580]: I0112 13:24:32.527117 4580 scope.go:117] "RemoveContainer" containerID="258d8c7ef3c13567f43c0b5a656985856a0e8a5464d2008520117f105296fbe2" Jan 12 13:24:32 crc kubenswrapper[4580]: I0112 13:24:32.554265 4580 scope.go:117] "RemoveContainer" containerID="adcb15e39b50b2f4d811be499fd5edbeffc425e880997968576b35ea0b6b9807" Jan 12 13:24:32 crc kubenswrapper[4580]: I0112 13:24:32.560929 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 12 13:24:32 crc kubenswrapper[4580]: I0112 13:24:32.571033 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 12 13:24:32 crc kubenswrapper[4580]: I0112 13:24:32.574818 4580 scope.go:117] "RemoveContainer" containerID="258d8c7ef3c13567f43c0b5a656985856a0e8a5464d2008520117f105296fbe2" Jan 12 13:24:32 crc 
kubenswrapper[4580]: E0112 13:24:32.575864 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"258d8c7ef3c13567f43c0b5a656985856a0e8a5464d2008520117f105296fbe2\": container with ID starting with 258d8c7ef3c13567f43c0b5a656985856a0e8a5464d2008520117f105296fbe2 not found: ID does not exist" containerID="258d8c7ef3c13567f43c0b5a656985856a0e8a5464d2008520117f105296fbe2" Jan 12 13:24:32 crc kubenswrapper[4580]: I0112 13:24:32.575913 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"258d8c7ef3c13567f43c0b5a656985856a0e8a5464d2008520117f105296fbe2"} err="failed to get container status \"258d8c7ef3c13567f43c0b5a656985856a0e8a5464d2008520117f105296fbe2\": rpc error: code = NotFound desc = could not find container \"258d8c7ef3c13567f43c0b5a656985856a0e8a5464d2008520117f105296fbe2\": container with ID starting with 258d8c7ef3c13567f43c0b5a656985856a0e8a5464d2008520117f105296fbe2 not found: ID does not exist" Jan 12 13:24:32 crc kubenswrapper[4580]: I0112 13:24:32.575942 4580 scope.go:117] "RemoveContainer" containerID="adcb15e39b50b2f4d811be499fd5edbeffc425e880997968576b35ea0b6b9807" Jan 12 13:24:32 crc kubenswrapper[4580]: E0112 13:24:32.576290 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"adcb15e39b50b2f4d811be499fd5edbeffc425e880997968576b35ea0b6b9807\": container with ID starting with adcb15e39b50b2f4d811be499fd5edbeffc425e880997968576b35ea0b6b9807 not found: ID does not exist" containerID="adcb15e39b50b2f4d811be499fd5edbeffc425e880997968576b35ea0b6b9807" Jan 12 13:24:32 crc kubenswrapper[4580]: I0112 13:24:32.576316 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adcb15e39b50b2f4d811be499fd5edbeffc425e880997968576b35ea0b6b9807"} err="failed to get container status 
\"adcb15e39b50b2f4d811be499fd5edbeffc425e880997968576b35ea0b6b9807\": rpc error: code = NotFound desc = could not find container \"adcb15e39b50b2f4d811be499fd5edbeffc425e880997968576b35ea0b6b9807\": container with ID starting with adcb15e39b50b2f4d811be499fd5edbeffc425e880997968576b35ea0b6b9807 not found: ID does not exist" Jan 12 13:24:32 crc kubenswrapper[4580]: I0112 13:24:32.579388 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 12 13:24:32 crc kubenswrapper[4580]: E0112 13:24:32.579896 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8937c039-ef58-4e1c-ac6e-719494fe812a" containerName="nova-metadata-metadata" Jan 12 13:24:32 crc kubenswrapper[4580]: I0112 13:24:32.579920 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="8937c039-ef58-4e1c-ac6e-719494fe812a" containerName="nova-metadata-metadata" Jan 12 13:24:32 crc kubenswrapper[4580]: E0112 13:24:32.579930 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8937c039-ef58-4e1c-ac6e-719494fe812a" containerName="nova-metadata-log" Jan 12 13:24:32 crc kubenswrapper[4580]: I0112 13:24:32.579938 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="8937c039-ef58-4e1c-ac6e-719494fe812a" containerName="nova-metadata-log" Jan 12 13:24:32 crc kubenswrapper[4580]: I0112 13:24:32.580279 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="8937c039-ef58-4e1c-ac6e-719494fe812a" containerName="nova-metadata-metadata" Jan 12 13:24:32 crc kubenswrapper[4580]: I0112 13:24:32.580300 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="8937c039-ef58-4e1c-ac6e-719494fe812a" containerName="nova-metadata-log" Jan 12 13:24:32 crc kubenswrapper[4580]: I0112 13:24:32.581422 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 12 13:24:32 crc kubenswrapper[4580]: I0112 13:24:32.583210 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 12 13:24:32 crc kubenswrapper[4580]: I0112 13:24:32.583352 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 12 13:24:32 crc kubenswrapper[4580]: I0112 13:24:32.585214 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 12 13:24:32 crc kubenswrapper[4580]: I0112 13:24:32.712277 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/35b40e0a-79b7-4ca2-8aa2-f6de40c60088-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"35b40e0a-79b7-4ca2-8aa2-f6de40c60088\") " pod="openstack/nova-metadata-0" Jan 12 13:24:32 crc kubenswrapper[4580]: I0112 13:24:32.712394 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-429z4\" (UniqueName: \"kubernetes.io/projected/35b40e0a-79b7-4ca2-8aa2-f6de40c60088-kube-api-access-429z4\") pod \"nova-metadata-0\" (UID: \"35b40e0a-79b7-4ca2-8aa2-f6de40c60088\") " pod="openstack/nova-metadata-0" Jan 12 13:24:32 crc kubenswrapper[4580]: I0112 13:24:32.712491 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35b40e0a-79b7-4ca2-8aa2-f6de40c60088-config-data\") pod \"nova-metadata-0\" (UID: \"35b40e0a-79b7-4ca2-8aa2-f6de40c60088\") " pod="openstack/nova-metadata-0" Jan 12 13:24:32 crc kubenswrapper[4580]: I0112 13:24:32.712665 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35b40e0a-79b7-4ca2-8aa2-f6de40c60088-logs\") pod \"nova-metadata-0\" 
(UID: \"35b40e0a-79b7-4ca2-8aa2-f6de40c60088\") " pod="openstack/nova-metadata-0" Jan 12 13:24:32 crc kubenswrapper[4580]: I0112 13:24:32.712859 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35b40e0a-79b7-4ca2-8aa2-f6de40c60088-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"35b40e0a-79b7-4ca2-8aa2-f6de40c60088\") " pod="openstack/nova-metadata-0" Jan 12 13:24:32 crc kubenswrapper[4580]: I0112 13:24:32.814472 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35b40e0a-79b7-4ca2-8aa2-f6de40c60088-logs\") pod \"nova-metadata-0\" (UID: \"35b40e0a-79b7-4ca2-8aa2-f6de40c60088\") " pod="openstack/nova-metadata-0" Jan 12 13:24:32 crc kubenswrapper[4580]: I0112 13:24:32.814527 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35b40e0a-79b7-4ca2-8aa2-f6de40c60088-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"35b40e0a-79b7-4ca2-8aa2-f6de40c60088\") " pod="openstack/nova-metadata-0" Jan 12 13:24:32 crc kubenswrapper[4580]: I0112 13:24:32.814604 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/35b40e0a-79b7-4ca2-8aa2-f6de40c60088-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"35b40e0a-79b7-4ca2-8aa2-f6de40c60088\") " pod="openstack/nova-metadata-0" Jan 12 13:24:32 crc kubenswrapper[4580]: I0112 13:24:32.814651 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-429z4\" (UniqueName: \"kubernetes.io/projected/35b40e0a-79b7-4ca2-8aa2-f6de40c60088-kube-api-access-429z4\") pod \"nova-metadata-0\" (UID: \"35b40e0a-79b7-4ca2-8aa2-f6de40c60088\") " pod="openstack/nova-metadata-0" Jan 12 13:24:32 crc kubenswrapper[4580]: I0112 13:24:32.814697 
4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35b40e0a-79b7-4ca2-8aa2-f6de40c60088-config-data\") pod \"nova-metadata-0\" (UID: \"35b40e0a-79b7-4ca2-8aa2-f6de40c60088\") " pod="openstack/nova-metadata-0" Jan 12 13:24:32 crc kubenswrapper[4580]: I0112 13:24:32.814959 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35b40e0a-79b7-4ca2-8aa2-f6de40c60088-logs\") pod \"nova-metadata-0\" (UID: \"35b40e0a-79b7-4ca2-8aa2-f6de40c60088\") " pod="openstack/nova-metadata-0" Jan 12 13:24:32 crc kubenswrapper[4580]: I0112 13:24:32.819954 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/35b40e0a-79b7-4ca2-8aa2-f6de40c60088-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"35b40e0a-79b7-4ca2-8aa2-f6de40c60088\") " pod="openstack/nova-metadata-0" Jan 12 13:24:32 crc kubenswrapper[4580]: I0112 13:24:32.823769 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35b40e0a-79b7-4ca2-8aa2-f6de40c60088-config-data\") pod \"nova-metadata-0\" (UID: \"35b40e0a-79b7-4ca2-8aa2-f6de40c60088\") " pod="openstack/nova-metadata-0" Jan 12 13:24:32 crc kubenswrapper[4580]: I0112 13:24:32.826410 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35b40e0a-79b7-4ca2-8aa2-f6de40c60088-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"35b40e0a-79b7-4ca2-8aa2-f6de40c60088\") " pod="openstack/nova-metadata-0" Jan 12 13:24:32 crc kubenswrapper[4580]: I0112 13:24:32.830749 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-429z4\" (UniqueName: \"kubernetes.io/projected/35b40e0a-79b7-4ca2-8aa2-f6de40c60088-kube-api-access-429z4\") pod \"nova-metadata-0\" (UID: 
\"35b40e0a-79b7-4ca2-8aa2-f6de40c60088\") " pod="openstack/nova-metadata-0" Jan 12 13:24:32 crc kubenswrapper[4580]: I0112 13:24:32.906540 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 12 13:24:33 crc kubenswrapper[4580]: I0112 13:24:33.300221 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8937c039-ef58-4e1c-ac6e-719494fe812a" path="/var/lib/kubelet/pods/8937c039-ef58-4e1c-ac6e-719494fe812a/volumes" Jan 12 13:24:33 crc kubenswrapper[4580]: I0112 13:24:33.308474 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 12 13:24:33 crc kubenswrapper[4580]: W0112 13:24:33.317237 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod35b40e0a_79b7_4ca2_8aa2_f6de40c60088.slice/crio-9d6f8064c923c3b46e3a6ae7e146bf7da43b5e077f5c8cbde60f6b36cf31f523 WatchSource:0}: Error finding container 9d6f8064c923c3b46e3a6ae7e146bf7da43b5e077f5c8cbde60f6b36cf31f523: Status 404 returned error can't find the container with id 9d6f8064c923c3b46e3a6ae7e146bf7da43b5e077f5c8cbde60f6b36cf31f523 Jan 12 13:24:33 crc kubenswrapper[4580]: I0112 13:24:33.538975 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"35b40e0a-79b7-4ca2-8aa2-f6de40c60088","Type":"ContainerStarted","Data":"9d6f8064c923c3b46e3a6ae7e146bf7da43b5e077f5c8cbde60f6b36cf31f523"} Jan 12 13:24:34 crc kubenswrapper[4580]: I0112 13:24:34.548229 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"35b40e0a-79b7-4ca2-8aa2-f6de40c60088","Type":"ContainerStarted","Data":"9ce3db9741ffd4fb2aff181aef927c933db1b99dabcaac7216bd3d886f5894b9"} Jan 12 13:24:34 crc kubenswrapper[4580]: I0112 13:24:34.548521 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"35b40e0a-79b7-4ca2-8aa2-f6de40c60088","Type":"ContainerStarted","Data":"e662078b7b2988b85844a1ded85da107710addd30d016c65cf11e80524218158"} Jan 12 13:24:34 crc kubenswrapper[4580]: I0112 13:24:34.568540 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.568529107 podStartE2EDuration="2.568529107s" podCreationTimestamp="2026-01-12 13:24:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:24:34.564709276 +0000 UTC m=+1073.608927966" watchObservedRunningTime="2026-01-12 13:24:34.568529107 +0000 UTC m=+1073.612747797" Jan 12 13:24:35 crc kubenswrapper[4580]: I0112 13:24:35.893877 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 12 13:24:37 crc kubenswrapper[4580]: I0112 13:24:37.907871 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 12 13:24:37 crc kubenswrapper[4580]: I0112 13:24:37.908402 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 12 13:24:39 crc kubenswrapper[4580]: I0112 13:24:39.861581 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 12 13:24:39 crc kubenswrapper[4580]: I0112 13:24:39.861944 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 12 13:24:40 crc kubenswrapper[4580]: I0112 13:24:40.874222 4580 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="af33dae1-afd6-4b08-a507-64373650c025" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.203:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 12 13:24:40 crc kubenswrapper[4580]: I0112 13:24:40.874248 4580 prober.go:107] "Probe failed" 
probeType="Startup" pod="openstack/nova-api-0" podUID="af33dae1-afd6-4b08-a507-64373650c025" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.203:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 12 13:24:40 crc kubenswrapper[4580]: I0112 13:24:40.893341 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 12 13:24:40 crc kubenswrapper[4580]: I0112 13:24:40.927079 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 12 13:24:41 crc kubenswrapper[4580]: I0112 13:24:41.654909 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 12 13:24:42 crc kubenswrapper[4580]: I0112 13:24:42.907331 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 12 13:24:42 crc kubenswrapper[4580]: I0112 13:24:42.907401 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 12 13:24:43 crc kubenswrapper[4580]: I0112 13:24:43.922219 4580 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="35b40e0a-79b7-4ca2-8aa2-f6de40c60088" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 12 13:24:43 crc kubenswrapper[4580]: I0112 13:24:43.922254 4580 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="35b40e0a-79b7-4ca2-8aa2-f6de40c60088" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 12 13:24:49 crc kubenswrapper[4580]: I0112 13:24:49.745450 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/ceilometer-0"
Jan 12 13:24:49 crc kubenswrapper[4580]: I0112 13:24:49.867435 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 12 13:24:49 crc kubenswrapper[4580]: I0112 13:24:49.867926 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 12 13:24:49 crc kubenswrapper[4580]: I0112 13:24:49.867966 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 12 13:24:49 crc kubenswrapper[4580]: I0112 13:24:49.873404 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 12 13:24:50 crc kubenswrapper[4580]: I0112 13:24:50.711854 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 12 13:24:50 crc kubenswrapper[4580]: I0112 13:24:50.716883 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 12 13:24:52 crc kubenswrapper[4580]: I0112 13:24:52.910900 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Jan 12 13:24:52 crc kubenswrapper[4580]: I0112 13:24:52.913899 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Jan 12 13:24:52 crc kubenswrapper[4580]: I0112 13:24:52.915301 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Jan 12 13:24:53 crc kubenswrapper[4580]: I0112 13:24:53.746445 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Jan 12 13:25:00 crc kubenswrapper[4580]: I0112 13:25:00.220394 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 12 13:25:00 crc kubenswrapper[4580]: I0112 13:25:00.850371 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 12 13:25:04 crc kubenswrapper[4580]: I0112 13:25:04.249892 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="20148d96-39b6-4278-9d29-91874ad352a0" containerName="rabbitmq" containerID="cri-o://e787e404277a17310b6d3fc406e5bd60d452400706e67595752efc3b8fbb76b1" gracePeriod=604796
Jan 12 13:25:05 crc kubenswrapper[4580]: I0112 13:25:05.335956 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="3ee1d970-f295-46eb-91eb-70a45cb019c1" containerName="rabbitmq" containerID="cri-o://001d01df38a9f8e9e63e125e1f0994b1221a148609b9db25d723f7509c2f0f63" gracePeriod=604796
Jan 12 13:25:10 crc kubenswrapper[4580]: I0112 13:25:10.626962 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8595b94875-hcxfj"]
Jan 12 13:25:10 crc kubenswrapper[4580]: I0112 13:25:10.636249 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8595b94875-hcxfj"
Jan 12 13:25:10 crc kubenswrapper[4580]: I0112 13:25:10.641220 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam"
Jan 12 13:25:10 crc kubenswrapper[4580]: I0112 13:25:10.650218 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8595b94875-hcxfj"]
Jan 12 13:25:10 crc kubenswrapper[4580]: I0112 13:25:10.661009 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 12 13:25:10 crc kubenswrapper[4580]: I0112 13:25:10.719764 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3794d2c5-2a69-4c1f-955f-a9677bdb5fb8-dns-swift-storage-0\") pod \"dnsmasq-dns-8595b94875-hcxfj\" (UID: \"3794d2c5-2a69-4c1f-955f-a9677bdb5fb8\") " pod="openstack/dnsmasq-dns-8595b94875-hcxfj"
Jan 12 13:25:10 crc kubenswrapper[4580]: I0112 13:25:10.719848 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3794d2c5-2a69-4c1f-955f-a9677bdb5fb8-config\") pod \"dnsmasq-dns-8595b94875-hcxfj\" (UID: \"3794d2c5-2a69-4c1f-955f-a9677bdb5fb8\") " pod="openstack/dnsmasq-dns-8595b94875-hcxfj"
Jan 12 13:25:10 crc kubenswrapper[4580]: I0112 13:25:10.719978 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3794d2c5-2a69-4c1f-955f-a9677bdb5fb8-dns-svc\") pod \"dnsmasq-dns-8595b94875-hcxfj\" (UID: \"3794d2c5-2a69-4c1f-955f-a9677bdb5fb8\") " pod="openstack/dnsmasq-dns-8595b94875-hcxfj"
Jan 12 13:25:10 crc kubenswrapper[4580]: I0112 13:25:10.822334 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/20148d96-39b6-4278-9d29-91874ad352a0-rabbitmq-plugins\") pod \"20148d96-39b6-4278-9d29-91874ad352a0\" (UID: \"20148d96-39b6-4278-9d29-91874ad352a0\") "
Jan 12 13:25:10 crc kubenswrapper[4580]: I0112 13:25:10.822386 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/20148d96-39b6-4278-9d29-91874ad352a0-server-conf\") pod \"20148d96-39b6-4278-9d29-91874ad352a0\" (UID: \"20148d96-39b6-4278-9d29-91874ad352a0\") "
Jan 12 13:25:10 crc kubenswrapper[4580]: I0112 13:25:10.822463 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/20148d96-39b6-4278-9d29-91874ad352a0-pod-info\") pod \"20148d96-39b6-4278-9d29-91874ad352a0\" (UID: \"20148d96-39b6-4278-9d29-91874ad352a0\") "
Jan 12 13:25:10 crc kubenswrapper[4580]: I0112 13:25:10.822492 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/20148d96-39b6-4278-9d29-91874ad352a0-rabbitmq-confd\") pod \"20148d96-39b6-4278-9d29-91874ad352a0\" (UID: \"20148d96-39b6-4278-9d29-91874ad352a0\") "
Jan 12 13:25:10 crc kubenswrapper[4580]: I0112 13:25:10.822513 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"20148d96-39b6-4278-9d29-91874ad352a0\" (UID: \"20148d96-39b6-4278-9d29-91874ad352a0\") "
Jan 12 13:25:10 crc kubenswrapper[4580]: I0112 13:25:10.822544 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/20148d96-39b6-4278-9d29-91874ad352a0-config-data\") pod \"20148d96-39b6-4278-9d29-91874ad352a0\" (UID: \"20148d96-39b6-4278-9d29-91874ad352a0\") "
Jan 12 13:25:10 crc kubenswrapper[4580]: I0112 13:25:10.822588 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/20148d96-39b6-4278-9d29-91874ad352a0-plugins-conf\") pod \"20148d96-39b6-4278-9d29-91874ad352a0\" (UID: \"20148d96-39b6-4278-9d29-91874ad352a0\") "
Jan 12 13:25:10 crc kubenswrapper[4580]: I0112 13:25:10.822655 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/20148d96-39b6-4278-9d29-91874ad352a0-erlang-cookie-secret\") pod \"20148d96-39b6-4278-9d29-91874ad352a0\" (UID: \"20148d96-39b6-4278-9d29-91874ad352a0\") "
Jan 12 13:25:10 crc kubenswrapper[4580]: I0112 13:25:10.822769 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/20148d96-39b6-4278-9d29-91874ad352a0-rabbitmq-erlang-cookie\") pod \"20148d96-39b6-4278-9d29-91874ad352a0\" (UID: \"20148d96-39b6-4278-9d29-91874ad352a0\") "
Jan 12 13:25:10 crc kubenswrapper[4580]: I0112 13:25:10.822791 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmjh7\" (UniqueName: \"kubernetes.io/projected/20148d96-39b6-4278-9d29-91874ad352a0-kube-api-access-wmjh7\") pod \"20148d96-39b6-4278-9d29-91874ad352a0\" (UID: \"20148d96-39b6-4278-9d29-91874ad352a0\") "
Jan 12 13:25:10 crc kubenswrapper[4580]: I0112 13:25:10.822845 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/20148d96-39b6-4278-9d29-91874ad352a0-rabbitmq-tls\") pod \"20148d96-39b6-4278-9d29-91874ad352a0\" (UID: \"20148d96-39b6-4278-9d29-91874ad352a0\") "
Jan 12 13:25:10 crc kubenswrapper[4580]: I0112 13:25:10.823485 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20148d96-39b6-4278-9d29-91874ad352a0-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "20148d96-39b6-4278-9d29-91874ad352a0" (UID: "20148d96-39b6-4278-9d29-91874ad352a0"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:25:10 crc kubenswrapper[4580]: I0112 13:25:10.823820 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20148d96-39b6-4278-9d29-91874ad352a0-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "20148d96-39b6-4278-9d29-91874ad352a0" (UID: "20148d96-39b6-4278-9d29-91874ad352a0"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 12 13:25:10 crc kubenswrapper[4580]: I0112 13:25:10.823846 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3794d2c5-2a69-4c1f-955f-a9677bdb5fb8-ovsdbserver-sb\") pod \"dnsmasq-dns-8595b94875-hcxfj\" (UID: \"3794d2c5-2a69-4c1f-955f-a9677bdb5fb8\") " pod="openstack/dnsmasq-dns-8595b94875-hcxfj"
Jan 12 13:25:10 crc kubenswrapper[4580]: I0112 13:25:10.823898 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20148d96-39b6-4278-9d29-91874ad352a0-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "20148d96-39b6-4278-9d29-91874ad352a0" (UID: "20148d96-39b6-4278-9d29-91874ad352a0"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 12 13:25:10 crc kubenswrapper[4580]: I0112 13:25:10.823934 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3794d2c5-2a69-4c1f-955f-a9677bdb5fb8-dns-svc\") pod \"dnsmasq-dns-8595b94875-hcxfj\" (UID: \"3794d2c5-2a69-4c1f-955f-a9677bdb5fb8\") " pod="openstack/dnsmasq-dns-8595b94875-hcxfj"
Jan 12 13:25:10 crc kubenswrapper[4580]: I0112 13:25:10.824021 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3794d2c5-2a69-4c1f-955f-a9677bdb5fb8-openstack-edpm-ipam\") pod \"dnsmasq-dns-8595b94875-hcxfj\" (UID: \"3794d2c5-2a69-4c1f-955f-a9677bdb5fb8\") " pod="openstack/dnsmasq-dns-8595b94875-hcxfj"
Jan 12 13:25:10 crc kubenswrapper[4580]: I0112 13:25:10.824122 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3794d2c5-2a69-4c1f-955f-a9677bdb5fb8-ovsdbserver-nb\") pod \"dnsmasq-dns-8595b94875-hcxfj\" (UID: \"3794d2c5-2a69-4c1f-955f-a9677bdb5fb8\") " pod="openstack/dnsmasq-dns-8595b94875-hcxfj"
Jan 12 13:25:10 crc kubenswrapper[4580]: I0112 13:25:10.824439 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3794d2c5-2a69-4c1f-955f-a9677bdb5fb8-dns-swift-storage-0\") pod \"dnsmasq-dns-8595b94875-hcxfj\" (UID: \"3794d2c5-2a69-4c1f-955f-a9677bdb5fb8\") " pod="openstack/dnsmasq-dns-8595b94875-hcxfj"
Jan 12 13:25:10 crc kubenswrapper[4580]: I0112 13:25:10.824572 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3794d2c5-2a69-4c1f-955f-a9677bdb5fb8-config\") pod \"dnsmasq-dns-8595b94875-hcxfj\" (UID: \"3794d2c5-2a69-4c1f-955f-a9677bdb5fb8\") " pod="openstack/dnsmasq-dns-8595b94875-hcxfj"
Jan 12 13:25:10 crc kubenswrapper[4580]: I0112 13:25:10.824708 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4kc2\" (UniqueName: \"kubernetes.io/projected/3794d2c5-2a69-4c1f-955f-a9677bdb5fb8-kube-api-access-d4kc2\") pod \"dnsmasq-dns-8595b94875-hcxfj\" (UID: \"3794d2c5-2a69-4c1f-955f-a9677bdb5fb8\") " pod="openstack/dnsmasq-dns-8595b94875-hcxfj"
Jan 12 13:25:10 crc kubenswrapper[4580]: I0112 13:25:10.824807 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3794d2c5-2a69-4c1f-955f-a9677bdb5fb8-dns-svc\") pod \"dnsmasq-dns-8595b94875-hcxfj\" (UID: \"3794d2c5-2a69-4c1f-955f-a9677bdb5fb8\") " pod="openstack/dnsmasq-dns-8595b94875-hcxfj"
Jan 12 13:25:10 crc kubenswrapper[4580]: I0112 13:25:10.825083 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3794d2c5-2a69-4c1f-955f-a9677bdb5fb8-dns-swift-storage-0\") pod \"dnsmasq-dns-8595b94875-hcxfj\" (UID: \"3794d2c5-2a69-4c1f-955f-a9677bdb5fb8\") " pod="openstack/dnsmasq-dns-8595b94875-hcxfj"
Jan 12 13:25:10 crc kubenswrapper[4580]: I0112 13:25:10.825413 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3794d2c5-2a69-4c1f-955f-a9677bdb5fb8-config\") pod \"dnsmasq-dns-8595b94875-hcxfj\" (UID: \"3794d2c5-2a69-4c1f-955f-a9677bdb5fb8\") " pod="openstack/dnsmasq-dns-8595b94875-hcxfj"
Jan 12 13:25:10 crc kubenswrapper[4580]: I0112 13:25:10.825643 4580 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/20148d96-39b6-4278-9d29-91874ad352a0-plugins-conf\") on node \"crc\" DevicePath \"\""
Jan 12 13:25:10 crc kubenswrapper[4580]: I0112 13:25:10.825664 4580 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/20148d96-39b6-4278-9d29-91874ad352a0-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Jan 12 13:25:10 crc kubenswrapper[4580]: I0112 13:25:10.825681 4580 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/20148d96-39b6-4278-9d29-91874ad352a0-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Jan 12 13:25:10 crc kubenswrapper[4580]: I0112 13:25:10.827057 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "20148d96-39b6-4278-9d29-91874ad352a0" (UID: "20148d96-39b6-4278-9d29-91874ad352a0"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 12 13:25:10 crc kubenswrapper[4580]: I0112 13:25:10.828567 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/20148d96-39b6-4278-9d29-91874ad352a0-pod-info" (OuterVolumeSpecName: "pod-info") pod "20148d96-39b6-4278-9d29-91874ad352a0" (UID: "20148d96-39b6-4278-9d29-91874ad352a0"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Jan 12 13:25:10 crc kubenswrapper[4580]: I0112 13:25:10.828581 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20148d96-39b6-4278-9d29-91874ad352a0-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "20148d96-39b6-4278-9d29-91874ad352a0" (UID: "20148d96-39b6-4278-9d29-91874ad352a0"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:25:10 crc kubenswrapper[4580]: I0112 13:25:10.828989 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20148d96-39b6-4278-9d29-91874ad352a0-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "20148d96-39b6-4278-9d29-91874ad352a0" (UID: "20148d96-39b6-4278-9d29-91874ad352a0"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 13:25:10 crc kubenswrapper[4580]: I0112 13:25:10.830298 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20148d96-39b6-4278-9d29-91874ad352a0-kube-api-access-wmjh7" (OuterVolumeSpecName: "kube-api-access-wmjh7") pod "20148d96-39b6-4278-9d29-91874ad352a0" (UID: "20148d96-39b6-4278-9d29-91874ad352a0"). InnerVolumeSpecName "kube-api-access-wmjh7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 13:25:10 crc kubenswrapper[4580]: I0112 13:25:10.845507 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20148d96-39b6-4278-9d29-91874ad352a0-config-data" (OuterVolumeSpecName: "config-data") pod "20148d96-39b6-4278-9d29-91874ad352a0" (UID: "20148d96-39b6-4278-9d29-91874ad352a0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:25:10 crc kubenswrapper[4580]: I0112 13:25:10.872174 4580 generic.go:334] "Generic (PLEG): container finished" podID="20148d96-39b6-4278-9d29-91874ad352a0" containerID="e787e404277a17310b6d3fc406e5bd60d452400706e67595752efc3b8fbb76b1" exitCode=0
Jan 12 13:25:10 crc kubenswrapper[4580]: I0112 13:25:10.872231 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"20148d96-39b6-4278-9d29-91874ad352a0","Type":"ContainerDied","Data":"e787e404277a17310b6d3fc406e5bd60d452400706e67595752efc3b8fbb76b1"}
Jan 12 13:25:10 crc kubenswrapper[4580]: I0112 13:25:10.872237 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 12 13:25:10 crc kubenswrapper[4580]: I0112 13:25:10.872256 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"20148d96-39b6-4278-9d29-91874ad352a0","Type":"ContainerDied","Data":"50f5b488f68dbc1636d0b5fb334646b3801bd70073fe6cafe1a627b2deb23c59"}
Jan 12 13:25:10 crc kubenswrapper[4580]: I0112 13:25:10.872273 4580 scope.go:117] "RemoveContainer" containerID="e787e404277a17310b6d3fc406e5bd60d452400706e67595752efc3b8fbb76b1"
Jan 12 13:25:10 crc kubenswrapper[4580]: I0112 13:25:10.878617 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20148d96-39b6-4278-9d29-91874ad352a0-server-conf" (OuterVolumeSpecName: "server-conf") pod "20148d96-39b6-4278-9d29-91874ad352a0" (UID: "20148d96-39b6-4278-9d29-91874ad352a0"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:25:10 crc kubenswrapper[4580]: I0112 13:25:10.912406 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20148d96-39b6-4278-9d29-91874ad352a0-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "20148d96-39b6-4278-9d29-91874ad352a0" (UID: "20148d96-39b6-4278-9d29-91874ad352a0"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 13:25:10 crc kubenswrapper[4580]: I0112 13:25:10.918149 4580 scope.go:117] "RemoveContainer" containerID="a8f1963647ca5448a3a66557a4f17a1971d8dc98b5a61c6d9104b58063c1f65d"
Jan 12 13:25:10 crc kubenswrapper[4580]: I0112 13:25:10.926650 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3794d2c5-2a69-4c1f-955f-a9677bdb5fb8-ovsdbserver-sb\") pod \"dnsmasq-dns-8595b94875-hcxfj\" (UID: \"3794d2c5-2a69-4c1f-955f-a9677bdb5fb8\") " pod="openstack/dnsmasq-dns-8595b94875-hcxfj"
Jan 12 13:25:10 crc kubenswrapper[4580]: I0112 13:25:10.926690 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3794d2c5-2a69-4c1f-955f-a9677bdb5fb8-openstack-edpm-ipam\") pod \"dnsmasq-dns-8595b94875-hcxfj\" (UID: \"3794d2c5-2a69-4c1f-955f-a9677bdb5fb8\") " pod="openstack/dnsmasq-dns-8595b94875-hcxfj"
Jan 12 13:25:10 crc kubenswrapper[4580]: I0112 13:25:10.926722 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3794d2c5-2a69-4c1f-955f-a9677bdb5fb8-ovsdbserver-nb\") pod \"dnsmasq-dns-8595b94875-hcxfj\" (UID: \"3794d2c5-2a69-4c1f-955f-a9677bdb5fb8\") " pod="openstack/dnsmasq-dns-8595b94875-hcxfj"
Jan 12 13:25:10 crc kubenswrapper[4580]: I0112 13:25:10.926797 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4kc2\" (UniqueName: \"kubernetes.io/projected/3794d2c5-2a69-4c1f-955f-a9677bdb5fb8-kube-api-access-d4kc2\") pod \"dnsmasq-dns-8595b94875-hcxfj\" (UID: \"3794d2c5-2a69-4c1f-955f-a9677bdb5fb8\") " pod="openstack/dnsmasq-dns-8595b94875-hcxfj"
Jan 12 13:25:10 crc kubenswrapper[4580]: I0112 13:25:10.926870 4580 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/20148d96-39b6-4278-9d29-91874ad352a0-server-conf\") on node \"crc\" DevicePath \"\""
Jan 12 13:25:10 crc kubenswrapper[4580]: I0112 13:25:10.926885 4580 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/20148d96-39b6-4278-9d29-91874ad352a0-pod-info\") on node \"crc\" DevicePath \"\""
Jan 12 13:25:10 crc kubenswrapper[4580]: I0112 13:25:10.926894 4580 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/20148d96-39b6-4278-9d29-91874ad352a0-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Jan 12 13:25:10 crc kubenswrapper[4580]: I0112 13:25:10.926915 4580 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" "
Jan 12 13:25:10 crc kubenswrapper[4580]: I0112 13:25:10.926925 4580 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/20148d96-39b6-4278-9d29-91874ad352a0-config-data\") on node \"crc\" DevicePath \"\""
Jan 12 13:25:10 crc kubenswrapper[4580]: I0112 13:25:10.926933 4580 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/20148d96-39b6-4278-9d29-91874ad352a0-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Jan 12 13:25:10 crc kubenswrapper[4580]: I0112 13:25:10.926941 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmjh7\" (UniqueName: \"kubernetes.io/projected/20148d96-39b6-4278-9d29-91874ad352a0-kube-api-access-wmjh7\") on node \"crc\" DevicePath \"\""
Jan 12 13:25:10 crc kubenswrapper[4580]: I0112 13:25:10.926949 4580 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/20148d96-39b6-4278-9d29-91874ad352a0-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Jan 12 13:25:10 crc kubenswrapper[4580]: I0112 13:25:10.927942 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3794d2c5-2a69-4c1f-955f-a9677bdb5fb8-ovsdbserver-sb\") pod \"dnsmasq-dns-8595b94875-hcxfj\" (UID: \"3794d2c5-2a69-4c1f-955f-a9677bdb5fb8\") " pod="openstack/dnsmasq-dns-8595b94875-hcxfj"
Jan 12 13:25:10 crc kubenswrapper[4580]: I0112 13:25:10.928696 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3794d2c5-2a69-4c1f-955f-a9677bdb5fb8-openstack-edpm-ipam\") pod \"dnsmasq-dns-8595b94875-hcxfj\" (UID: \"3794d2c5-2a69-4c1f-955f-a9677bdb5fb8\") " pod="openstack/dnsmasq-dns-8595b94875-hcxfj"
Jan 12 13:25:10 crc kubenswrapper[4580]: I0112 13:25:10.929091 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3794d2c5-2a69-4c1f-955f-a9677bdb5fb8-ovsdbserver-nb\") pod \"dnsmasq-dns-8595b94875-hcxfj\" (UID: \"3794d2c5-2a69-4c1f-955f-a9677bdb5fb8\") " pod="openstack/dnsmasq-dns-8595b94875-hcxfj"
Jan 12 13:25:10 crc kubenswrapper[4580]: I0112 13:25:10.944024 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4kc2\" (UniqueName: \"kubernetes.io/projected/3794d2c5-2a69-4c1f-955f-a9677bdb5fb8-kube-api-access-d4kc2\") pod \"dnsmasq-dns-8595b94875-hcxfj\" (UID: \"3794d2c5-2a69-4c1f-955f-a9677bdb5fb8\") " pod="openstack/dnsmasq-dns-8595b94875-hcxfj"
Jan 12 13:25:10 crc kubenswrapper[4580]: I0112 13:25:10.945409 4580 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc"
Jan 12 13:25:10 crc kubenswrapper[4580]: I0112 13:25:10.955389 4580 scope.go:117] "RemoveContainer" containerID="e787e404277a17310b6d3fc406e5bd60d452400706e67595752efc3b8fbb76b1"
Jan 12 13:25:10 crc kubenswrapper[4580]: E0112 13:25:10.955888 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e787e404277a17310b6d3fc406e5bd60d452400706e67595752efc3b8fbb76b1\": container with ID starting with e787e404277a17310b6d3fc406e5bd60d452400706e67595752efc3b8fbb76b1 not found: ID does not exist" containerID="e787e404277a17310b6d3fc406e5bd60d452400706e67595752efc3b8fbb76b1"
Jan 12 13:25:10 crc kubenswrapper[4580]: I0112 13:25:10.955926 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e787e404277a17310b6d3fc406e5bd60d452400706e67595752efc3b8fbb76b1"} err="failed to get container status \"e787e404277a17310b6d3fc406e5bd60d452400706e67595752efc3b8fbb76b1\": rpc error: code = NotFound desc = could not find container \"e787e404277a17310b6d3fc406e5bd60d452400706e67595752efc3b8fbb76b1\": container with ID starting with e787e404277a17310b6d3fc406e5bd60d452400706e67595752efc3b8fbb76b1 not found: ID does not exist"
Jan 12 13:25:10 crc kubenswrapper[4580]: I0112 13:25:10.955951 4580 scope.go:117] "RemoveContainer" containerID="a8f1963647ca5448a3a66557a4f17a1971d8dc98b5a61c6d9104b58063c1f65d"
Jan 12 13:25:10 crc kubenswrapper[4580]: E0112 13:25:10.956399 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8f1963647ca5448a3a66557a4f17a1971d8dc98b5a61c6d9104b58063c1f65d\": container with ID starting with a8f1963647ca5448a3a66557a4f17a1971d8dc98b5a61c6d9104b58063c1f65d not found: ID does not exist" containerID="a8f1963647ca5448a3a66557a4f17a1971d8dc98b5a61c6d9104b58063c1f65d"
Jan 12 13:25:10 crc kubenswrapper[4580]: I0112 13:25:10.956427 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8f1963647ca5448a3a66557a4f17a1971d8dc98b5a61c6d9104b58063c1f65d"} err="failed to get container status \"a8f1963647ca5448a3a66557a4f17a1971d8dc98b5a61c6d9104b58063c1f65d\": rpc error: code = NotFound desc = could not find container \"a8f1963647ca5448a3a66557a4f17a1971d8dc98b5a61c6d9104b58063c1f65d\": container with ID starting with a8f1963647ca5448a3a66557a4f17a1971d8dc98b5a61c6d9104b58063c1f65d not found: ID does not exist"
Jan 12 13:25:10 crc kubenswrapper[4580]: I0112 13:25:10.973986 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8595b94875-hcxfj"
Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.027842 4580 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\""
Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.206863 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.222438 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.230398 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 12 13:25:11 crc kubenswrapper[4580]: E0112 13:25:11.230796 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20148d96-39b6-4278-9d29-91874ad352a0" containerName="rabbitmq"
Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.230815 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="20148d96-39b6-4278-9d29-91874ad352a0" containerName="rabbitmq"
Jan 12 13:25:11 crc kubenswrapper[4580]: E0112 13:25:11.230827 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20148d96-39b6-4278-9d29-91874ad352a0" containerName="setup-container"
Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.230834 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="20148d96-39b6-4278-9d29-91874ad352a0" containerName="setup-container"
Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.230992 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="20148d96-39b6-4278-9d29-91874ad352a0" containerName="rabbitmq"
Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.241610 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.248617 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.248818 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.249026 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.249201 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.249346 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-lwp97"
Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.249495 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.249659 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.259798 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.296771 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20148d96-39b6-4278-9d29-91874ad352a0" path="/var/lib/kubelet/pods/20148d96-39b6-4278-9d29-91874ad352a0/volumes"
Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.341923 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/45d6b817-38ed-4b91-b375-d0b358eaab0b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"45d6b817-38ed-4b91-b375-d0b358eaab0b\") " pod="openstack/rabbitmq-server-0"
Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.341983 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/45d6b817-38ed-4b91-b375-d0b358eaab0b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"45d6b817-38ed-4b91-b375-d0b358eaab0b\") " pod="openstack/rabbitmq-server-0"
Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.342077 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/45d6b817-38ed-4b91-b375-d0b358eaab0b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"45d6b817-38ed-4b91-b375-d0b358eaab0b\") " pod="openstack/rabbitmq-server-0"
Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.342136 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/45d6b817-38ed-4b91-b375-d0b358eaab0b-config-data\") pod \"rabbitmq-server-0\" (UID: \"45d6b817-38ed-4b91-b375-d0b358eaab0b\") " pod="openstack/rabbitmq-server-0"
Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.342184 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/45d6b817-38ed-4b91-b375-d0b358eaab0b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"45d6b817-38ed-4b91-b375-d0b358eaab0b\") " pod="openstack/rabbitmq-server-0"
Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.342268 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"45d6b817-38ed-4b91-b375-d0b358eaab0b\") " pod="openstack/rabbitmq-server-0"
Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.342290 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/45d6b817-38ed-4b91-b375-d0b358eaab0b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"45d6b817-38ed-4b91-b375-d0b358eaab0b\") " pod="openstack/rabbitmq-server-0"
Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.342320 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/45d6b817-38ed-4b91-b375-d0b358eaab0b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"45d6b817-38ed-4b91-b375-d0b358eaab0b\") " pod="openstack/rabbitmq-server-0"
Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.342945 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/45d6b817-38ed-4b91-b375-d0b358eaab0b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"45d6b817-38ed-4b91-b375-d0b358eaab0b\") " pod="openstack/rabbitmq-server-0"
Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.343007 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw7mt\" (UniqueName: \"kubernetes.io/projected/45d6b817-38ed-4b91-b375-d0b358eaab0b-kube-api-access-mw7mt\") pod \"rabbitmq-server-0\" (UID: \"45d6b817-38ed-4b91-b375-d0b358eaab0b\") " pod="openstack/rabbitmq-server-0"
Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.343088 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/45d6b817-38ed-4b91-b375-d0b358eaab0b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"45d6b817-38ed-4b91-b375-d0b358eaab0b\") " pod="openstack/rabbitmq-server-0"
Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.379337 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8595b94875-hcxfj"]
Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.444026 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/45d6b817-38ed-4b91-b375-d0b358eaab0b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"45d6b817-38ed-4b91-b375-d0b358eaab0b\") " pod="openstack/rabbitmq-server-0"
Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.444142 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/45d6b817-38ed-4b91-b375-d0b358eaab0b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"45d6b817-38ed-4b91-b375-d0b358eaab0b\") " pod="openstack/rabbitmq-server-0"
Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.444176 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mw7mt\" (UniqueName: \"kubernetes.io/projected/45d6b817-38ed-4b91-b375-d0b358eaab0b-kube-api-access-mw7mt\") pod \"rabbitmq-server-0\" (UID: \"45d6b817-38ed-4b91-b375-d0b358eaab0b\") " pod="openstack/rabbitmq-server-0"
Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.444210 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/45d6b817-38ed-4b91-b375-d0b358eaab0b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"45d6b817-38ed-4b91-b375-d0b358eaab0b\") " pod="openstack/rabbitmq-server-0"
Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.444242 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/45d6b817-38ed-4b91-b375-d0b358eaab0b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"45d6b817-38ed-4b91-b375-d0b358eaab0b\") " pod="openstack/rabbitmq-server-0"
Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.444261 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/45d6b817-38ed-4b91-b375-d0b358eaab0b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"45d6b817-38ed-4b91-b375-d0b358eaab0b\") " pod="openstack/rabbitmq-server-0"
Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.444309 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/45d6b817-38ed-4b91-b375-d0b358eaab0b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"45d6b817-38ed-4b91-b375-d0b358eaab0b\") " pod="openstack/rabbitmq-server-0"
Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.444330 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/45d6b817-38ed-4b91-b375-d0b358eaab0b-config-data\") pod \"rabbitmq-server-0\" (UID: \"45d6b817-38ed-4b91-b375-d0b358eaab0b\") " pod="openstack/rabbitmq-server-0"
Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.444377 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/45d6b817-38ed-4b91-b375-d0b358eaab0b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"45d6b817-38ed-4b91-b375-d0b358eaab0b\") " pod="openstack/rabbitmq-server-0"
Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.444413 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"45d6b817-38ed-4b91-b375-d0b358eaab0b\") " pod="openstack/rabbitmq-server-0"
Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.444433 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/45d6b817-38ed-4b91-b375-d0b358eaab0b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"45d6b817-38ed-4b91-b375-d0b358eaab0b\") " pod="openstack/rabbitmq-server-0"
Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.444580 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/45d6b817-38ed-4b91-b375-d0b358eaab0b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"45d6b817-38ed-4b91-b375-d0b358eaab0b\") " pod="openstack/rabbitmq-server-0"
Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.444731 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/45d6b817-38ed-4b91-b375-d0b358eaab0b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"45d6b817-38ed-4b91-b375-d0b358eaab0b\") " pod="openstack/rabbitmq-server-0"
Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.445293 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/45d6b817-38ed-4b91-b375-d0b358eaab0b-config-data\") pod \"rabbitmq-server-0\" (UID: \"45d6b817-38ed-4b91-b375-d0b358eaab0b\") " pod="openstack/rabbitmq-server-0"
Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.445806 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/45d6b817-38ed-4b91-b375-d0b358eaab0b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"45d6b817-38ed-4b91-b375-d0b358eaab0b\") " pod="openstack/rabbitmq-server-0"
Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.445935 4580 operation_generator.go:580] "MountVolume.MountDevice succeeded for
volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"45d6b817-38ed-4b91-b375-d0b358eaab0b\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.445931 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/45d6b817-38ed-4b91-b375-d0b358eaab0b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"45d6b817-38ed-4b91-b375-d0b358eaab0b\") " pod="openstack/rabbitmq-server-0" Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.447747 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/45d6b817-38ed-4b91-b375-d0b358eaab0b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"45d6b817-38ed-4b91-b375-d0b358eaab0b\") " pod="openstack/rabbitmq-server-0" Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.456115 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/45d6b817-38ed-4b91-b375-d0b358eaab0b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"45d6b817-38ed-4b91-b375-d0b358eaab0b\") " pod="openstack/rabbitmq-server-0" Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.456348 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/45d6b817-38ed-4b91-b375-d0b358eaab0b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"45d6b817-38ed-4b91-b375-d0b358eaab0b\") " pod="openstack/rabbitmq-server-0" Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.467475 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/45d6b817-38ed-4b91-b375-d0b358eaab0b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"45d6b817-38ed-4b91-b375-d0b358eaab0b\") " 
pod="openstack/rabbitmq-server-0" Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.468087 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mw7mt\" (UniqueName: \"kubernetes.io/projected/45d6b817-38ed-4b91-b375-d0b358eaab0b-kube-api-access-mw7mt\") pod \"rabbitmq-server-0\" (UID: \"45d6b817-38ed-4b91-b375-d0b358eaab0b\") " pod="openstack/rabbitmq-server-0" Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.479998 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"45d6b817-38ed-4b91-b375-d0b358eaab0b\") " pod="openstack/rabbitmq-server-0" Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.561691 4580 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="3ee1d970-f295-46eb-91eb-70a45cb019c1" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.95:5671: connect: connection refused" Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.564265 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.820054 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.851062 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbf9n\" (UniqueName: \"kubernetes.io/projected/3ee1d970-f295-46eb-91eb-70a45cb019c1-kube-api-access-kbf9n\") pod \"3ee1d970-f295-46eb-91eb-70a45cb019c1\" (UID: \"3ee1d970-f295-46eb-91eb-70a45cb019c1\") " Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.851097 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3ee1d970-f295-46eb-91eb-70a45cb019c1-config-data\") pod \"3ee1d970-f295-46eb-91eb-70a45cb019c1\" (UID: \"3ee1d970-f295-46eb-91eb-70a45cb019c1\") " Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.851200 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3ee1d970-f295-46eb-91eb-70a45cb019c1-pod-info\") pod \"3ee1d970-f295-46eb-91eb-70a45cb019c1\" (UID: \"3ee1d970-f295-46eb-91eb-70a45cb019c1\") " Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.851232 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3ee1d970-f295-46eb-91eb-70a45cb019c1-rabbitmq-plugins\") pod \"3ee1d970-f295-46eb-91eb-70a45cb019c1\" (UID: \"3ee1d970-f295-46eb-91eb-70a45cb019c1\") " Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.851297 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3ee1d970-f295-46eb-91eb-70a45cb019c1-rabbitmq-confd\") pod \"3ee1d970-f295-46eb-91eb-70a45cb019c1\" (UID: \"3ee1d970-f295-46eb-91eb-70a45cb019c1\") " Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.851319 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"3ee1d970-f295-46eb-91eb-70a45cb019c1\" (UID: \"3ee1d970-f295-46eb-91eb-70a45cb019c1\") " Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.851350 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3ee1d970-f295-46eb-91eb-70a45cb019c1-server-conf\") pod \"3ee1d970-f295-46eb-91eb-70a45cb019c1\" (UID: \"3ee1d970-f295-46eb-91eb-70a45cb019c1\") " Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.851401 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3ee1d970-f295-46eb-91eb-70a45cb019c1-rabbitmq-tls\") pod \"3ee1d970-f295-46eb-91eb-70a45cb019c1\" (UID: \"3ee1d970-f295-46eb-91eb-70a45cb019c1\") " Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.851422 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3ee1d970-f295-46eb-91eb-70a45cb019c1-erlang-cookie-secret\") pod \"3ee1d970-f295-46eb-91eb-70a45cb019c1\" (UID: \"3ee1d970-f295-46eb-91eb-70a45cb019c1\") " Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.851452 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3ee1d970-f295-46eb-91eb-70a45cb019c1-plugins-conf\") pod \"3ee1d970-f295-46eb-91eb-70a45cb019c1\" (UID: \"3ee1d970-f295-46eb-91eb-70a45cb019c1\") " Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.851480 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3ee1d970-f295-46eb-91eb-70a45cb019c1-rabbitmq-erlang-cookie\") pod \"3ee1d970-f295-46eb-91eb-70a45cb019c1\" (UID: \"3ee1d970-f295-46eb-91eb-70a45cb019c1\") " Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 
13:25:11.852941 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ee1d970-f295-46eb-91eb-70a45cb019c1-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "3ee1d970-f295-46eb-91eb-70a45cb019c1" (UID: "3ee1d970-f295-46eb-91eb-70a45cb019c1"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.856477 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ee1d970-f295-46eb-91eb-70a45cb019c1-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "3ee1d970-f295-46eb-91eb-70a45cb019c1" (UID: "3ee1d970-f295-46eb-91eb-70a45cb019c1"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.857159 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ee1d970-f295-46eb-91eb-70a45cb019c1-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "3ee1d970-f295-46eb-91eb-70a45cb019c1" (UID: "3ee1d970-f295-46eb-91eb-70a45cb019c1"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.859743 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "persistence") pod "3ee1d970-f295-46eb-91eb-70a45cb019c1" (UID: "3ee1d970-f295-46eb-91eb-70a45cb019c1"). InnerVolumeSpecName "local-storage09-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.862198 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/3ee1d970-f295-46eb-91eb-70a45cb019c1-pod-info" (OuterVolumeSpecName: "pod-info") pod "3ee1d970-f295-46eb-91eb-70a45cb019c1" (UID: "3ee1d970-f295-46eb-91eb-70a45cb019c1"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.863527 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ee1d970-f295-46eb-91eb-70a45cb019c1-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "3ee1d970-f295-46eb-91eb-70a45cb019c1" (UID: "3ee1d970-f295-46eb-91eb-70a45cb019c1"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.863608 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ee1d970-f295-46eb-91eb-70a45cb019c1-kube-api-access-kbf9n" (OuterVolumeSpecName: "kube-api-access-kbf9n") pod "3ee1d970-f295-46eb-91eb-70a45cb019c1" (UID: "3ee1d970-f295-46eb-91eb-70a45cb019c1"). InnerVolumeSpecName "kube-api-access-kbf9n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.866435 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ee1d970-f295-46eb-91eb-70a45cb019c1-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "3ee1d970-f295-46eb-91eb-70a45cb019c1" (UID: "3ee1d970-f295-46eb-91eb-70a45cb019c1"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.884618 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ee1d970-f295-46eb-91eb-70a45cb019c1-config-data" (OuterVolumeSpecName: "config-data") pod "3ee1d970-f295-46eb-91eb-70a45cb019c1" (UID: "3ee1d970-f295-46eb-91eb-70a45cb019c1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.897710 4580 generic.go:334] "Generic (PLEG): container finished" podID="3794d2c5-2a69-4c1f-955f-a9677bdb5fb8" containerID="61966a8fd81f3fb5a5a673e37c556903566067ad9cd76ea231e364a354e4e704" exitCode=0 Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.898081 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8595b94875-hcxfj" event={"ID":"3794d2c5-2a69-4c1f-955f-a9677bdb5fb8","Type":"ContainerDied","Data":"61966a8fd81f3fb5a5a673e37c556903566067ad9cd76ea231e364a354e4e704"} Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.898160 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8595b94875-hcxfj" event={"ID":"3794d2c5-2a69-4c1f-955f-a9677bdb5fb8","Type":"ContainerStarted","Data":"667da56b4d8e02592ed14e56e177073bc91ca55bbd43b7d8594331dce5af7055"} Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.901328 4580 generic.go:334] "Generic (PLEG): container finished" podID="3ee1d970-f295-46eb-91eb-70a45cb019c1" containerID="001d01df38a9f8e9e63e125e1f0994b1221a148609b9db25d723f7509c2f0f63" exitCode=0 Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.901477 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.901861 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3ee1d970-f295-46eb-91eb-70a45cb019c1","Type":"ContainerDied","Data":"001d01df38a9f8e9e63e125e1f0994b1221a148609b9db25d723f7509c2f0f63"} Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.901900 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3ee1d970-f295-46eb-91eb-70a45cb019c1","Type":"ContainerDied","Data":"3ee7ccc08d6d3f74a64e0ea4b5c6c8b94eea1d3df6aed27a38ad313194c05745"} Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.901920 4580 scope.go:117] "RemoveContainer" containerID="001d01df38a9f8e9e63e125e1f0994b1221a148609b9db25d723f7509c2f0f63" Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.926429 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ee1d970-f295-46eb-91eb-70a45cb019c1-server-conf" (OuterVolumeSpecName: "server-conf") pod "3ee1d970-f295-46eb-91eb-70a45cb019c1" (UID: "3ee1d970-f295-46eb-91eb-70a45cb019c1"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.927444 4580 scope.go:117] "RemoveContainer" containerID="ad622e021763f9e9794c3006e074b40d78c5b65f75aa93709f23683693c29434" Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.944077 4580 scope.go:117] "RemoveContainer" containerID="001d01df38a9f8e9e63e125e1f0994b1221a148609b9db25d723f7509c2f0f63" Jan 12 13:25:11 crc kubenswrapper[4580]: E0112 13:25:11.944782 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"001d01df38a9f8e9e63e125e1f0994b1221a148609b9db25d723f7509c2f0f63\": container with ID starting with 001d01df38a9f8e9e63e125e1f0994b1221a148609b9db25d723f7509c2f0f63 not found: ID does not exist" containerID="001d01df38a9f8e9e63e125e1f0994b1221a148609b9db25d723f7509c2f0f63" Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.944822 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"001d01df38a9f8e9e63e125e1f0994b1221a148609b9db25d723f7509c2f0f63"} err="failed to get container status \"001d01df38a9f8e9e63e125e1f0994b1221a148609b9db25d723f7509c2f0f63\": rpc error: code = NotFound desc = could not find container \"001d01df38a9f8e9e63e125e1f0994b1221a148609b9db25d723f7509c2f0f63\": container with ID starting with 001d01df38a9f8e9e63e125e1f0994b1221a148609b9db25d723f7509c2f0f63 not found: ID does not exist" Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.944848 4580 scope.go:117] "RemoveContainer" containerID="ad622e021763f9e9794c3006e074b40d78c5b65f75aa93709f23683693c29434" Jan 12 13:25:11 crc kubenswrapper[4580]: E0112 13:25:11.945139 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad622e021763f9e9794c3006e074b40d78c5b65f75aa93709f23683693c29434\": container with ID starting with 
ad622e021763f9e9794c3006e074b40d78c5b65f75aa93709f23683693c29434 not found: ID does not exist" containerID="ad622e021763f9e9794c3006e074b40d78c5b65f75aa93709f23683693c29434" Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.945172 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad622e021763f9e9794c3006e074b40d78c5b65f75aa93709f23683693c29434"} err="failed to get container status \"ad622e021763f9e9794c3006e074b40d78c5b65f75aa93709f23683693c29434\": rpc error: code = NotFound desc = could not find container \"ad622e021763f9e9794c3006e074b40d78c5b65f75aa93709f23683693c29434\": container with ID starting with ad622e021763f9e9794c3006e074b40d78c5b65f75aa93709f23683693c29434 not found: ID does not exist" Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.952936 4580 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3ee1d970-f295-46eb-91eb-70a45cb019c1-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.952956 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbf9n\" (UniqueName: \"kubernetes.io/projected/3ee1d970-f295-46eb-91eb-70a45cb019c1-kube-api-access-kbf9n\") on node \"crc\" DevicePath \"\"" Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.952967 4580 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3ee1d970-f295-46eb-91eb-70a45cb019c1-config-data\") on node \"crc\" DevicePath \"\"" Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.952978 4580 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3ee1d970-f295-46eb-91eb-70a45cb019c1-pod-info\") on node \"crc\" DevicePath \"\"" Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.952986 4580 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" 
(UniqueName: \"kubernetes.io/empty-dir/3ee1d970-f295-46eb-91eb-70a45cb019c1-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.953006 4580 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.953014 4580 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3ee1d970-f295-46eb-91eb-70a45cb019c1-server-conf\") on node \"crc\" DevicePath \"\"" Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.953022 4580 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3ee1d970-f295-46eb-91eb-70a45cb019c1-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.953029 4580 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3ee1d970-f295-46eb-91eb-70a45cb019c1-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.953045 4580 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3ee1d970-f295-46eb-91eb-70a45cb019c1-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.964296 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ee1d970-f295-46eb-91eb-70a45cb019c1-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "3ee1d970-f295-46eb-91eb-70a45cb019c1" (UID: "3ee1d970-f295-46eb-91eb-70a45cb019c1"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:25:11 crc kubenswrapper[4580]: I0112 13:25:11.969496 4580 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 12 13:25:12 crc kubenswrapper[4580]: I0112 13:25:12.017323 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 12 13:25:12 crc kubenswrapper[4580]: W0112 13:25:12.023613 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45d6b817_38ed_4b91_b375_d0b358eaab0b.slice/crio-5b0568b71f1079904e5885c4871dc1ae627d3f3fd814304a01936499a7d663d5 WatchSource:0}: Error finding container 5b0568b71f1079904e5885c4871dc1ae627d3f3fd814304a01936499a7d663d5: Status 404 returned error can't find the container with id 5b0568b71f1079904e5885c4871dc1ae627d3f3fd814304a01936499a7d663d5 Jan 12 13:25:12 crc kubenswrapper[4580]: I0112 13:25:12.055897 4580 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3ee1d970-f295-46eb-91eb-70a45cb019c1-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 12 13:25:12 crc kubenswrapper[4580]: I0112 13:25:12.055948 4580 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 12 13:25:12 crc kubenswrapper[4580]: I0112 13:25:12.231211 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 12 13:25:12 crc kubenswrapper[4580]: I0112 13:25:12.241011 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 12 13:25:12 crc kubenswrapper[4580]: I0112 13:25:12.253056 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 12 13:25:12 crc 
kubenswrapper[4580]: E0112 13:25:12.253522 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ee1d970-f295-46eb-91eb-70a45cb019c1" containerName="rabbitmq" Jan 12 13:25:12 crc kubenswrapper[4580]: I0112 13:25:12.253546 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ee1d970-f295-46eb-91eb-70a45cb019c1" containerName="rabbitmq" Jan 12 13:25:12 crc kubenswrapper[4580]: E0112 13:25:12.253563 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ee1d970-f295-46eb-91eb-70a45cb019c1" containerName="setup-container" Jan 12 13:25:12 crc kubenswrapper[4580]: I0112 13:25:12.253571 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ee1d970-f295-46eb-91eb-70a45cb019c1" containerName="setup-container" Jan 12 13:25:12 crc kubenswrapper[4580]: I0112 13:25:12.253747 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ee1d970-f295-46eb-91eb-70a45cb019c1" containerName="rabbitmq" Jan 12 13:25:12 crc kubenswrapper[4580]: I0112 13:25:12.254815 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:25:12 crc kubenswrapper[4580]: I0112 13:25:12.260151 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 12 13:25:12 crc kubenswrapper[4580]: I0112 13:25:12.260520 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 12 13:25:12 crc kubenswrapper[4580]: I0112 13:25:12.260574 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-5mn6v" Jan 12 13:25:12 crc kubenswrapper[4580]: I0112 13:25:12.261362 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 12 13:25:12 crc kubenswrapper[4580]: I0112 13:25:12.261364 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 12 13:25:12 crc kubenswrapper[4580]: I0112 13:25:12.262062 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 12 13:25:12 crc kubenswrapper[4580]: I0112 13:25:12.262578 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 12 13:25:12 crc kubenswrapper[4580]: I0112 13:25:12.265345 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 12 13:25:12 crc kubenswrapper[4580]: I0112 13:25:12.361706 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4c7dd413-5eac-4da3-ba06-0917a412956d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c7dd413-5eac-4da3-ba06-0917a412956d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:25:12 crc kubenswrapper[4580]: I0112 13:25:12.361753 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4c7dd413-5eac-4da3-ba06-0917a412956d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c7dd413-5eac-4da3-ba06-0917a412956d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:25:12 crc kubenswrapper[4580]: I0112 13:25:12.361825 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4c7dd413-5eac-4da3-ba06-0917a412956d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c7dd413-5eac-4da3-ba06-0917a412956d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:25:12 crc kubenswrapper[4580]: I0112 13:25:12.361841 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4c7dd413-5eac-4da3-ba06-0917a412956d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c7dd413-5eac-4da3-ba06-0917a412956d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:25:12 crc kubenswrapper[4580]: I0112 13:25:12.361864 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4c7dd413-5eac-4da3-ba06-0917a412956d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c7dd413-5eac-4da3-ba06-0917a412956d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:25:12 crc kubenswrapper[4580]: I0112 13:25:12.361971 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4c7dd413-5eac-4da3-ba06-0917a412956d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c7dd413-5eac-4da3-ba06-0917a412956d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:25:12 crc kubenswrapper[4580]: I0112 13:25:12.361984 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4c7dd413-5eac-4da3-ba06-0917a412956d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c7dd413-5eac-4da3-ba06-0917a412956d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:25:12 crc kubenswrapper[4580]: I0112 13:25:12.362055 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c7dd413-5eac-4da3-ba06-0917a412956d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:25:12 crc kubenswrapper[4580]: I0112 13:25:12.362073 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2glgl\" (UniqueName: \"kubernetes.io/projected/4c7dd413-5eac-4da3-ba06-0917a412956d-kube-api-access-2glgl\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c7dd413-5eac-4da3-ba06-0917a412956d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:25:12 crc kubenswrapper[4580]: I0112 13:25:12.362090 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4c7dd413-5eac-4da3-ba06-0917a412956d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c7dd413-5eac-4da3-ba06-0917a412956d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:25:12 crc kubenswrapper[4580]: I0112 13:25:12.362138 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4c7dd413-5eac-4da3-ba06-0917a412956d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c7dd413-5eac-4da3-ba06-0917a412956d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:25:12 crc kubenswrapper[4580]: I0112 13:25:12.463913 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/4c7dd413-5eac-4da3-ba06-0917a412956d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c7dd413-5eac-4da3-ba06-0917a412956d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:25:12 crc kubenswrapper[4580]: I0112 13:25:12.464276 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4c7dd413-5eac-4da3-ba06-0917a412956d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c7dd413-5eac-4da3-ba06-0917a412956d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:25:12 crc kubenswrapper[4580]: I0112 13:25:12.464359 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4c7dd413-5eac-4da3-ba06-0917a412956d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c7dd413-5eac-4da3-ba06-0917a412956d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:25:12 crc kubenswrapper[4580]: I0112 13:25:12.464475 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4c7dd413-5eac-4da3-ba06-0917a412956d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c7dd413-5eac-4da3-ba06-0917a412956d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:25:12 crc kubenswrapper[4580]: I0112 13:25:12.464550 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4c7dd413-5eac-4da3-ba06-0917a412956d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c7dd413-5eac-4da3-ba06-0917a412956d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:25:12 crc kubenswrapper[4580]: I0112 13:25:12.464643 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"4c7dd413-5eac-4da3-ba06-0917a412956d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:25:12 crc kubenswrapper[4580]: I0112 13:25:12.464721 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2glgl\" (UniqueName: \"kubernetes.io/projected/4c7dd413-5eac-4da3-ba06-0917a412956d-kube-api-access-2glgl\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c7dd413-5eac-4da3-ba06-0917a412956d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:25:12 crc kubenswrapper[4580]: I0112 13:25:12.464783 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4c7dd413-5eac-4da3-ba06-0917a412956d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c7dd413-5eac-4da3-ba06-0917a412956d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:25:12 crc kubenswrapper[4580]: I0112 13:25:12.464852 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4c7dd413-5eac-4da3-ba06-0917a412956d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c7dd413-5eac-4da3-ba06-0917a412956d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:25:12 crc kubenswrapper[4580]: I0112 13:25:12.464926 4580 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c7dd413-5eac-4da3-ba06-0917a412956d\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:25:12 crc kubenswrapper[4580]: I0112 13:25:12.465024 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4c7dd413-5eac-4da3-ba06-0917a412956d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c7dd413-5eac-4da3-ba06-0917a412956d\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:25:12 crc kubenswrapper[4580]: I0112 13:25:12.465140 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4c7dd413-5eac-4da3-ba06-0917a412956d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c7dd413-5eac-4da3-ba06-0917a412956d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:25:12 crc kubenswrapper[4580]: I0112 13:25:12.464781 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4c7dd413-5eac-4da3-ba06-0917a412956d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c7dd413-5eac-4da3-ba06-0917a412956d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:25:12 crc kubenswrapper[4580]: I0112 13:25:12.465313 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4c7dd413-5eac-4da3-ba06-0917a412956d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c7dd413-5eac-4da3-ba06-0917a412956d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:25:12 crc kubenswrapper[4580]: I0112 13:25:12.465939 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4c7dd413-5eac-4da3-ba06-0917a412956d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c7dd413-5eac-4da3-ba06-0917a412956d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:25:12 crc kubenswrapper[4580]: I0112 13:25:12.466256 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4c7dd413-5eac-4da3-ba06-0917a412956d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c7dd413-5eac-4da3-ba06-0917a412956d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:25:12 crc kubenswrapper[4580]: I0112 13:25:12.466280 4580 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4c7dd413-5eac-4da3-ba06-0917a412956d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c7dd413-5eac-4da3-ba06-0917a412956d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:25:12 crc kubenswrapper[4580]: I0112 13:25:12.468295 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4c7dd413-5eac-4da3-ba06-0917a412956d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c7dd413-5eac-4da3-ba06-0917a412956d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:25:12 crc kubenswrapper[4580]: I0112 13:25:12.469193 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4c7dd413-5eac-4da3-ba06-0917a412956d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c7dd413-5eac-4da3-ba06-0917a412956d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:25:12 crc kubenswrapper[4580]: I0112 13:25:12.469574 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4c7dd413-5eac-4da3-ba06-0917a412956d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c7dd413-5eac-4da3-ba06-0917a412956d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:25:12 crc kubenswrapper[4580]: I0112 13:25:12.475922 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4c7dd413-5eac-4da3-ba06-0917a412956d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c7dd413-5eac-4da3-ba06-0917a412956d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:25:12 crc kubenswrapper[4580]: I0112 13:25:12.481181 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2glgl\" (UniqueName: 
\"kubernetes.io/projected/4c7dd413-5eac-4da3-ba06-0917a412956d-kube-api-access-2glgl\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c7dd413-5eac-4da3-ba06-0917a412956d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:25:12 crc kubenswrapper[4580]: I0112 13:25:12.494341 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c7dd413-5eac-4da3-ba06-0917a412956d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:25:12 crc kubenswrapper[4580]: I0112 13:25:12.573687 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 12 13:25:12 crc kubenswrapper[4580]: I0112 13:25:12.910015 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"45d6b817-38ed-4b91-b375-d0b358eaab0b","Type":"ContainerStarted","Data":"5b0568b71f1079904e5885c4871dc1ae627d3f3fd814304a01936499a7d663d5"} Jan 12 13:25:12 crc kubenswrapper[4580]: I0112 13:25:12.911519 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8595b94875-hcxfj" event={"ID":"3794d2c5-2a69-4c1f-955f-a9677bdb5fb8","Type":"ContainerStarted","Data":"ebd2afa94e937e705e0b4edb648cfd4bb18f8124caa5fc047fbbb7dd5c6423d4"} Jan 12 13:25:12 crc kubenswrapper[4580]: I0112 13:25:12.912319 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8595b94875-hcxfj" Jan 12 13:25:12 crc kubenswrapper[4580]: I0112 13:25:12.932473 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8595b94875-hcxfj" podStartSLOduration=2.932455529 podStartE2EDuration="2.932455529s" podCreationTimestamp="2026-01-12 13:25:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:25:12.926596842 +0000 UTC 
m=+1111.970815531" watchObservedRunningTime="2026-01-12 13:25:12.932455529 +0000 UTC m=+1111.976674219" Jan 12 13:25:12 crc kubenswrapper[4580]: I0112 13:25:12.949443 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 12 13:25:13 crc kubenswrapper[4580]: W0112 13:25:13.029019 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c7dd413_5eac_4da3_ba06_0917a412956d.slice/crio-cb68a0a57869d709f0abcc9b0a2d487dc1d7e607f4f10c4a11c1f8572517c39b WatchSource:0}: Error finding container cb68a0a57869d709f0abcc9b0a2d487dc1d7e607f4f10c4a11c1f8572517c39b: Status 404 returned error can't find the container with id cb68a0a57869d709f0abcc9b0a2d487dc1d7e607f4f10c4a11c1f8572517c39b Jan 12 13:25:13 crc kubenswrapper[4580]: I0112 13:25:13.305856 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ee1d970-f295-46eb-91eb-70a45cb019c1" path="/var/lib/kubelet/pods/3ee1d970-f295-46eb-91eb-70a45cb019c1/volumes" Jan 12 13:25:13 crc kubenswrapper[4580]: I0112 13:25:13.929956 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"45d6b817-38ed-4b91-b375-d0b358eaab0b","Type":"ContainerStarted","Data":"a0cd15c4edf70604c2cab0c8f19d2cd2e416ed09dc63d615f9fa44c177acecdd"} Jan 12 13:25:13 crc kubenswrapper[4580]: I0112 13:25:13.933883 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4c7dd413-5eac-4da3-ba06-0917a412956d","Type":"ContainerStarted","Data":"cb68a0a57869d709f0abcc9b0a2d487dc1d7e607f4f10c4a11c1f8572517c39b"} Jan 12 13:25:14 crc kubenswrapper[4580]: I0112 13:25:14.941588 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4c7dd413-5eac-4da3-ba06-0917a412956d","Type":"ContainerStarted","Data":"87037314b986a3f44d1f64b980199fb714db9af10ac71a86e73a6f51612e432f"} Jan 12 13:25:20 crc 
kubenswrapper[4580]: I0112 13:25:20.975252 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8595b94875-hcxfj" Jan 12 13:25:21 crc kubenswrapper[4580]: I0112 13:25:21.022486 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fcd6f8f8f-8cjg6"] Jan 12 13:25:21 crc kubenswrapper[4580]: I0112 13:25:21.023051 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-fcd6f8f8f-8cjg6" podUID="4aeeb6e1-3e0f-4a47-a133-ccfca235c552" containerName="dnsmasq-dns" containerID="cri-o://9abd965d66b669c0dac9e43e6446eaf3e2aacfc368d88dda2e559275cd117740" gracePeriod=10 Jan 12 13:25:21 crc kubenswrapper[4580]: I0112 13:25:21.118550 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-d7b79b84c-xlkwf"] Jan 12 13:25:21 crc kubenswrapper[4580]: I0112 13:25:21.120319 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d7b79b84c-xlkwf" Jan 12 13:25:21 crc kubenswrapper[4580]: I0112 13:25:21.142761 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d7b79b84c-xlkwf"] Jan 12 13:25:21 crc kubenswrapper[4580]: I0112 13:25:21.212587 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5eee677a-4caa-4107-a64f-cee518dfed89-config\") pod \"dnsmasq-dns-d7b79b84c-xlkwf\" (UID: \"5eee677a-4caa-4107-a64f-cee518dfed89\") " pod="openstack/dnsmasq-dns-d7b79b84c-xlkwf" Jan 12 13:25:21 crc kubenswrapper[4580]: I0112 13:25:21.212638 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5eee677a-4caa-4107-a64f-cee518dfed89-dns-swift-storage-0\") pod \"dnsmasq-dns-d7b79b84c-xlkwf\" (UID: \"5eee677a-4caa-4107-a64f-cee518dfed89\") " pod="openstack/dnsmasq-dns-d7b79b84c-xlkwf" Jan 
12 13:25:21 crc kubenswrapper[4580]: I0112 13:25:21.212711 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5eee677a-4caa-4107-a64f-cee518dfed89-ovsdbserver-nb\") pod \"dnsmasq-dns-d7b79b84c-xlkwf\" (UID: \"5eee677a-4caa-4107-a64f-cee518dfed89\") " pod="openstack/dnsmasq-dns-d7b79b84c-xlkwf" Jan 12 13:25:21 crc kubenswrapper[4580]: I0112 13:25:21.212729 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5eee677a-4caa-4107-a64f-cee518dfed89-openstack-edpm-ipam\") pod \"dnsmasq-dns-d7b79b84c-xlkwf\" (UID: \"5eee677a-4caa-4107-a64f-cee518dfed89\") " pod="openstack/dnsmasq-dns-d7b79b84c-xlkwf" Jan 12 13:25:21 crc kubenswrapper[4580]: I0112 13:25:21.212788 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2gph\" (UniqueName: \"kubernetes.io/projected/5eee677a-4caa-4107-a64f-cee518dfed89-kube-api-access-q2gph\") pod \"dnsmasq-dns-d7b79b84c-xlkwf\" (UID: \"5eee677a-4caa-4107-a64f-cee518dfed89\") " pod="openstack/dnsmasq-dns-d7b79b84c-xlkwf" Jan 12 13:25:21 crc kubenswrapper[4580]: I0112 13:25:21.212860 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5eee677a-4caa-4107-a64f-cee518dfed89-ovsdbserver-sb\") pod \"dnsmasq-dns-d7b79b84c-xlkwf\" (UID: \"5eee677a-4caa-4107-a64f-cee518dfed89\") " pod="openstack/dnsmasq-dns-d7b79b84c-xlkwf" Jan 12 13:25:21 crc kubenswrapper[4580]: I0112 13:25:21.212893 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5eee677a-4caa-4107-a64f-cee518dfed89-dns-svc\") pod \"dnsmasq-dns-d7b79b84c-xlkwf\" (UID: \"5eee677a-4caa-4107-a64f-cee518dfed89\") " 
pod="openstack/dnsmasq-dns-d7b79b84c-xlkwf" Jan 12 13:25:21 crc kubenswrapper[4580]: I0112 13:25:21.319614 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2gph\" (UniqueName: \"kubernetes.io/projected/5eee677a-4caa-4107-a64f-cee518dfed89-kube-api-access-q2gph\") pod \"dnsmasq-dns-d7b79b84c-xlkwf\" (UID: \"5eee677a-4caa-4107-a64f-cee518dfed89\") " pod="openstack/dnsmasq-dns-d7b79b84c-xlkwf" Jan 12 13:25:21 crc kubenswrapper[4580]: I0112 13:25:21.319819 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5eee677a-4caa-4107-a64f-cee518dfed89-ovsdbserver-sb\") pod \"dnsmasq-dns-d7b79b84c-xlkwf\" (UID: \"5eee677a-4caa-4107-a64f-cee518dfed89\") " pod="openstack/dnsmasq-dns-d7b79b84c-xlkwf" Jan 12 13:25:21 crc kubenswrapper[4580]: I0112 13:25:21.319868 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5eee677a-4caa-4107-a64f-cee518dfed89-dns-svc\") pod \"dnsmasq-dns-d7b79b84c-xlkwf\" (UID: \"5eee677a-4caa-4107-a64f-cee518dfed89\") " pod="openstack/dnsmasq-dns-d7b79b84c-xlkwf" Jan 12 13:25:21 crc kubenswrapper[4580]: I0112 13:25:21.319928 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5eee677a-4caa-4107-a64f-cee518dfed89-config\") pod \"dnsmasq-dns-d7b79b84c-xlkwf\" (UID: \"5eee677a-4caa-4107-a64f-cee518dfed89\") " pod="openstack/dnsmasq-dns-d7b79b84c-xlkwf" Jan 12 13:25:21 crc kubenswrapper[4580]: I0112 13:25:21.319951 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5eee677a-4caa-4107-a64f-cee518dfed89-dns-swift-storage-0\") pod \"dnsmasq-dns-d7b79b84c-xlkwf\" (UID: \"5eee677a-4caa-4107-a64f-cee518dfed89\") " pod="openstack/dnsmasq-dns-d7b79b84c-xlkwf" Jan 12 13:25:21 crc 
kubenswrapper[4580]: I0112 13:25:21.320015 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5eee677a-4caa-4107-a64f-cee518dfed89-ovsdbserver-nb\") pod \"dnsmasq-dns-d7b79b84c-xlkwf\" (UID: \"5eee677a-4caa-4107-a64f-cee518dfed89\") " pod="openstack/dnsmasq-dns-d7b79b84c-xlkwf" Jan 12 13:25:21 crc kubenswrapper[4580]: I0112 13:25:21.320055 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5eee677a-4caa-4107-a64f-cee518dfed89-openstack-edpm-ipam\") pod \"dnsmasq-dns-d7b79b84c-xlkwf\" (UID: \"5eee677a-4caa-4107-a64f-cee518dfed89\") " pod="openstack/dnsmasq-dns-d7b79b84c-xlkwf" Jan 12 13:25:21 crc kubenswrapper[4580]: I0112 13:25:21.321009 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5eee677a-4caa-4107-a64f-cee518dfed89-openstack-edpm-ipam\") pod \"dnsmasq-dns-d7b79b84c-xlkwf\" (UID: \"5eee677a-4caa-4107-a64f-cee518dfed89\") " pod="openstack/dnsmasq-dns-d7b79b84c-xlkwf" Jan 12 13:25:21 crc kubenswrapper[4580]: I0112 13:25:21.321904 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5eee677a-4caa-4107-a64f-cee518dfed89-config\") pod \"dnsmasq-dns-d7b79b84c-xlkwf\" (UID: \"5eee677a-4caa-4107-a64f-cee518dfed89\") " pod="openstack/dnsmasq-dns-d7b79b84c-xlkwf" Jan 12 13:25:21 crc kubenswrapper[4580]: I0112 13:25:21.322498 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5eee677a-4caa-4107-a64f-cee518dfed89-ovsdbserver-sb\") pod \"dnsmasq-dns-d7b79b84c-xlkwf\" (UID: \"5eee677a-4caa-4107-a64f-cee518dfed89\") " pod="openstack/dnsmasq-dns-d7b79b84c-xlkwf" Jan 12 13:25:21 crc kubenswrapper[4580]: I0112 13:25:21.326839 4580 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5eee677a-4caa-4107-a64f-cee518dfed89-dns-svc\") pod \"dnsmasq-dns-d7b79b84c-xlkwf\" (UID: \"5eee677a-4caa-4107-a64f-cee518dfed89\") " pod="openstack/dnsmasq-dns-d7b79b84c-xlkwf" Jan 12 13:25:21 crc kubenswrapper[4580]: I0112 13:25:21.326977 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5eee677a-4caa-4107-a64f-cee518dfed89-ovsdbserver-nb\") pod \"dnsmasq-dns-d7b79b84c-xlkwf\" (UID: \"5eee677a-4caa-4107-a64f-cee518dfed89\") " pod="openstack/dnsmasq-dns-d7b79b84c-xlkwf" Jan 12 13:25:21 crc kubenswrapper[4580]: I0112 13:25:21.327567 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5eee677a-4caa-4107-a64f-cee518dfed89-dns-swift-storage-0\") pod \"dnsmasq-dns-d7b79b84c-xlkwf\" (UID: \"5eee677a-4caa-4107-a64f-cee518dfed89\") " pod="openstack/dnsmasq-dns-d7b79b84c-xlkwf" Jan 12 13:25:21 crc kubenswrapper[4580]: I0112 13:25:21.360347 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2gph\" (UniqueName: \"kubernetes.io/projected/5eee677a-4caa-4107-a64f-cee518dfed89-kube-api-access-q2gph\") pod \"dnsmasq-dns-d7b79b84c-xlkwf\" (UID: \"5eee677a-4caa-4107-a64f-cee518dfed89\") " pod="openstack/dnsmasq-dns-d7b79b84c-xlkwf" Jan 12 13:25:21 crc kubenswrapper[4580]: I0112 13:25:21.474020 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d7b79b84c-xlkwf" Jan 12 13:25:21 crc kubenswrapper[4580]: I0112 13:25:21.560009 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fcd6f8f8f-8cjg6" Jan 12 13:25:21 crc kubenswrapper[4580]: I0112 13:25:21.726795 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4aeeb6e1-3e0f-4a47-a133-ccfca235c552-ovsdbserver-sb\") pod \"4aeeb6e1-3e0f-4a47-a133-ccfca235c552\" (UID: \"4aeeb6e1-3e0f-4a47-a133-ccfca235c552\") " Jan 12 13:25:21 crc kubenswrapper[4580]: I0112 13:25:21.726903 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4aeeb6e1-3e0f-4a47-a133-ccfca235c552-dns-swift-storage-0\") pod \"4aeeb6e1-3e0f-4a47-a133-ccfca235c552\" (UID: \"4aeeb6e1-3e0f-4a47-a133-ccfca235c552\") " Jan 12 13:25:21 crc kubenswrapper[4580]: I0112 13:25:21.727117 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4aeeb6e1-3e0f-4a47-a133-ccfca235c552-config\") pod \"4aeeb6e1-3e0f-4a47-a133-ccfca235c552\" (UID: \"4aeeb6e1-3e0f-4a47-a133-ccfca235c552\") " Jan 12 13:25:21 crc kubenswrapper[4580]: I0112 13:25:21.727199 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4aeeb6e1-3e0f-4a47-a133-ccfca235c552-dns-svc\") pod \"4aeeb6e1-3e0f-4a47-a133-ccfca235c552\" (UID: \"4aeeb6e1-3e0f-4a47-a133-ccfca235c552\") " Jan 12 13:25:21 crc kubenswrapper[4580]: I0112 13:25:21.727235 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxmw5\" (UniqueName: \"kubernetes.io/projected/4aeeb6e1-3e0f-4a47-a133-ccfca235c552-kube-api-access-rxmw5\") pod \"4aeeb6e1-3e0f-4a47-a133-ccfca235c552\" (UID: \"4aeeb6e1-3e0f-4a47-a133-ccfca235c552\") " Jan 12 13:25:21 crc kubenswrapper[4580]: I0112 13:25:21.727253 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/4aeeb6e1-3e0f-4a47-a133-ccfca235c552-ovsdbserver-nb\") pod \"4aeeb6e1-3e0f-4a47-a133-ccfca235c552\" (UID: \"4aeeb6e1-3e0f-4a47-a133-ccfca235c552\") " Jan 12 13:25:21 crc kubenswrapper[4580]: I0112 13:25:21.741228 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4aeeb6e1-3e0f-4a47-a133-ccfca235c552-kube-api-access-rxmw5" (OuterVolumeSpecName: "kube-api-access-rxmw5") pod "4aeeb6e1-3e0f-4a47-a133-ccfca235c552" (UID: "4aeeb6e1-3e0f-4a47-a133-ccfca235c552"). InnerVolumeSpecName "kube-api-access-rxmw5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:25:21 crc kubenswrapper[4580]: I0112 13:25:21.764716 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4aeeb6e1-3e0f-4a47-a133-ccfca235c552-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4aeeb6e1-3e0f-4a47-a133-ccfca235c552" (UID: "4aeeb6e1-3e0f-4a47-a133-ccfca235c552"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:25:21 crc kubenswrapper[4580]: I0112 13:25:21.764834 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4aeeb6e1-3e0f-4a47-a133-ccfca235c552-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4aeeb6e1-3e0f-4a47-a133-ccfca235c552" (UID: "4aeeb6e1-3e0f-4a47-a133-ccfca235c552"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:25:21 crc kubenswrapper[4580]: I0112 13:25:21.771647 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4aeeb6e1-3e0f-4a47-a133-ccfca235c552-config" (OuterVolumeSpecName: "config") pod "4aeeb6e1-3e0f-4a47-a133-ccfca235c552" (UID: "4aeeb6e1-3e0f-4a47-a133-ccfca235c552"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:25:21 crc kubenswrapper[4580]: I0112 13:25:21.775339 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4aeeb6e1-3e0f-4a47-a133-ccfca235c552-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4aeeb6e1-3e0f-4a47-a133-ccfca235c552" (UID: "4aeeb6e1-3e0f-4a47-a133-ccfca235c552"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:25:21 crc kubenswrapper[4580]: I0112 13:25:21.778659 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4aeeb6e1-3e0f-4a47-a133-ccfca235c552-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4aeeb6e1-3e0f-4a47-a133-ccfca235c552" (UID: "4aeeb6e1-3e0f-4a47-a133-ccfca235c552"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:25:21 crc kubenswrapper[4580]: I0112 13:25:21.830960 4580 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4aeeb6e1-3e0f-4a47-a133-ccfca235c552-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 12 13:25:21 crc kubenswrapper[4580]: I0112 13:25:21.830987 4580 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4aeeb6e1-3e0f-4a47-a133-ccfca235c552-config\") on node \"crc\" DevicePath \"\"" Jan 12 13:25:21 crc kubenswrapper[4580]: I0112 13:25:21.830997 4580 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4aeeb6e1-3e0f-4a47-a133-ccfca235c552-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 12 13:25:21 crc kubenswrapper[4580]: I0112 13:25:21.831008 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxmw5\" (UniqueName: \"kubernetes.io/projected/4aeeb6e1-3e0f-4a47-a133-ccfca235c552-kube-api-access-rxmw5\") on node \"crc\" DevicePath 
\"\"" Jan 12 13:25:21 crc kubenswrapper[4580]: I0112 13:25:21.831019 4580 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4aeeb6e1-3e0f-4a47-a133-ccfca235c552-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 12 13:25:21 crc kubenswrapper[4580]: I0112 13:25:21.831028 4580 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4aeeb6e1-3e0f-4a47-a133-ccfca235c552-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 12 13:25:21 crc kubenswrapper[4580]: I0112 13:25:21.871482 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d7b79b84c-xlkwf"] Jan 12 13:25:21 crc kubenswrapper[4580]: W0112 13:25:21.871754 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5eee677a_4caa_4107_a64f_cee518dfed89.slice/crio-5db453af9d2b65451a2b19782c445789a16b0364caa969576c96f0888cd6bc3d WatchSource:0}: Error finding container 5db453af9d2b65451a2b19782c445789a16b0364caa969576c96f0888cd6bc3d: Status 404 returned error can't find the container with id 5db453af9d2b65451a2b19782c445789a16b0364caa969576c96f0888cd6bc3d Jan 12 13:25:21 crc kubenswrapper[4580]: I0112 13:25:21.995593 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d7b79b84c-xlkwf" event={"ID":"5eee677a-4caa-4107-a64f-cee518dfed89","Type":"ContainerStarted","Data":"5db453af9d2b65451a2b19782c445789a16b0364caa969576c96f0888cd6bc3d"} Jan 12 13:25:21 crc kubenswrapper[4580]: I0112 13:25:21.998058 4580 generic.go:334] "Generic (PLEG): container finished" podID="4aeeb6e1-3e0f-4a47-a133-ccfca235c552" containerID="9abd965d66b669c0dac9e43e6446eaf3e2aacfc368d88dda2e559275cd117740" exitCode=0 Jan 12 13:25:21 crc kubenswrapper[4580]: I0112 13:25:21.998173 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcd6f8f8f-8cjg6" 
event={"ID":"4aeeb6e1-3e0f-4a47-a133-ccfca235c552","Type":"ContainerDied","Data":"9abd965d66b669c0dac9e43e6446eaf3e2aacfc368d88dda2e559275cd117740"}
Jan 12 13:25:21 crc kubenswrapper[4580]: I0112 13:25:21.998267 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcd6f8f8f-8cjg6" event={"ID":"4aeeb6e1-3e0f-4a47-a133-ccfca235c552","Type":"ContainerDied","Data":"341d2eebd19e54cd89cba85e1c7f1915b631b8f684fc296937e8e510b4f583e2"}
Jan 12 13:25:21 crc kubenswrapper[4580]: I0112 13:25:21.998277 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fcd6f8f8f-8cjg6"
Jan 12 13:25:21 crc kubenswrapper[4580]: I0112 13:25:21.998348 4580 scope.go:117] "RemoveContainer" containerID="9abd965d66b669c0dac9e43e6446eaf3e2aacfc368d88dda2e559275cd117740"
Jan 12 13:25:22 crc kubenswrapper[4580]: I0112 13:25:22.024263 4580 scope.go:117] "RemoveContainer" containerID="37960b5f7a5a2578165f5386ffefdbf2eeec6a4afe95a836d83a0705f3249c3f"
Jan 12 13:25:22 crc kubenswrapper[4580]: I0112 13:25:22.032776 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fcd6f8f8f-8cjg6"]
Jan 12 13:25:22 crc kubenswrapper[4580]: I0112 13:25:22.040418 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-fcd6f8f8f-8cjg6"]
Jan 12 13:25:22 crc kubenswrapper[4580]: I0112 13:25:22.057009 4580 scope.go:117] "RemoveContainer" containerID="9abd965d66b669c0dac9e43e6446eaf3e2aacfc368d88dda2e559275cd117740"
Jan 12 13:25:22 crc kubenswrapper[4580]: E0112 13:25:22.057536 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9abd965d66b669c0dac9e43e6446eaf3e2aacfc368d88dda2e559275cd117740\": container with ID starting with 9abd965d66b669c0dac9e43e6446eaf3e2aacfc368d88dda2e559275cd117740 not found: ID does not exist" containerID="9abd965d66b669c0dac9e43e6446eaf3e2aacfc368d88dda2e559275cd117740"
Jan 12 13:25:22 crc kubenswrapper[4580]: I0112 13:25:22.057578 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9abd965d66b669c0dac9e43e6446eaf3e2aacfc368d88dda2e559275cd117740"} err="failed to get container status \"9abd965d66b669c0dac9e43e6446eaf3e2aacfc368d88dda2e559275cd117740\": rpc error: code = NotFound desc = could not find container \"9abd965d66b669c0dac9e43e6446eaf3e2aacfc368d88dda2e559275cd117740\": container with ID starting with 9abd965d66b669c0dac9e43e6446eaf3e2aacfc368d88dda2e559275cd117740 not found: ID does not exist"
Jan 12 13:25:22 crc kubenswrapper[4580]: I0112 13:25:22.057606 4580 scope.go:117] "RemoveContainer" containerID="37960b5f7a5a2578165f5386ffefdbf2eeec6a4afe95a836d83a0705f3249c3f"
Jan 12 13:25:22 crc kubenswrapper[4580]: E0112 13:25:22.058050 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37960b5f7a5a2578165f5386ffefdbf2eeec6a4afe95a836d83a0705f3249c3f\": container with ID starting with 37960b5f7a5a2578165f5386ffefdbf2eeec6a4afe95a836d83a0705f3249c3f not found: ID does not exist" containerID="37960b5f7a5a2578165f5386ffefdbf2eeec6a4afe95a836d83a0705f3249c3f"
Jan 12 13:25:22 crc kubenswrapper[4580]: I0112 13:25:22.058078 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37960b5f7a5a2578165f5386ffefdbf2eeec6a4afe95a836d83a0705f3249c3f"} err="failed to get container status \"37960b5f7a5a2578165f5386ffefdbf2eeec6a4afe95a836d83a0705f3249c3f\": rpc error: code = NotFound desc = could not find container \"37960b5f7a5a2578165f5386ffefdbf2eeec6a4afe95a836d83a0705f3249c3f\": container with ID starting with 37960b5f7a5a2578165f5386ffefdbf2eeec6a4afe95a836d83a0705f3249c3f not found: ID does not exist"
Jan 12 13:25:23 crc kubenswrapper[4580]: I0112 13:25:23.008544 4580 generic.go:334] "Generic (PLEG): container finished" podID="5eee677a-4caa-4107-a64f-cee518dfed89" containerID="7ed155ae620c89fc3518ffee7524611950e528cc3c6b4b36883cf71ef83052b4" exitCode=0
Jan 12 13:25:23 crc kubenswrapper[4580]: I0112 13:25:23.008589 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d7b79b84c-xlkwf" event={"ID":"5eee677a-4caa-4107-a64f-cee518dfed89","Type":"ContainerDied","Data":"7ed155ae620c89fc3518ffee7524611950e528cc3c6b4b36883cf71ef83052b4"}
Jan 12 13:25:23 crc kubenswrapper[4580]: I0112 13:25:23.290341 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4aeeb6e1-3e0f-4a47-a133-ccfca235c552" path="/var/lib/kubelet/pods/4aeeb6e1-3e0f-4a47-a133-ccfca235c552/volumes"
Jan 12 13:25:24 crc kubenswrapper[4580]: I0112 13:25:24.019431 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d7b79b84c-xlkwf" event={"ID":"5eee677a-4caa-4107-a64f-cee518dfed89","Type":"ContainerStarted","Data":"4b88574bf2424e9234911b224fb2eddf2a6c15751edc781299726b8ae5dfe215"}
Jan 12 13:25:24 crc kubenswrapper[4580]: I0112 13:25:24.019863 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-d7b79b84c-xlkwf"
Jan 12 13:25:24 crc kubenswrapper[4580]: I0112 13:25:24.040010 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-d7b79b84c-xlkwf" podStartSLOduration=3.039994302 podStartE2EDuration="3.039994302s" podCreationTimestamp="2026-01-12 13:25:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:25:24.034204555 +0000 UTC m=+1123.078423246" watchObservedRunningTime="2026-01-12 13:25:24.039994302 +0000 UTC m=+1123.084212993"
Jan 12 13:25:31 crc kubenswrapper[4580]: I0112 13:25:31.475939 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-d7b79b84c-xlkwf"
Jan 12 13:25:31 crc kubenswrapper[4580]: I0112 13:25:31.532248 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8595b94875-hcxfj"]
Jan 12 13:25:31 crc kubenswrapper[4580]: I0112 13:25:31.532440 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8595b94875-hcxfj" podUID="3794d2c5-2a69-4c1f-955f-a9677bdb5fb8" containerName="dnsmasq-dns" containerID="cri-o://ebd2afa94e937e705e0b4edb648cfd4bb18f8124caa5fc047fbbb7dd5c6423d4" gracePeriod=10
Jan 12 13:25:32 crc kubenswrapper[4580]: I0112 13:25:31.916003 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8595b94875-hcxfj"
Jan 12 13:25:32 crc kubenswrapper[4580]: I0112 13:25:31.924888 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3794d2c5-2a69-4c1f-955f-a9677bdb5fb8-dns-svc\") pod \"3794d2c5-2a69-4c1f-955f-a9677bdb5fb8\" (UID: \"3794d2c5-2a69-4c1f-955f-a9677bdb5fb8\") "
Jan 12 13:25:32 crc kubenswrapper[4580]: I0112 13:25:31.924945 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4kc2\" (UniqueName: \"kubernetes.io/projected/3794d2c5-2a69-4c1f-955f-a9677bdb5fb8-kube-api-access-d4kc2\") pod \"3794d2c5-2a69-4c1f-955f-a9677bdb5fb8\" (UID: \"3794d2c5-2a69-4c1f-955f-a9677bdb5fb8\") "
Jan 12 13:25:32 crc kubenswrapper[4580]: I0112 13:25:31.924975 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3794d2c5-2a69-4c1f-955f-a9677bdb5fb8-dns-swift-storage-0\") pod \"3794d2c5-2a69-4c1f-955f-a9677bdb5fb8\" (UID: \"3794d2c5-2a69-4c1f-955f-a9677bdb5fb8\") "
Jan 12 13:25:32 crc kubenswrapper[4580]: I0112 13:25:31.925019 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3794d2c5-2a69-4c1f-955f-a9677bdb5fb8-openstack-edpm-ipam\") pod \"3794d2c5-2a69-4c1f-955f-a9677bdb5fb8\" (UID: \"3794d2c5-2a69-4c1f-955f-a9677bdb5fb8\") "
Jan 12 13:25:32 crc kubenswrapper[4580]: I0112 13:25:31.925035 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3794d2c5-2a69-4c1f-955f-a9677bdb5fb8-ovsdbserver-sb\") pod \"3794d2c5-2a69-4c1f-955f-a9677bdb5fb8\" (UID: \"3794d2c5-2a69-4c1f-955f-a9677bdb5fb8\") "
Jan 12 13:25:32 crc kubenswrapper[4580]: I0112 13:25:31.925158 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3794d2c5-2a69-4c1f-955f-a9677bdb5fb8-ovsdbserver-nb\") pod \"3794d2c5-2a69-4c1f-955f-a9677bdb5fb8\" (UID: \"3794d2c5-2a69-4c1f-955f-a9677bdb5fb8\") "
Jan 12 13:25:32 crc kubenswrapper[4580]: I0112 13:25:31.925574 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3794d2c5-2a69-4c1f-955f-a9677bdb5fb8-config\") pod \"3794d2c5-2a69-4c1f-955f-a9677bdb5fb8\" (UID: \"3794d2c5-2a69-4c1f-955f-a9677bdb5fb8\") "
Jan 12 13:25:32 crc kubenswrapper[4580]: I0112 13:25:31.940361 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3794d2c5-2a69-4c1f-955f-a9677bdb5fb8-kube-api-access-d4kc2" (OuterVolumeSpecName: "kube-api-access-d4kc2") pod "3794d2c5-2a69-4c1f-955f-a9677bdb5fb8" (UID: "3794d2c5-2a69-4c1f-955f-a9677bdb5fb8"). InnerVolumeSpecName "kube-api-access-d4kc2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 13:25:32 crc kubenswrapper[4580]: I0112 13:25:31.963137 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3794d2c5-2a69-4c1f-955f-a9677bdb5fb8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3794d2c5-2a69-4c1f-955f-a9677bdb5fb8" (UID: "3794d2c5-2a69-4c1f-955f-a9677bdb5fb8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:25:32 crc kubenswrapper[4580]: I0112 13:25:31.974544 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3794d2c5-2a69-4c1f-955f-a9677bdb5fb8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3794d2c5-2a69-4c1f-955f-a9677bdb5fb8" (UID: "3794d2c5-2a69-4c1f-955f-a9677bdb5fb8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:25:32 crc kubenswrapper[4580]: I0112 13:25:31.975314 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3794d2c5-2a69-4c1f-955f-a9677bdb5fb8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3794d2c5-2a69-4c1f-955f-a9677bdb5fb8" (UID: "3794d2c5-2a69-4c1f-955f-a9677bdb5fb8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:25:32 crc kubenswrapper[4580]: I0112 13:25:31.975582 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3794d2c5-2a69-4c1f-955f-a9677bdb5fb8-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "3794d2c5-2a69-4c1f-955f-a9677bdb5fb8" (UID: "3794d2c5-2a69-4c1f-955f-a9677bdb5fb8"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:25:32 crc kubenswrapper[4580]: I0112 13:25:31.979876 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3794d2c5-2a69-4c1f-955f-a9677bdb5fb8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3794d2c5-2a69-4c1f-955f-a9677bdb5fb8" (UID: "3794d2c5-2a69-4c1f-955f-a9677bdb5fb8"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:25:32 crc kubenswrapper[4580]: I0112 13:25:31.980358 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3794d2c5-2a69-4c1f-955f-a9677bdb5fb8-config" (OuterVolumeSpecName: "config") pod "3794d2c5-2a69-4c1f-955f-a9677bdb5fb8" (UID: "3794d2c5-2a69-4c1f-955f-a9677bdb5fb8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:25:32 crc kubenswrapper[4580]: I0112 13:25:32.027840 4580 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3794d2c5-2a69-4c1f-955f-a9677bdb5fb8-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 12 13:25:32 crc kubenswrapper[4580]: I0112 13:25:32.027864 4580 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3794d2c5-2a69-4c1f-955f-a9677bdb5fb8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 12 13:25:32 crc kubenswrapper[4580]: I0112 13:25:32.027873 4580 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3794d2c5-2a69-4c1f-955f-a9677bdb5fb8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 12 13:25:32 crc kubenswrapper[4580]: I0112 13:25:32.027884 4580 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3794d2c5-2a69-4c1f-955f-a9677bdb5fb8-config\") on node \"crc\" DevicePath \"\""
Jan 12 13:25:32 crc kubenswrapper[4580]: I0112 13:25:32.027893 4580 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3794d2c5-2a69-4c1f-955f-a9677bdb5fb8-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 12 13:25:32 crc kubenswrapper[4580]: I0112 13:25:32.027901 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4kc2\" (UniqueName: \"kubernetes.io/projected/3794d2c5-2a69-4c1f-955f-a9677bdb5fb8-kube-api-access-d4kc2\") on node \"crc\" DevicePath \"\""
Jan 12 13:25:32 crc kubenswrapper[4580]: I0112 13:25:32.027911 4580 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3794d2c5-2a69-4c1f-955f-a9677bdb5fb8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 12 13:25:32 crc kubenswrapper[4580]: I0112 13:25:32.086443 4580 generic.go:334] "Generic (PLEG): container finished" podID="3794d2c5-2a69-4c1f-955f-a9677bdb5fb8" containerID="ebd2afa94e937e705e0b4edb648cfd4bb18f8124caa5fc047fbbb7dd5c6423d4" exitCode=0
Jan 12 13:25:32 crc kubenswrapper[4580]: I0112 13:25:32.086480 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8595b94875-hcxfj" event={"ID":"3794d2c5-2a69-4c1f-955f-a9677bdb5fb8","Type":"ContainerDied","Data":"ebd2afa94e937e705e0b4edb648cfd4bb18f8124caa5fc047fbbb7dd5c6423d4"}
Jan 12 13:25:32 crc kubenswrapper[4580]: I0112 13:25:32.086502 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8595b94875-hcxfj"
Jan 12 13:25:32 crc kubenswrapper[4580]: I0112 13:25:32.086514 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8595b94875-hcxfj" event={"ID":"3794d2c5-2a69-4c1f-955f-a9677bdb5fb8","Type":"ContainerDied","Data":"667da56b4d8e02592ed14e56e177073bc91ca55bbd43b7d8594331dce5af7055"}
Jan 12 13:25:32 crc kubenswrapper[4580]: I0112 13:25:32.086533 4580 scope.go:117] "RemoveContainer" containerID="ebd2afa94e937e705e0b4edb648cfd4bb18f8124caa5fc047fbbb7dd5c6423d4"
Jan 12 13:25:32 crc kubenswrapper[4580]: I0112 13:25:32.113258 4580 scope.go:117] "RemoveContainer" containerID="61966a8fd81f3fb5a5a673e37c556903566067ad9cd76ea231e364a354e4e704"
Jan 12 13:25:32 crc kubenswrapper[4580]: I0112 13:25:32.120852 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8595b94875-hcxfj"]
Jan 12 13:25:32 crc kubenswrapper[4580]: I0112 13:25:32.127186 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8595b94875-hcxfj"]
Jan 12 13:25:32 crc kubenswrapper[4580]: I0112 13:25:32.131866 4580 scope.go:117] "RemoveContainer" containerID="ebd2afa94e937e705e0b4edb648cfd4bb18f8124caa5fc047fbbb7dd5c6423d4"
Jan 12 13:25:32 crc kubenswrapper[4580]: E0112 13:25:32.132202 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebd2afa94e937e705e0b4edb648cfd4bb18f8124caa5fc047fbbb7dd5c6423d4\": container with ID starting with ebd2afa94e937e705e0b4edb648cfd4bb18f8124caa5fc047fbbb7dd5c6423d4 not found: ID does not exist" containerID="ebd2afa94e937e705e0b4edb648cfd4bb18f8124caa5fc047fbbb7dd5c6423d4"
Jan 12 13:25:32 crc kubenswrapper[4580]: I0112 13:25:32.132241 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebd2afa94e937e705e0b4edb648cfd4bb18f8124caa5fc047fbbb7dd5c6423d4"} err="failed to get container status \"ebd2afa94e937e705e0b4edb648cfd4bb18f8124caa5fc047fbbb7dd5c6423d4\": rpc error: code = NotFound desc = could not find container \"ebd2afa94e937e705e0b4edb648cfd4bb18f8124caa5fc047fbbb7dd5c6423d4\": container with ID starting with ebd2afa94e937e705e0b4edb648cfd4bb18f8124caa5fc047fbbb7dd5c6423d4 not found: ID does not exist"
Jan 12 13:25:32 crc kubenswrapper[4580]: I0112 13:25:32.132266 4580 scope.go:117] "RemoveContainer" containerID="61966a8fd81f3fb5a5a673e37c556903566067ad9cd76ea231e364a354e4e704"
Jan 12 13:25:32 crc kubenswrapper[4580]: E0112 13:25:32.132595 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61966a8fd81f3fb5a5a673e37c556903566067ad9cd76ea231e364a354e4e704\": container with ID starting with 61966a8fd81f3fb5a5a673e37c556903566067ad9cd76ea231e364a354e4e704 not found: ID does not exist" containerID="61966a8fd81f3fb5a5a673e37c556903566067ad9cd76ea231e364a354e4e704"
Jan 12 13:25:32 crc kubenswrapper[4580]: I0112 13:25:32.132619 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61966a8fd81f3fb5a5a673e37c556903566067ad9cd76ea231e364a354e4e704"} err="failed to get container status \"61966a8fd81f3fb5a5a673e37c556903566067ad9cd76ea231e364a354e4e704\": rpc error: code = NotFound desc = could not find container \"61966a8fd81f3fb5a5a673e37c556903566067ad9cd76ea231e364a354e4e704\": container with ID starting with 61966a8fd81f3fb5a5a673e37c556903566067ad9cd76ea231e364a354e4e704 not found: ID does not exist"
Jan 12 13:25:33 crc kubenswrapper[4580]: I0112 13:25:33.291702 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3794d2c5-2a69-4c1f-955f-a9677bdb5fb8" path="/var/lib/kubelet/pods/3794d2c5-2a69-4c1f-955f-a9677bdb5fb8/volumes"
Jan 12 13:25:44 crc kubenswrapper[4580]: I0112 13:25:44.710009 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zccg8"]
Jan 12 13:25:44 crc kubenswrapper[4580]: E0112 13:25:44.710750 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4aeeb6e1-3e0f-4a47-a133-ccfca235c552" containerName="init"
Jan 12 13:25:44 crc kubenswrapper[4580]: I0112 13:25:44.710763 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="4aeeb6e1-3e0f-4a47-a133-ccfca235c552" containerName="init"
Jan 12 13:25:44 crc kubenswrapper[4580]: E0112 13:25:44.710777 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3794d2c5-2a69-4c1f-955f-a9677bdb5fb8" containerName="dnsmasq-dns"
Jan 12 13:25:44 crc kubenswrapper[4580]: I0112 13:25:44.710782 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="3794d2c5-2a69-4c1f-955f-a9677bdb5fb8" containerName="dnsmasq-dns"
Jan 12 13:25:44 crc kubenswrapper[4580]: E0112 13:25:44.710793 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4aeeb6e1-3e0f-4a47-a133-ccfca235c552" containerName="dnsmasq-dns"
Jan 12 13:25:44 crc kubenswrapper[4580]: I0112 13:25:44.710799 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="4aeeb6e1-3e0f-4a47-a133-ccfca235c552" containerName="dnsmasq-dns"
Jan 12 13:25:44 crc kubenswrapper[4580]: E0112 13:25:44.710830 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3794d2c5-2a69-4c1f-955f-a9677bdb5fb8" containerName="init"
Jan 12 13:25:44 crc kubenswrapper[4580]: I0112 13:25:44.710836 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="3794d2c5-2a69-4c1f-955f-a9677bdb5fb8" containerName="init"
Jan 12 13:25:44 crc kubenswrapper[4580]: I0112 13:25:44.711003 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="4aeeb6e1-3e0f-4a47-a133-ccfca235c552" containerName="dnsmasq-dns"
Jan 12 13:25:44 crc kubenswrapper[4580]: I0112 13:25:44.711022 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="3794d2c5-2a69-4c1f-955f-a9677bdb5fb8" containerName="dnsmasq-dns"
Jan 12 13:25:44 crc kubenswrapper[4580]: I0112 13:25:44.719786 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zccg8"
Jan 12 13:25:44 crc kubenswrapper[4580]: I0112 13:25:44.726486 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 12 13:25:44 crc kubenswrapper[4580]: I0112 13:25:44.726556 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zccg8"]
Jan 12 13:25:44 crc kubenswrapper[4580]: I0112 13:25:44.726675 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 12 13:25:44 crc kubenswrapper[4580]: I0112 13:25:44.726710 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 12 13:25:44 crc kubenswrapper[4580]: I0112 13:25:44.726830 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hm8xh"
Jan 12 13:25:44 crc kubenswrapper[4580]: I0112 13:25:44.844239 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/16ecca65-5485-450e-8a2b-06f5e3558fc6-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zccg8\" (UID: \"16ecca65-5485-450e-8a2b-06f5e3558fc6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zccg8"
Jan 12 13:25:44 crc kubenswrapper[4580]: I0112 13:25:44.844376 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/16ecca65-5485-450e-8a2b-06f5e3558fc6-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zccg8\" (UID: \"16ecca65-5485-450e-8a2b-06f5e3558fc6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zccg8"
Jan 12 13:25:44 crc kubenswrapper[4580]: I0112 13:25:44.844420 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16ecca65-5485-450e-8a2b-06f5e3558fc6-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zccg8\" (UID: \"16ecca65-5485-450e-8a2b-06f5e3558fc6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zccg8"
Jan 12 13:25:44 crc kubenswrapper[4580]: I0112 13:25:44.844488 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cggbk\" (UniqueName: \"kubernetes.io/projected/16ecca65-5485-450e-8a2b-06f5e3558fc6-kube-api-access-cggbk\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zccg8\" (UID: \"16ecca65-5485-450e-8a2b-06f5e3558fc6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zccg8"
Jan 12 13:25:44 crc kubenswrapper[4580]: I0112 13:25:44.946132 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/16ecca65-5485-450e-8a2b-06f5e3558fc6-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zccg8\" (UID: \"16ecca65-5485-450e-8a2b-06f5e3558fc6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zccg8"
Jan 12 13:25:44 crc kubenswrapper[4580]: I0112 13:25:44.946182 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16ecca65-5485-450e-8a2b-06f5e3558fc6-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zccg8\" (UID: \"16ecca65-5485-450e-8a2b-06f5e3558fc6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zccg8"
Jan 12 13:25:44 crc kubenswrapper[4580]: I0112 13:25:44.946298 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cggbk\" (UniqueName: \"kubernetes.io/projected/16ecca65-5485-450e-8a2b-06f5e3558fc6-kube-api-access-cggbk\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zccg8\" (UID: \"16ecca65-5485-450e-8a2b-06f5e3558fc6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zccg8"
Jan 12 13:25:44 crc kubenswrapper[4580]: I0112 13:25:44.946347 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/16ecca65-5485-450e-8a2b-06f5e3558fc6-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zccg8\" (UID: \"16ecca65-5485-450e-8a2b-06f5e3558fc6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zccg8"
Jan 12 13:25:44 crc kubenswrapper[4580]: I0112 13:25:44.951356 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/16ecca65-5485-450e-8a2b-06f5e3558fc6-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zccg8\" (UID: \"16ecca65-5485-450e-8a2b-06f5e3558fc6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zccg8"
Jan 12 13:25:44 crc kubenswrapper[4580]: I0112 13:25:44.951645 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/16ecca65-5485-450e-8a2b-06f5e3558fc6-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zccg8\" (UID: \"16ecca65-5485-450e-8a2b-06f5e3558fc6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zccg8"
Jan 12 13:25:44 crc kubenswrapper[4580]: I0112 13:25:44.952035 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16ecca65-5485-450e-8a2b-06f5e3558fc6-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zccg8\" (UID: \"16ecca65-5485-450e-8a2b-06f5e3558fc6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zccg8"
Jan 12 13:25:44 crc kubenswrapper[4580]: I0112 13:25:44.960204 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cggbk\" (UniqueName: \"kubernetes.io/projected/16ecca65-5485-450e-8a2b-06f5e3558fc6-kube-api-access-cggbk\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zccg8\" (UID: \"16ecca65-5485-450e-8a2b-06f5e3558fc6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zccg8"
Jan 12 13:25:45 crc kubenswrapper[4580]: I0112 13:25:45.041380 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zccg8"
Jan 12 13:25:45 crc kubenswrapper[4580]: I0112 13:25:45.194129 4580 generic.go:334] "Generic (PLEG): container finished" podID="45d6b817-38ed-4b91-b375-d0b358eaab0b" containerID="a0cd15c4edf70604c2cab0c8f19d2cd2e416ed09dc63d615f9fa44c177acecdd" exitCode=0
Jan 12 13:25:45 crc kubenswrapper[4580]: I0112 13:25:45.194183 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"45d6b817-38ed-4b91-b375-d0b358eaab0b","Type":"ContainerDied","Data":"a0cd15c4edf70604c2cab0c8f19d2cd2e416ed09dc63d615f9fa44c177acecdd"}
Jan 12 13:25:45 crc kubenswrapper[4580]: I0112 13:25:45.489562 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zccg8"]
Jan 12 13:25:45 crc kubenswrapper[4580]: W0112 13:25:45.493269 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod16ecca65_5485_450e_8a2b_06f5e3558fc6.slice/crio-b2c0bf6af4928fb387d875fa9e4fdcb6fec407b1d804e93f7737a1eaa8ab8ce3 WatchSource:0}: Error finding container b2c0bf6af4928fb387d875fa9e4fdcb6fec407b1d804e93f7737a1eaa8ab8ce3: Status 404 returned error can't find the container with id b2c0bf6af4928fb387d875fa9e4fdcb6fec407b1d804e93f7737a1eaa8ab8ce3
Jan 12 13:25:45 crc kubenswrapper[4580]: I0112 13:25:45.495795 4580 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 12 13:25:46 crc kubenswrapper[4580]: I0112 13:25:46.203986 4580 generic.go:334] "Generic (PLEG): container finished" podID="4c7dd413-5eac-4da3-ba06-0917a412956d" containerID="87037314b986a3f44d1f64b980199fb714db9af10ac71a86e73a6f51612e432f" exitCode=0
Jan 12 13:25:46 crc kubenswrapper[4580]: I0112 13:25:46.204053 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4c7dd413-5eac-4da3-ba06-0917a412956d","Type":"ContainerDied","Data":"87037314b986a3f44d1f64b980199fb714db9af10ac71a86e73a6f51612e432f"}
Jan 12 13:25:46 crc kubenswrapper[4580]: I0112 13:25:46.207959 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"45d6b817-38ed-4b91-b375-d0b358eaab0b","Type":"ContainerStarted","Data":"311f56bff0f39e5d27caa26229d8b3d05d79a34ead9eec6d7df0d481147774da"}
Jan 12 13:25:46 crc kubenswrapper[4580]: I0112 13:25:46.208305 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Jan 12 13:25:46 crc kubenswrapper[4580]: I0112 13:25:46.210417 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zccg8" event={"ID":"16ecca65-5485-450e-8a2b-06f5e3558fc6","Type":"ContainerStarted","Data":"b2c0bf6af4928fb387d875fa9e4fdcb6fec407b1d804e93f7737a1eaa8ab8ce3"}
Jan 12 13:25:46 crc kubenswrapper[4580]: I0112 13:25:46.240793 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=35.240778648 podStartE2EDuration="35.240778648s" podCreationTimestamp="2026-01-12 13:25:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:25:46.239937897 +0000 UTC m=+1145.284156587" watchObservedRunningTime="2026-01-12 13:25:46.240778648 +0000 UTC m=+1145.284997338"
Jan 12 13:25:47 crc kubenswrapper[4580]: I0112 13:25:47.223318 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4c7dd413-5eac-4da3-ba06-0917a412956d","Type":"ContainerStarted","Data":"c938dcbdc46ce391cedbee95c70c2181c08662a345e77ebbc2151871f7991dbf"}
Jan 12 13:25:47 crc kubenswrapper[4580]: I0112 13:25:47.224418 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Jan 12 13:25:47 crc kubenswrapper[4580]: I0112 13:25:47.245798 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=35.245782999 podStartE2EDuration="35.245782999s" podCreationTimestamp="2026-01-12 13:25:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:25:47.24142325 +0000 UTC m=+1146.285641940" watchObservedRunningTime="2026-01-12 13:25:47.245782999 +0000 UTC m=+1146.290001689"
Jan 12 13:25:55 crc kubenswrapper[4580]: I0112 13:25:55.308043 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zccg8" event={"ID":"16ecca65-5485-450e-8a2b-06f5e3558fc6","Type":"ContainerStarted","Data":"d44d6ea1ffe3be78ad23307b3d05d3c30a34c4f74143b696ea9858653dd58ced"}
Jan 12 13:25:55 crc kubenswrapper[4580]: I0112 13:25:55.328397 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zccg8" podStartSLOduration=2.209099745 podStartE2EDuration="11.32838479s" podCreationTimestamp="2026-01-12 13:25:44 +0000 UTC" firstStartedPulling="2026-01-12 13:25:45.495521562 +0000 UTC m=+1144.539740252" lastFinishedPulling="2026-01-12 13:25:54.614806608 +0000 UTC m=+1153.659025297" observedRunningTime="2026-01-12 13:25:55.321533526 +0000 UTC m=+1154.365752217" watchObservedRunningTime="2026-01-12 13:25:55.32838479 +0000 UTC m=+1154.372603480"
Jan 12 13:26:01 crc kubenswrapper[4580]: I0112 13:26:01.567758 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Jan 12 13:26:02 crc kubenswrapper[4580]: I0112 13:26:02.577311 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Jan 12 13:26:06 crc kubenswrapper[4580]: I0112 13:26:06.393969 4580 generic.go:334] "Generic (PLEG): container finished" podID="16ecca65-5485-450e-8a2b-06f5e3558fc6" containerID="d44d6ea1ffe3be78ad23307b3d05d3c30a34c4f74143b696ea9858653dd58ced" exitCode=0
Jan 12 13:26:06 crc kubenswrapper[4580]: I0112 13:26:06.394059 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zccg8" event={"ID":"16ecca65-5485-450e-8a2b-06f5e3558fc6","Type":"ContainerDied","Data":"d44d6ea1ffe3be78ad23307b3d05d3c30a34c4f74143b696ea9858653dd58ced"}
Jan 12 13:26:07 crc kubenswrapper[4580]: I0112 13:26:07.752686 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zccg8"
Jan 12 13:26:07 crc kubenswrapper[4580]: I0112 13:26:07.847043 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/16ecca65-5485-450e-8a2b-06f5e3558fc6-inventory\") pod \"16ecca65-5485-450e-8a2b-06f5e3558fc6\" (UID: \"16ecca65-5485-450e-8a2b-06f5e3558fc6\") "
Jan 12 13:26:07 crc kubenswrapper[4580]: I0112 13:26:07.847205 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cggbk\" (UniqueName: \"kubernetes.io/projected/16ecca65-5485-450e-8a2b-06f5e3558fc6-kube-api-access-cggbk\") pod \"16ecca65-5485-450e-8a2b-06f5e3558fc6\" (UID: \"16ecca65-5485-450e-8a2b-06f5e3558fc6\") "
Jan 12 13:26:07 crc kubenswrapper[4580]: I0112 13:26:07.847384 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16ecca65-5485-450e-8a2b-06f5e3558fc6-repo-setup-combined-ca-bundle\") pod \"16ecca65-5485-450e-8a2b-06f5e3558fc6\" (UID: \"16ecca65-5485-450e-8a2b-06f5e3558fc6\") "
Jan 12 13:26:07 crc kubenswrapper[4580]: I0112 13:26:07.847481 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/16ecca65-5485-450e-8a2b-06f5e3558fc6-ssh-key-openstack-edpm-ipam\") pod \"16ecca65-5485-450e-8a2b-06f5e3558fc6\" (UID: \"16ecca65-5485-450e-8a2b-06f5e3558fc6\") "
Jan 12 13:26:07 crc kubenswrapper[4580]: I0112 13:26:07.853543 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16ecca65-5485-450e-8a2b-06f5e3558fc6-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "16ecca65-5485-450e-8a2b-06f5e3558fc6" (UID: "16ecca65-5485-450e-8a2b-06f5e3558fc6"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:26:07 crc kubenswrapper[4580]: I0112 13:26:07.853609 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16ecca65-5485-450e-8a2b-06f5e3558fc6-kube-api-access-cggbk" (OuterVolumeSpecName: "kube-api-access-cggbk") pod "16ecca65-5485-450e-8a2b-06f5e3558fc6" (UID: "16ecca65-5485-450e-8a2b-06f5e3558fc6"). InnerVolumeSpecName "kube-api-access-cggbk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 13:26:07 crc kubenswrapper[4580]: I0112 13:26:07.871721 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16ecca65-5485-450e-8a2b-06f5e3558fc6-inventory" (OuterVolumeSpecName: "inventory") pod "16ecca65-5485-450e-8a2b-06f5e3558fc6" (UID: "16ecca65-5485-450e-8a2b-06f5e3558fc6"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:26:07 crc kubenswrapper[4580]: I0112 13:26:07.874214 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16ecca65-5485-450e-8a2b-06f5e3558fc6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "16ecca65-5485-450e-8a2b-06f5e3558fc6" (UID: "16ecca65-5485-450e-8a2b-06f5e3558fc6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:26:07 crc kubenswrapper[4580]: I0112 13:26:07.949392 4580 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16ecca65-5485-450e-8a2b-06f5e3558fc6-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 12 13:26:07 crc kubenswrapper[4580]: I0112 13:26:07.949548 4580 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/16ecca65-5485-450e-8a2b-06f5e3558fc6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 12 13:26:07 crc kubenswrapper[4580]: I0112 13:26:07.949562 4580 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/16ecca65-5485-450e-8a2b-06f5e3558fc6-inventory\") on node \"crc\" DevicePath \"\""
Jan 12 13:26:07 crc kubenswrapper[4580]: I0112 13:26:07.949573 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cggbk\" (UniqueName: \"kubernetes.io/projected/16ecca65-5485-450e-8a2b-06f5e3558fc6-kube-api-access-cggbk\") on node \"crc\" DevicePath \"\""
Jan 12 13:26:08 crc kubenswrapper[4580]: I0112 13:26:08.412415 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zccg8" event={"ID":"16ecca65-5485-450e-8a2b-06f5e3558fc6","Type":"ContainerDied","Data":"b2c0bf6af4928fb387d875fa9e4fdcb6fec407b1d804e93f7737a1eaa8ab8ce3"}
Jan 12 13:26:08 crc kubenswrapper[4580]: I0112 13:26:08.412868 4580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2c0bf6af4928fb387d875fa9e4fdcb6fec407b1d804e93f7737a1eaa8ab8ce3"
Jan 12 13:26:08 crc kubenswrapper[4580]: I0112 13:26:08.412470 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zccg8"
Jan 12 13:26:08 crc kubenswrapper[4580]: I0112 13:26:08.487279 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-xkg7q"]
Jan 12 13:26:08 crc kubenswrapper[4580]: E0112 13:26:08.487901 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16ecca65-5485-450e-8a2b-06f5e3558fc6" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Jan 12 13:26:08 crc kubenswrapper[4580]: I0112 13:26:08.487932 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="16ecca65-5485-450e-8a2b-06f5e3558fc6" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Jan 12 13:26:08 crc kubenswrapper[4580]: I0112 13:26:08.488151 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="16ecca65-5485-450e-8a2b-06f5e3558fc6" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Jan 12 13:26:08 crc kubenswrapper[4580]: I0112 13:26:08.488876 4580 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xkg7q"
Jan 12 13:26:08 crc kubenswrapper[4580]: I0112 13:26:08.492918 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 12 13:26:08 crc kubenswrapper[4580]: I0112 13:26:08.493976 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 12 13:26:08 crc kubenswrapper[4580]: I0112 13:26:08.494236 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 12 13:26:08 crc kubenswrapper[4580]: I0112 13:26:08.494837 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-xkg7q"]
Jan 12 13:26:08 crc kubenswrapper[4580]: I0112 13:26:08.495233 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hm8xh"
Jan 12 13:26:08 crc kubenswrapper[4580]: I0112 13:26:08.563257 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mjm6\" (UniqueName: \"kubernetes.io/projected/007f6af6-a125-443f-a2ff-1b1322aefca5-kube-api-access-8mjm6\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xkg7q\" (UID: \"007f6af6-a125-443f-a2ff-1b1322aefca5\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xkg7q"
Jan 12 13:26:08 crc kubenswrapper[4580]: I0112 13:26:08.563364 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/007f6af6-a125-443f-a2ff-1b1322aefca5-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xkg7q\" (UID: \"007f6af6-a125-443f-a2ff-1b1322aefca5\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xkg7q"
Jan 12 13:26:08 crc kubenswrapper[4580]: I0112 13:26:08.563653 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/007f6af6-a125-443f-a2ff-1b1322aefca5-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xkg7q\" (UID: \"007f6af6-a125-443f-a2ff-1b1322aefca5\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xkg7q"
Jan 12 13:26:08 crc kubenswrapper[4580]: I0112 13:26:08.665958 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mjm6\" (UniqueName: \"kubernetes.io/projected/007f6af6-a125-443f-a2ff-1b1322aefca5-kube-api-access-8mjm6\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xkg7q\" (UID: \"007f6af6-a125-443f-a2ff-1b1322aefca5\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xkg7q"
Jan 12 13:26:08 crc kubenswrapper[4580]: I0112 13:26:08.666008 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/007f6af6-a125-443f-a2ff-1b1322aefca5-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xkg7q\" (UID: \"007f6af6-a125-443f-a2ff-1b1322aefca5\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xkg7q"
Jan 12 13:26:08 crc kubenswrapper[4580]: I0112 13:26:08.666166 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/007f6af6-a125-443f-a2ff-1b1322aefca5-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xkg7q\" (UID: \"007f6af6-a125-443f-a2ff-1b1322aefca5\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xkg7q"
Jan 12 13:26:08 crc kubenswrapper[4580]: I0112 13:26:08.671731 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/007f6af6-a125-443f-a2ff-1b1322aefca5-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xkg7q\" (UID: \"007f6af6-a125-443f-a2ff-1b1322aefca5\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xkg7q"
Jan 12 13:26:08 crc kubenswrapper[4580]: I0112 13:26:08.671984 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/007f6af6-a125-443f-a2ff-1b1322aefca5-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xkg7q\" (UID: \"007f6af6-a125-443f-a2ff-1b1322aefca5\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xkg7q"
Jan 12 13:26:08 crc kubenswrapper[4580]: I0112 13:26:08.679409 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mjm6\" (UniqueName: \"kubernetes.io/projected/007f6af6-a125-443f-a2ff-1b1322aefca5-kube-api-access-8mjm6\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xkg7q\" (UID: \"007f6af6-a125-443f-a2ff-1b1322aefca5\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xkg7q"
Jan 12 13:26:08 crc kubenswrapper[4580]: I0112 13:26:08.803068 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xkg7q"
Jan 12 13:26:09 crc kubenswrapper[4580]: I0112 13:26:09.261807 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-xkg7q"]
Jan 12 13:26:09 crc kubenswrapper[4580]: I0112 13:26:09.426048 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xkg7q" event={"ID":"007f6af6-a125-443f-a2ff-1b1322aefca5","Type":"ContainerStarted","Data":"636f6f45ea3f7d47c8f4e30a59c23b89bd3faaee4bb19703c74036f357a13a13"}
Jan 12 13:26:10 crc kubenswrapper[4580]: I0112 13:26:10.434637 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xkg7q" event={"ID":"007f6af6-a125-443f-a2ff-1b1322aefca5","Type":"ContainerStarted","Data":"982d91216b4dfdca591d0cbedd82772866855e77797b92dec1f43690f2af1809"}
Jan 12 13:26:10 crc kubenswrapper[4580]: I0112 13:26:10.452826 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xkg7q" podStartSLOduration=1.8176237130000001 podStartE2EDuration="2.452809415s" podCreationTimestamp="2026-01-12 13:26:08 +0000 UTC" firstStartedPulling="2026-01-12 13:26:09.268325495 +0000 UTC m=+1168.312544185" lastFinishedPulling="2026-01-12 13:26:09.903511198 +0000 UTC m=+1168.947729887" observedRunningTime="2026-01-12 13:26:10.445647387 +0000 UTC m=+1169.489866076" watchObservedRunningTime="2026-01-12 13:26:10.452809415 +0000 UTC m=+1169.497028105"
Jan 12 13:26:12 crc kubenswrapper[4580]: I0112 13:26:12.452810 4580 generic.go:334] "Generic (PLEG): container finished" podID="007f6af6-a125-443f-a2ff-1b1322aefca5" containerID="982d91216b4dfdca591d0cbedd82772866855e77797b92dec1f43690f2af1809" exitCode=0
Jan 12 13:26:12 crc kubenswrapper[4580]: I0112 13:26:12.452913 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xkg7q" event={"ID":"007f6af6-a125-443f-a2ff-1b1322aefca5","Type":"ContainerDied","Data":"982d91216b4dfdca591d0cbedd82772866855e77797b92dec1f43690f2af1809"}
Jan 12 13:26:13 crc kubenswrapper[4580]: I0112 13:26:13.775568 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xkg7q"
Jan 12 13:26:13 crc kubenswrapper[4580]: I0112 13:26:13.969796 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/007f6af6-a125-443f-a2ff-1b1322aefca5-ssh-key-openstack-edpm-ipam\") pod \"007f6af6-a125-443f-a2ff-1b1322aefca5\" (UID: \"007f6af6-a125-443f-a2ff-1b1322aefca5\") "
Jan 12 13:26:13 crc kubenswrapper[4580]: I0112 13:26:13.969872 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/007f6af6-a125-443f-a2ff-1b1322aefca5-inventory\") pod \"007f6af6-a125-443f-a2ff-1b1322aefca5\" (UID: \"007f6af6-a125-443f-a2ff-1b1322aefca5\") "
Jan 12 13:26:13 crc kubenswrapper[4580]: I0112 13:26:13.970301 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mjm6\" (UniqueName: \"kubernetes.io/projected/007f6af6-a125-443f-a2ff-1b1322aefca5-kube-api-access-8mjm6\") pod \"007f6af6-a125-443f-a2ff-1b1322aefca5\" (UID: \"007f6af6-a125-443f-a2ff-1b1322aefca5\") "
Jan 12 13:26:13 crc kubenswrapper[4580]: I0112 13:26:13.975488 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/007f6af6-a125-443f-a2ff-1b1322aefca5-kube-api-access-8mjm6" (OuterVolumeSpecName: "kube-api-access-8mjm6") pod "007f6af6-a125-443f-a2ff-1b1322aefca5" (UID: "007f6af6-a125-443f-a2ff-1b1322aefca5"). InnerVolumeSpecName "kube-api-access-8mjm6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 13:26:13 crc kubenswrapper[4580]: I0112 13:26:13.994815 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/007f6af6-a125-443f-a2ff-1b1322aefca5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "007f6af6-a125-443f-a2ff-1b1322aefca5" (UID: "007f6af6-a125-443f-a2ff-1b1322aefca5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:26:13 crc kubenswrapper[4580]: I0112 13:26:13.995490 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/007f6af6-a125-443f-a2ff-1b1322aefca5-inventory" (OuterVolumeSpecName: "inventory") pod "007f6af6-a125-443f-a2ff-1b1322aefca5" (UID: "007f6af6-a125-443f-a2ff-1b1322aefca5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:26:14 crc kubenswrapper[4580]: I0112 13:26:14.073168 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8mjm6\" (UniqueName: \"kubernetes.io/projected/007f6af6-a125-443f-a2ff-1b1322aefca5-kube-api-access-8mjm6\") on node \"crc\" DevicePath \"\""
Jan 12 13:26:14 crc kubenswrapper[4580]: I0112 13:26:14.073203 4580 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/007f6af6-a125-443f-a2ff-1b1322aefca5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 12 13:26:14 crc kubenswrapper[4580]: I0112 13:26:14.073217 4580 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/007f6af6-a125-443f-a2ff-1b1322aefca5-inventory\") on node \"crc\" DevicePath \"\""
Jan 12 13:26:14 crc kubenswrapper[4580]: I0112 13:26:14.471227 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xkg7q" event={"ID":"007f6af6-a125-443f-a2ff-1b1322aefca5","Type":"ContainerDied","Data":"636f6f45ea3f7d47c8f4e30a59c23b89bd3faaee4bb19703c74036f357a13a13"}
Jan 12 13:26:14 crc kubenswrapper[4580]: I0112 13:26:14.471508 4580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="636f6f45ea3f7d47c8f4e30a59c23b89bd3faaee4bb19703c74036f357a13a13"
Jan 12 13:26:14 crc kubenswrapper[4580]: I0112 13:26:14.471276 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xkg7q"
Jan 12 13:26:14 crc kubenswrapper[4580]: I0112 13:26:14.523973 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pjj7j"]
Jan 12 13:26:14 crc kubenswrapper[4580]: E0112 13:26:14.524449 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="007f6af6-a125-443f-a2ff-1b1322aefca5" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Jan 12 13:26:14 crc kubenswrapper[4580]: I0112 13:26:14.524469 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="007f6af6-a125-443f-a2ff-1b1322aefca5" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Jan 12 13:26:14 crc kubenswrapper[4580]: I0112 13:26:14.524684 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="007f6af6-a125-443f-a2ff-1b1322aefca5" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Jan 12 13:26:14 crc kubenswrapper[4580]: I0112 13:26:14.525341 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pjj7j"
Jan 12 13:26:14 crc kubenswrapper[4580]: I0112 13:26:14.526848 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 12 13:26:14 crc kubenswrapper[4580]: I0112 13:26:14.526863 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 12 13:26:14 crc kubenswrapper[4580]: I0112 13:26:14.529760 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hm8xh"
Jan 12 13:26:14 crc kubenswrapper[4580]: I0112 13:26:14.530049 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 12 13:26:14 crc kubenswrapper[4580]: I0112 13:26:14.530380 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pjj7j"]
Jan 12 13:26:14 crc kubenswrapper[4580]: I0112 13:26:14.681891 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2a4039fd-f1bf-4fdd-881a-192b4b4c8a35-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pjj7j\" (UID: \"2a4039fd-f1bf-4fdd-881a-192b4b4c8a35\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pjj7j"
Jan 12 13:26:14 crc kubenswrapper[4580]: I0112 13:26:14.682569 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a4039fd-f1bf-4fdd-881a-192b4b4c8a35-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pjj7j\" (UID: \"2a4039fd-f1bf-4fdd-881a-192b4b4c8a35\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pjj7j"
Jan 12 13:26:14 crc kubenswrapper[4580]: I0112 13:26:14.682741 4580
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8fz9\" (UniqueName: \"kubernetes.io/projected/2a4039fd-f1bf-4fdd-881a-192b4b4c8a35-kube-api-access-d8fz9\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pjj7j\" (UID: \"2a4039fd-f1bf-4fdd-881a-192b4b4c8a35\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pjj7j"
Jan 12 13:26:14 crc kubenswrapper[4580]: I0112 13:26:14.682827 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2a4039fd-f1bf-4fdd-881a-192b4b4c8a35-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pjj7j\" (UID: \"2a4039fd-f1bf-4fdd-881a-192b4b4c8a35\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pjj7j"
Jan 12 13:26:14 crc kubenswrapper[4580]: I0112 13:26:14.784701 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a4039fd-f1bf-4fdd-881a-192b4b4c8a35-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pjj7j\" (UID: \"2a4039fd-f1bf-4fdd-881a-192b4b4c8a35\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pjj7j"
Jan 12 13:26:14 crc kubenswrapper[4580]: I0112 13:26:14.785436 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8fz9\" (UniqueName: \"kubernetes.io/projected/2a4039fd-f1bf-4fdd-881a-192b4b4c8a35-kube-api-access-d8fz9\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pjj7j\" (UID: \"2a4039fd-f1bf-4fdd-881a-192b4b4c8a35\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pjj7j"
Jan 12 13:26:14 crc kubenswrapper[4580]: I0112 13:26:14.785597 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2a4039fd-f1bf-4fdd-881a-192b4b4c8a35-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pjj7j\" (UID: \"2a4039fd-f1bf-4fdd-881a-192b4b4c8a35\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pjj7j"
Jan 12 13:26:14 crc kubenswrapper[4580]: I0112 13:26:14.785642 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2a4039fd-f1bf-4fdd-881a-192b4b4c8a35-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pjj7j\" (UID: \"2a4039fd-f1bf-4fdd-881a-192b4b4c8a35\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pjj7j"
Jan 12 13:26:14 crc kubenswrapper[4580]: I0112 13:26:14.788601 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2a4039fd-f1bf-4fdd-881a-192b4b4c8a35-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pjj7j\" (UID: \"2a4039fd-f1bf-4fdd-881a-192b4b4c8a35\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pjj7j"
Jan 12 13:26:14 crc kubenswrapper[4580]: I0112 13:26:14.788639 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a4039fd-f1bf-4fdd-881a-192b4b4c8a35-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pjj7j\" (UID: \"2a4039fd-f1bf-4fdd-881a-192b4b4c8a35\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pjj7j"
Jan 12 13:26:14 crc kubenswrapper[4580]: I0112 13:26:14.789239 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2a4039fd-f1bf-4fdd-881a-192b4b4c8a35-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pjj7j\" (UID: \"2a4039fd-f1bf-4fdd-881a-192b4b4c8a35\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pjj7j"
Jan 12 13:26:14 crc kubenswrapper[4580]: I0112 13:26:14.798214 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8fz9\" (UniqueName: \"kubernetes.io/projected/2a4039fd-f1bf-4fdd-881a-192b4b4c8a35-kube-api-access-d8fz9\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pjj7j\" (UID: \"2a4039fd-f1bf-4fdd-881a-192b4b4c8a35\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pjj7j"
Jan 12 13:26:14 crc kubenswrapper[4580]: I0112 13:26:14.838055 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pjj7j"
Jan 12 13:26:15 crc kubenswrapper[4580]: I0112 13:26:15.270552 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pjj7j"]
Jan 12 13:26:15 crc kubenswrapper[4580]: I0112 13:26:15.480176 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pjj7j" event={"ID":"2a4039fd-f1bf-4fdd-881a-192b4b4c8a35","Type":"ContainerStarted","Data":"2106dd30ed2c681662a71cb0e0aee8ceebc8278c40a9ec37bf94852d1dae6499"}
Jan 12 13:26:16 crc kubenswrapper[4580]: I0112 13:26:16.489196 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pjj7j" event={"ID":"2a4039fd-f1bf-4fdd-881a-192b4b4c8a35","Type":"ContainerStarted","Data":"3b055350713553e68a6b83d59f9240d30437a7bc88b3e810ca8a7462697f5b94"}
Jan 12 13:26:16 crc kubenswrapper[4580]: I0112 13:26:16.502601 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pjj7j" podStartSLOduration=1.904091462 podStartE2EDuration="2.502585854s" podCreationTimestamp="2026-01-12 13:26:14 +0000 UTC" firstStartedPulling="2026-01-12 13:26:15.276935881 +0000 UTC m=+1174.321154571" lastFinishedPulling="2026-01-12 13:26:15.875430274 +0000 UTC m=+1174.919648963" observedRunningTime="2026-01-12 13:26:16.500391178 +0000 UTC m=+1175.544609867" watchObservedRunningTime="2026-01-12 13:26:16.502585854 +0000 UTC m=+1175.546804545"
Jan 12 13:26:46 crc kubenswrapper[4580]: I0112 13:26:46.949793 4580 patch_prober.go:28] interesting pod/machine-config-daemon-hdz6l container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 12 13:26:46 crc kubenswrapper[4580]: I0112 13:26:46.950340 4580 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 12 13:26:54 crc kubenswrapper[4580]: I0112 13:26:54.624437 4580 scope.go:117] "RemoveContainer" containerID="4b3acc025cd3bd8394f140c85823c905b06c0aaac0167239877ee597b24fabbb"
Jan 12 13:27:16 crc kubenswrapper[4580]: I0112 13:27:16.949790 4580 patch_prober.go:28] interesting pod/machine-config-daemon-hdz6l container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 12 13:27:16 crc kubenswrapper[4580]: I0112 13:27:16.950314 4580 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 12 13:27:46 crc kubenswrapper[4580]: I0112 13:27:46.949123 4580
patch_prober.go:28] interesting pod/machine-config-daemon-hdz6l container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 12 13:27:46 crc kubenswrapper[4580]: I0112 13:27:46.949511 4580 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 12 13:27:46 crc kubenswrapper[4580]: I0112 13:27:46.949553 4580 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l"
Jan 12 13:27:46 crc kubenswrapper[4580]: I0112 13:27:46.950263 4580 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0804525f520200773e09490adee4c80bb3967d1eb56f3e87d1a77a748cd87b06"} pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 12 13:27:46 crc kubenswrapper[4580]: I0112 13:27:46.950313 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" containerName="machine-config-daemon" containerID="cri-o://0804525f520200773e09490adee4c80bb3967d1eb56f3e87d1a77a748cd87b06" gracePeriod=600
Jan 12 13:27:47 crc kubenswrapper[4580]: I0112 13:27:47.171213 4580 generic.go:334] "Generic (PLEG): container finished" podID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" containerID="0804525f520200773e09490adee4c80bb3967d1eb56f3e87d1a77a748cd87b06" exitCode=0
Jan 12 13:27:47 crc kubenswrapper[4580]: I0112 13:27:47.171277 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" event={"ID":"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b","Type":"ContainerDied","Data":"0804525f520200773e09490adee4c80bb3967d1eb56f3e87d1a77a748cd87b06"}
Jan 12 13:27:47 crc kubenswrapper[4580]: I0112 13:27:47.171384 4580 scope.go:117] "RemoveContainer" containerID="62195f179f376ea4916eddf796027fa5a80271672d3171f47fa9237f1c01b2a4"
Jan 12 13:27:48 crc kubenswrapper[4580]: I0112 13:27:48.185090 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" event={"ID":"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b","Type":"ContainerStarted","Data":"e7c14f11ee163df37acee7c0a47ee3b8e21b57ddc7953c9ff079f4afda394d2b"}
Jan 12 13:27:54 crc kubenswrapper[4580]: I0112 13:27:54.674972 4580 scope.go:117] "RemoveContainer" containerID="84b2273d9e2a668302d2b30f68341de15e75b7c74fba0cc4eb8e482c80b94806"
Jan 12 13:27:54 crc kubenswrapper[4580]: I0112 13:27:54.691642 4580 scope.go:117] "RemoveContainer" containerID="90f5149edfeb6f80aefd3387cd72cce809098bb0aab1765107a48736927a6a28"
Jan 12 13:27:54 crc kubenswrapper[4580]: I0112 13:27:54.708936 4580 scope.go:117] "RemoveContainer" containerID="4a528aa57c41d5d1c370a68e7e5a32fffe0542a02a3c390991ea6f4179e34318"
Jan 12 13:27:54 crc kubenswrapper[4580]: I0112 13:27:54.748760 4580 scope.go:117] "RemoveContainer" containerID="d4c16316be0419bbe254426d82941cfa3523c3d37dfc7cae073ace47bf297952"
Jan 12 13:27:54 crc kubenswrapper[4580]: I0112 13:27:54.775493 4580 scope.go:117] "RemoveContainer" containerID="74c1e9a109071b596d005186baf53e456b93891ceb51948650dc0ba9c9fdd577"
Jan 12 13:29:20 crc kubenswrapper[4580]: I0112 13:29:20.812732 4580 generic.go:334] "Generic (PLEG): container finished" podID="2a4039fd-f1bf-4fdd-881a-192b4b4c8a35" containerID="3b055350713553e68a6b83d59f9240d30437a7bc88b3e810ca8a7462697f5b94" exitCode=0
Jan 12 13:29:20 crc kubenswrapper[4580]: I0112 13:29:20.812808 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pjj7j" event={"ID":"2a4039fd-f1bf-4fdd-881a-192b4b4c8a35","Type":"ContainerDied","Data":"3b055350713553e68a6b83d59f9240d30437a7bc88b3e810ca8a7462697f5b94"}
Jan 12 13:29:22 crc kubenswrapper[4580]: I0112 13:29:22.136308 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pjj7j"
Jan 12 13:29:22 crc kubenswrapper[4580]: I0112 13:29:22.155152 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2a4039fd-f1bf-4fdd-881a-192b4b4c8a35-ssh-key-openstack-edpm-ipam\") pod \"2a4039fd-f1bf-4fdd-881a-192b4b4c8a35\" (UID: \"2a4039fd-f1bf-4fdd-881a-192b4b4c8a35\") "
Jan 12 13:29:22 crc kubenswrapper[4580]: I0112 13:29:22.155292 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a4039fd-f1bf-4fdd-881a-192b4b4c8a35-bootstrap-combined-ca-bundle\") pod \"2a4039fd-f1bf-4fdd-881a-192b4b4c8a35\" (UID: \"2a4039fd-f1bf-4fdd-881a-192b4b4c8a35\") "
Jan 12 13:29:22 crc kubenswrapper[4580]: I0112 13:29:22.155378 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8fz9\" (UniqueName: \"kubernetes.io/projected/2a4039fd-f1bf-4fdd-881a-192b4b4c8a35-kube-api-access-d8fz9\") pod \"2a4039fd-f1bf-4fdd-881a-192b4b4c8a35\" (UID: \"2a4039fd-f1bf-4fdd-881a-192b4b4c8a35\") "
Jan 12 13:29:22 crc kubenswrapper[4580]: I0112 13:29:22.155501 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2a4039fd-f1bf-4fdd-881a-192b4b4c8a35-inventory\") pod \"2a4039fd-f1bf-4fdd-881a-192b4b4c8a35\" (UID: \"2a4039fd-f1bf-4fdd-881a-192b4b4c8a35\") "
Jan 12 13:29:22 crc kubenswrapper[4580]: I0112 13:29:22.159321 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a4039fd-f1bf-4fdd-881a-192b4b4c8a35-kube-api-access-d8fz9" (OuterVolumeSpecName: "kube-api-access-d8fz9") pod "2a4039fd-f1bf-4fdd-881a-192b4b4c8a35" (UID: "2a4039fd-f1bf-4fdd-881a-192b4b4c8a35"). InnerVolumeSpecName "kube-api-access-d8fz9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 13:29:22 crc kubenswrapper[4580]: I0112 13:29:22.159659 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a4039fd-f1bf-4fdd-881a-192b4b4c8a35-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "2a4039fd-f1bf-4fdd-881a-192b4b4c8a35" (UID: "2a4039fd-f1bf-4fdd-881a-192b4b4c8a35"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:29:22 crc kubenswrapper[4580]: I0112 13:29:22.176553 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a4039fd-f1bf-4fdd-881a-192b4b4c8a35-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2a4039fd-f1bf-4fdd-881a-192b4b4c8a35" (UID: "2a4039fd-f1bf-4fdd-881a-192b4b4c8a35"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:29:22 crc kubenswrapper[4580]: I0112 13:29:22.176892 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a4039fd-f1bf-4fdd-881a-192b4b4c8a35-inventory" (OuterVolumeSpecName: "inventory") pod "2a4039fd-f1bf-4fdd-881a-192b4b4c8a35" (UID: "2a4039fd-f1bf-4fdd-881a-192b4b4c8a35"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:29:22 crc kubenswrapper[4580]: I0112 13:29:22.257646 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8fz9\" (UniqueName: \"kubernetes.io/projected/2a4039fd-f1bf-4fdd-881a-192b4b4c8a35-kube-api-access-d8fz9\") on node \"crc\" DevicePath \"\""
Jan 12 13:29:22 crc kubenswrapper[4580]: I0112 13:29:22.257673 4580 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2a4039fd-f1bf-4fdd-881a-192b4b4c8a35-inventory\") on node \"crc\" DevicePath \"\""
Jan 12 13:29:22 crc kubenswrapper[4580]: I0112 13:29:22.257684 4580 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2a4039fd-f1bf-4fdd-881a-192b4b4c8a35-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 12 13:29:22 crc kubenswrapper[4580]: I0112 13:29:22.257692 4580 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a4039fd-f1bf-4fdd-881a-192b4b4c8a35-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 12 13:29:22 crc kubenswrapper[4580]: I0112 13:29:22.827726 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pjj7j" event={"ID":"2a4039fd-f1bf-4fdd-881a-192b4b4c8a35","Type":"ContainerDied","Data":"2106dd30ed2c681662a71cb0e0aee8ceebc8278c40a9ec37bf94852d1dae6499"}
Jan 12 13:29:22 crc kubenswrapper[4580]: I0112 13:29:22.827956 4580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2106dd30ed2c681662a71cb0e0aee8ceebc8278c40a9ec37bf94852d1dae6499"
Jan 12 13:29:22 crc kubenswrapper[4580]: I0112 13:29:22.827970 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pjj7j"
Jan 12 13:29:22 crc kubenswrapper[4580]: I0112 13:29:22.885428 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8krps"]
Jan 12 13:29:22 crc kubenswrapper[4580]: E0112 13:29:22.885810 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a4039fd-f1bf-4fdd-881a-192b4b4c8a35" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Jan 12 13:29:22 crc kubenswrapper[4580]: I0112 13:29:22.885828 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a4039fd-f1bf-4fdd-881a-192b4b4c8a35" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Jan 12 13:29:22 crc kubenswrapper[4580]: I0112 13:29:22.885997 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a4039fd-f1bf-4fdd-881a-192b4b4c8a35" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Jan 12 13:29:22 crc kubenswrapper[4580]: I0112 13:29:22.886598 4580 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8krps" Jan 12 13:29:22 crc kubenswrapper[4580]: I0112 13:29:22.890630 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hm8xh" Jan 12 13:29:22 crc kubenswrapper[4580]: I0112 13:29:22.890560 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 12 13:29:22 crc kubenswrapper[4580]: I0112 13:29:22.890893 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 12 13:29:22 crc kubenswrapper[4580]: I0112 13:29:22.890902 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 12 13:29:22 crc kubenswrapper[4580]: I0112 13:29:22.895499 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8krps"] Jan 12 13:29:22 crc kubenswrapper[4580]: I0112 13:29:22.968213 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrc4p\" (UniqueName: \"kubernetes.io/projected/3dce5050-a090-4782-a068-efafd359455a-kube-api-access-lrc4p\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-8krps\" (UID: \"3dce5050-a090-4782-a068-efafd359455a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8krps" Jan 12 13:29:22 crc kubenswrapper[4580]: I0112 13:29:22.968324 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3dce5050-a090-4782-a068-efafd359455a-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-8krps\" (UID: \"3dce5050-a090-4782-a068-efafd359455a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8krps" Jan 12 13:29:22 crc kubenswrapper[4580]: I0112 
13:29:22.968361 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3dce5050-a090-4782-a068-efafd359455a-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-8krps\" (UID: \"3dce5050-a090-4782-a068-efafd359455a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8krps" Jan 12 13:29:23 crc kubenswrapper[4580]: I0112 13:29:23.070594 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrc4p\" (UniqueName: \"kubernetes.io/projected/3dce5050-a090-4782-a068-efafd359455a-kube-api-access-lrc4p\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-8krps\" (UID: \"3dce5050-a090-4782-a068-efafd359455a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8krps" Jan 12 13:29:23 crc kubenswrapper[4580]: I0112 13:29:23.070767 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3dce5050-a090-4782-a068-efafd359455a-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-8krps\" (UID: \"3dce5050-a090-4782-a068-efafd359455a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8krps" Jan 12 13:29:23 crc kubenswrapper[4580]: I0112 13:29:23.070834 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3dce5050-a090-4782-a068-efafd359455a-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-8krps\" (UID: \"3dce5050-a090-4782-a068-efafd359455a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8krps" Jan 12 13:29:23 crc kubenswrapper[4580]: I0112 13:29:23.074532 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/3dce5050-a090-4782-a068-efafd359455a-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-8krps\" (UID: \"3dce5050-a090-4782-a068-efafd359455a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8krps" Jan 12 13:29:23 crc kubenswrapper[4580]: I0112 13:29:23.074550 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3dce5050-a090-4782-a068-efafd359455a-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-8krps\" (UID: \"3dce5050-a090-4782-a068-efafd359455a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8krps" Jan 12 13:29:23 crc kubenswrapper[4580]: I0112 13:29:23.084032 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrc4p\" (UniqueName: \"kubernetes.io/projected/3dce5050-a090-4782-a068-efafd359455a-kube-api-access-lrc4p\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-8krps\" (UID: \"3dce5050-a090-4782-a068-efafd359455a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8krps" Jan 12 13:29:23 crc kubenswrapper[4580]: I0112 13:29:23.200896 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8krps" Jan 12 13:29:23 crc kubenswrapper[4580]: I0112 13:29:23.626466 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8krps"] Jan 12 13:29:23 crc kubenswrapper[4580]: I0112 13:29:23.835386 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8krps" event={"ID":"3dce5050-a090-4782-a068-efafd359455a","Type":"ContainerStarted","Data":"7e92c0e43c3b8d140299906be6988106e1f3702a4a760a84651392394281662b"} Jan 12 13:29:24 crc kubenswrapper[4580]: I0112 13:29:24.843479 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8krps" event={"ID":"3dce5050-a090-4782-a068-efafd359455a","Type":"ContainerStarted","Data":"2e0ccdf091b36ea43ae6df1d3d7065954457b01745e663ec688447471bfbddd4"} Jan 12 13:29:24 crc kubenswrapper[4580]: I0112 13:29:24.855255 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8krps" podStartSLOduration=2.2676015 podStartE2EDuration="2.855242287s" podCreationTimestamp="2026-01-12 13:29:22 +0000 UTC" firstStartedPulling="2026-01-12 13:29:23.628365329 +0000 UTC m=+1362.672584019" lastFinishedPulling="2026-01-12 13:29:24.216006117 +0000 UTC m=+1363.260224806" observedRunningTime="2026-01-12 13:29:24.853431193 +0000 UTC m=+1363.897649883" watchObservedRunningTime="2026-01-12 13:29:24.855242287 +0000 UTC m=+1363.899460977" Jan 12 13:30:00 crc kubenswrapper[4580]: I0112 13:30:00.131410 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29470410-bqc58"] Jan 12 13:30:00 crc kubenswrapper[4580]: I0112 13:30:00.133042 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29470410-bqc58" Jan 12 13:30:00 crc kubenswrapper[4580]: I0112 13:30:00.135003 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 12 13:30:00 crc kubenswrapper[4580]: I0112 13:30:00.135072 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 12 13:30:00 crc kubenswrapper[4580]: I0112 13:30:00.139566 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29470410-bqc58"] Jan 12 13:30:00 crc kubenswrapper[4580]: I0112 13:30:00.203968 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b9f196d-5c47-4afc-b144-881c5fe48eaa-config-volume\") pod \"collect-profiles-29470410-bqc58\" (UID: \"0b9f196d-5c47-4afc-b144-881c5fe48eaa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29470410-bqc58" Jan 12 13:30:00 crc kubenswrapper[4580]: I0112 13:30:00.204122 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5t94b\" (UniqueName: \"kubernetes.io/projected/0b9f196d-5c47-4afc-b144-881c5fe48eaa-kube-api-access-5t94b\") pod \"collect-profiles-29470410-bqc58\" (UID: \"0b9f196d-5c47-4afc-b144-881c5fe48eaa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29470410-bqc58" Jan 12 13:30:00 crc kubenswrapper[4580]: I0112 13:30:00.204153 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0b9f196d-5c47-4afc-b144-881c5fe48eaa-secret-volume\") pod \"collect-profiles-29470410-bqc58\" (UID: \"0b9f196d-5c47-4afc-b144-881c5fe48eaa\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29470410-bqc58" Jan 12 13:30:00 crc kubenswrapper[4580]: I0112 13:30:00.306170 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b9f196d-5c47-4afc-b144-881c5fe48eaa-config-volume\") pod \"collect-profiles-29470410-bqc58\" (UID: \"0b9f196d-5c47-4afc-b144-881c5fe48eaa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29470410-bqc58" Jan 12 13:30:00 crc kubenswrapper[4580]: I0112 13:30:00.306458 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5t94b\" (UniqueName: \"kubernetes.io/projected/0b9f196d-5c47-4afc-b144-881c5fe48eaa-kube-api-access-5t94b\") pod \"collect-profiles-29470410-bqc58\" (UID: \"0b9f196d-5c47-4afc-b144-881c5fe48eaa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29470410-bqc58" Jan 12 13:30:00 crc kubenswrapper[4580]: I0112 13:30:00.306551 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0b9f196d-5c47-4afc-b144-881c5fe48eaa-secret-volume\") pod \"collect-profiles-29470410-bqc58\" (UID: \"0b9f196d-5c47-4afc-b144-881c5fe48eaa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29470410-bqc58" Jan 12 13:30:00 crc kubenswrapper[4580]: I0112 13:30:00.306904 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b9f196d-5c47-4afc-b144-881c5fe48eaa-config-volume\") pod \"collect-profiles-29470410-bqc58\" (UID: \"0b9f196d-5c47-4afc-b144-881c5fe48eaa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29470410-bqc58" Jan 12 13:30:00 crc kubenswrapper[4580]: I0112 13:30:00.313392 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/0b9f196d-5c47-4afc-b144-881c5fe48eaa-secret-volume\") pod \"collect-profiles-29470410-bqc58\" (UID: \"0b9f196d-5c47-4afc-b144-881c5fe48eaa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29470410-bqc58" Jan 12 13:30:00 crc kubenswrapper[4580]: I0112 13:30:00.321580 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5t94b\" (UniqueName: \"kubernetes.io/projected/0b9f196d-5c47-4afc-b144-881c5fe48eaa-kube-api-access-5t94b\") pod \"collect-profiles-29470410-bqc58\" (UID: \"0b9f196d-5c47-4afc-b144-881c5fe48eaa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29470410-bqc58" Jan 12 13:30:00 crc kubenswrapper[4580]: I0112 13:30:00.449403 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29470410-bqc58" Jan 12 13:30:00 crc kubenswrapper[4580]: I0112 13:30:00.841296 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29470410-bqc58"] Jan 12 13:30:01 crc kubenswrapper[4580]: I0112 13:30:01.081298 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29470410-bqc58" event={"ID":"0b9f196d-5c47-4afc-b144-881c5fe48eaa","Type":"ContainerStarted","Data":"72ec2a447bdf5296916af72814f0d51aee83dbb92f86a93fd31b99d82e871c44"} Jan 12 13:30:01 crc kubenswrapper[4580]: I0112 13:30:01.081570 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29470410-bqc58" event={"ID":"0b9f196d-5c47-4afc-b144-881c5fe48eaa","Type":"ContainerStarted","Data":"dfe212f29198211d5095068061963e17e01aba71f9814c6ab7b506b0023fb809"} Jan 12 13:30:02 crc kubenswrapper[4580]: I0112 13:30:02.089613 4580 generic.go:334] "Generic (PLEG): container finished" podID="0b9f196d-5c47-4afc-b144-881c5fe48eaa" 
containerID="72ec2a447bdf5296916af72814f0d51aee83dbb92f86a93fd31b99d82e871c44" exitCode=0 Jan 12 13:30:02 crc kubenswrapper[4580]: I0112 13:30:02.089715 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29470410-bqc58" event={"ID":"0b9f196d-5c47-4afc-b144-881c5fe48eaa","Type":"ContainerDied","Data":"72ec2a447bdf5296916af72814f0d51aee83dbb92f86a93fd31b99d82e871c44"} Jan 12 13:30:02 crc kubenswrapper[4580]: I0112 13:30:02.357420 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29470410-bqc58" Jan 12 13:30:02 crc kubenswrapper[4580]: I0112 13:30:02.549768 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5t94b\" (UniqueName: \"kubernetes.io/projected/0b9f196d-5c47-4afc-b144-881c5fe48eaa-kube-api-access-5t94b\") pod \"0b9f196d-5c47-4afc-b144-881c5fe48eaa\" (UID: \"0b9f196d-5c47-4afc-b144-881c5fe48eaa\") " Jan 12 13:30:02 crc kubenswrapper[4580]: I0112 13:30:02.549919 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0b9f196d-5c47-4afc-b144-881c5fe48eaa-secret-volume\") pod \"0b9f196d-5c47-4afc-b144-881c5fe48eaa\" (UID: \"0b9f196d-5c47-4afc-b144-881c5fe48eaa\") " Jan 12 13:30:02 crc kubenswrapper[4580]: I0112 13:30:02.550321 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b9f196d-5c47-4afc-b144-881c5fe48eaa-config-volume\") pod \"0b9f196d-5c47-4afc-b144-881c5fe48eaa\" (UID: \"0b9f196d-5c47-4afc-b144-881c5fe48eaa\") " Jan 12 13:30:02 crc kubenswrapper[4580]: I0112 13:30:02.550937 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b9f196d-5c47-4afc-b144-881c5fe48eaa-config-volume" (OuterVolumeSpecName: "config-volume") pod 
"0b9f196d-5c47-4afc-b144-881c5fe48eaa" (UID: "0b9f196d-5c47-4afc-b144-881c5fe48eaa"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:30:02 crc kubenswrapper[4580]: I0112 13:30:02.556084 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b9f196d-5c47-4afc-b144-881c5fe48eaa-kube-api-access-5t94b" (OuterVolumeSpecName: "kube-api-access-5t94b") pod "0b9f196d-5c47-4afc-b144-881c5fe48eaa" (UID: "0b9f196d-5c47-4afc-b144-881c5fe48eaa"). InnerVolumeSpecName "kube-api-access-5t94b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:30:02 crc kubenswrapper[4580]: I0112 13:30:02.556404 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b9f196d-5c47-4afc-b144-881c5fe48eaa-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0b9f196d-5c47-4afc-b144-881c5fe48eaa" (UID: "0b9f196d-5c47-4afc-b144-881c5fe48eaa"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:30:02 crc kubenswrapper[4580]: I0112 13:30:02.652484 4580 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b9f196d-5c47-4afc-b144-881c5fe48eaa-config-volume\") on node \"crc\" DevicePath \"\"" Jan 12 13:30:02 crc kubenswrapper[4580]: I0112 13:30:02.652514 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5t94b\" (UniqueName: \"kubernetes.io/projected/0b9f196d-5c47-4afc-b144-881c5fe48eaa-kube-api-access-5t94b\") on node \"crc\" DevicePath \"\"" Jan 12 13:30:02 crc kubenswrapper[4580]: I0112 13:30:02.652528 4580 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0b9f196d-5c47-4afc-b144-881c5fe48eaa-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 12 13:30:03 crc kubenswrapper[4580]: I0112 13:30:03.101616 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29470410-bqc58" event={"ID":"0b9f196d-5c47-4afc-b144-881c5fe48eaa","Type":"ContainerDied","Data":"dfe212f29198211d5095068061963e17e01aba71f9814c6ab7b506b0023fb809"} Jan 12 13:30:03 crc kubenswrapper[4580]: I0112 13:30:03.101659 4580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfe212f29198211d5095068061963e17e01aba71f9814c6ab7b506b0023fb809" Jan 12 13:30:03 crc kubenswrapper[4580]: I0112 13:30:03.101687 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29470410-bqc58" Jan 12 13:30:16 crc kubenswrapper[4580]: I0112 13:30:16.949689 4580 patch_prober.go:28] interesting pod/machine-config-daemon-hdz6l container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 12 13:30:16 crc kubenswrapper[4580]: I0112 13:30:16.950184 4580 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 12 13:30:46 crc kubenswrapper[4580]: I0112 13:30:46.948868 4580 patch_prober.go:28] interesting pod/machine-config-daemon-hdz6l container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 12 13:30:46 crc kubenswrapper[4580]: I0112 13:30:46.949288 4580 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 12 13:30:47 crc kubenswrapper[4580]: I0112 13:30:47.030064 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-59d8f"] Jan 12 13:30:47 crc kubenswrapper[4580]: I0112 13:30:47.054226 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-79cmh"] Jan 12 13:30:47 crc kubenswrapper[4580]: I0112 13:30:47.062382 4580 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-59d8f"] Jan 12 13:30:47 crc kubenswrapper[4580]: I0112 13:30:47.068020 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-cd54-account-create-update-4xn8h"] Jan 12 13:30:47 crc kubenswrapper[4580]: I0112 13:30:47.076570 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-79cmh"] Jan 12 13:30:47 crc kubenswrapper[4580]: I0112 13:30:47.084618 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-cd54-account-create-update-4xn8h"] Jan 12 13:30:47 crc kubenswrapper[4580]: I0112 13:30:47.290385 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06d41309-dddf-49bc-9512-44a8ffa9de38" path="/var/lib/kubelet/pods/06d41309-dddf-49bc-9512-44a8ffa9de38/volumes" Jan 12 13:30:47 crc kubenswrapper[4580]: I0112 13:30:47.291053 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d042099-c4b5-4bcb-a1bb-3d1765c30e40" path="/var/lib/kubelet/pods/2d042099-c4b5-4bcb-a1bb-3d1765c30e40/volumes" Jan 12 13:30:47 crc kubenswrapper[4580]: I0112 13:30:47.291561 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58ee3c58-c9e2-4be9-83d6-12c6d69801f9" path="/var/lib/kubelet/pods/58ee3c58-c9e2-4be9-83d6-12c6d69801f9/volumes" Jan 12 13:30:48 crc kubenswrapper[4580]: I0112 13:30:48.021217 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-mcg5w"] Jan 12 13:30:48 crc kubenswrapper[4580]: I0112 13:30:48.028997 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-4284-account-create-update-gs8hr"] Jan 12 13:30:48 crc kubenswrapper[4580]: I0112 13:30:48.034937 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-4a4b-account-create-update-fmvkt"] Jan 12 13:30:48 crc kubenswrapper[4580]: I0112 13:30:48.040150 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/placement-4284-account-create-update-gs8hr"] Jan 12 13:30:48 crc kubenswrapper[4580]: I0112 13:30:48.045171 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-4a4b-account-create-update-fmvkt"] Jan 12 13:30:48 crc kubenswrapper[4580]: I0112 13:30:48.050173 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-mcg5w"] Jan 12 13:30:49 crc kubenswrapper[4580]: I0112 13:30:49.288980 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54bb983d-7347-4bae-a852-d70d0ff091f4" path="/var/lib/kubelet/pods/54bb983d-7347-4bae-a852-d70d0ff091f4/volumes" Jan 12 13:30:49 crc kubenswrapper[4580]: I0112 13:30:49.289769 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90c2604d-2aab-4817-b1a3-f1ce2921fd7c" path="/var/lib/kubelet/pods/90c2604d-2aab-4817-b1a3-f1ce2921fd7c/volumes" Jan 12 13:30:49 crc kubenswrapper[4580]: I0112 13:30:49.290459 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0ecccf2-f4b7-4606-afb9-b50486e02a0b" path="/var/lib/kubelet/pods/b0ecccf2-f4b7-4606-afb9-b50486e02a0b/volumes" Jan 12 13:30:54 crc kubenswrapper[4580]: I0112 13:30:54.892423 4580 scope.go:117] "RemoveContainer" containerID="207c701509200b9c85ea0ca2ffd4db8c0d71a5cf7b00c55cb4b907e136a1abcf" Jan 12 13:30:54 crc kubenswrapper[4580]: I0112 13:30:54.921412 4580 scope.go:117] "RemoveContainer" containerID="1bd5a121d72b5ffa6a25064675ff9340cc9481f91501bc34faaa7f3b2a5c401d" Jan 12 13:30:54 crc kubenswrapper[4580]: I0112 13:30:54.947958 4580 scope.go:117] "RemoveContainer" containerID="b42eb21de98096d3a22670005405ed03a64631745c03491e9f601ec593f77c69" Jan 12 13:30:54 crc kubenswrapper[4580]: I0112 13:30:54.973980 4580 scope.go:117] "RemoveContainer" containerID="3919f075e75d2f7e909386a5d84e06b14c879361613b21c989b655f76148e499" Jan 12 13:30:55 crc kubenswrapper[4580]: I0112 13:30:55.008244 4580 scope.go:117] "RemoveContainer" 
containerID="59e7e97714490c488dea268552d281ea855bb81ba1ca35de6faae31ed62d25ff" Jan 12 13:30:55 crc kubenswrapper[4580]: I0112 13:30:55.039626 4580 scope.go:117] "RemoveContainer" containerID="e2f9ea3c2e46a9662b63d40aed632d3bcd606808feecc35ba320037099b24eb7" Jan 12 13:30:58 crc kubenswrapper[4580]: I0112 13:30:58.473596 4580 generic.go:334] "Generic (PLEG): container finished" podID="3dce5050-a090-4782-a068-efafd359455a" containerID="2e0ccdf091b36ea43ae6df1d3d7065954457b01745e663ec688447471bfbddd4" exitCode=0 Jan 12 13:30:58 crc kubenswrapper[4580]: I0112 13:30:58.473685 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8krps" event={"ID":"3dce5050-a090-4782-a068-efafd359455a","Type":"ContainerDied","Data":"2e0ccdf091b36ea43ae6df1d3d7065954457b01745e663ec688447471bfbddd4"} Jan 12 13:30:59 crc kubenswrapper[4580]: I0112 13:30:59.023942 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-hhffz"] Jan 12 13:30:59 crc kubenswrapper[4580]: I0112 13:30:59.029957 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-hhffz"] Jan 12 13:30:59 crc kubenswrapper[4580]: I0112 13:30:59.289456 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eeb87416-55e3-414c-a7d2-8248c85883ef" path="/var/lib/kubelet/pods/eeb87416-55e3-414c-a7d2-8248c85883ef/volumes" Jan 12 13:30:59 crc kubenswrapper[4580]: I0112 13:30:59.811568 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8krps" Jan 12 13:30:59 crc kubenswrapper[4580]: I0112 13:30:59.980687 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3dce5050-a090-4782-a068-efafd359455a-inventory\") pod \"3dce5050-a090-4782-a068-efafd359455a\" (UID: \"3dce5050-a090-4782-a068-efafd359455a\") " Jan 12 13:30:59 crc kubenswrapper[4580]: I0112 13:30:59.980745 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrc4p\" (UniqueName: \"kubernetes.io/projected/3dce5050-a090-4782-a068-efafd359455a-kube-api-access-lrc4p\") pod \"3dce5050-a090-4782-a068-efafd359455a\" (UID: \"3dce5050-a090-4782-a068-efafd359455a\") " Jan 12 13:30:59 crc kubenswrapper[4580]: I0112 13:30:59.980852 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3dce5050-a090-4782-a068-efafd359455a-ssh-key-openstack-edpm-ipam\") pod \"3dce5050-a090-4782-a068-efafd359455a\" (UID: \"3dce5050-a090-4782-a068-efafd359455a\") " Jan 12 13:30:59 crc kubenswrapper[4580]: I0112 13:30:59.987721 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3dce5050-a090-4782-a068-efafd359455a-kube-api-access-lrc4p" (OuterVolumeSpecName: "kube-api-access-lrc4p") pod "3dce5050-a090-4782-a068-efafd359455a" (UID: "3dce5050-a090-4782-a068-efafd359455a"). InnerVolumeSpecName "kube-api-access-lrc4p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:31:00 crc kubenswrapper[4580]: I0112 13:31:00.004046 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3dce5050-a090-4782-a068-efafd359455a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3dce5050-a090-4782-a068-efafd359455a" (UID: "3dce5050-a090-4782-a068-efafd359455a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:31:00 crc kubenswrapper[4580]: I0112 13:31:00.005490 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3dce5050-a090-4782-a068-efafd359455a-inventory" (OuterVolumeSpecName: "inventory") pod "3dce5050-a090-4782-a068-efafd359455a" (UID: "3dce5050-a090-4782-a068-efafd359455a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:31:00 crc kubenswrapper[4580]: I0112 13:31:00.084433 4580 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3dce5050-a090-4782-a068-efafd359455a-inventory\") on node \"crc\" DevicePath \"\"" Jan 12 13:31:00 crc kubenswrapper[4580]: I0112 13:31:00.084471 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrc4p\" (UniqueName: \"kubernetes.io/projected/3dce5050-a090-4782-a068-efafd359455a-kube-api-access-lrc4p\") on node \"crc\" DevicePath \"\"" Jan 12 13:31:00 crc kubenswrapper[4580]: I0112 13:31:00.084484 4580 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3dce5050-a090-4782-a068-efafd359455a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 12 13:31:00 crc kubenswrapper[4580]: I0112 13:31:00.494198 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8krps" 
event={"ID":"3dce5050-a090-4782-a068-efafd359455a","Type":"ContainerDied","Data":"7e92c0e43c3b8d140299906be6988106e1f3702a4a760a84651392394281662b"} Jan 12 13:31:00 crc kubenswrapper[4580]: I0112 13:31:00.494262 4580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e92c0e43c3b8d140299906be6988106e1f3702a4a760a84651392394281662b" Jan 12 13:31:00 crc kubenswrapper[4580]: I0112 13:31:00.494363 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8krps" Jan 12 13:31:00 crc kubenswrapper[4580]: I0112 13:31:00.563424 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5hkmr"] Jan 12 13:31:00 crc kubenswrapper[4580]: E0112 13:31:00.563771 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dce5050-a090-4782-a068-efafd359455a" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 12 13:31:00 crc kubenswrapper[4580]: I0112 13:31:00.563788 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dce5050-a090-4782-a068-efafd359455a" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 12 13:31:00 crc kubenswrapper[4580]: E0112 13:31:00.563819 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b9f196d-5c47-4afc-b144-881c5fe48eaa" containerName="collect-profiles" Jan 12 13:31:00 crc kubenswrapper[4580]: I0112 13:31:00.563825 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b9f196d-5c47-4afc-b144-881c5fe48eaa" containerName="collect-profiles" Jan 12 13:31:00 crc kubenswrapper[4580]: I0112 13:31:00.563964 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="3dce5050-a090-4782-a068-efafd359455a" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 12 13:31:00 crc kubenswrapper[4580]: I0112 13:31:00.563982 4580 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="0b9f196d-5c47-4afc-b144-881c5fe48eaa" containerName="collect-profiles" Jan 12 13:31:00 crc kubenswrapper[4580]: I0112 13:31:00.564537 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5hkmr" Jan 12 13:31:00 crc kubenswrapper[4580]: I0112 13:31:00.566583 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 12 13:31:00 crc kubenswrapper[4580]: I0112 13:31:00.566770 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 12 13:31:00 crc kubenswrapper[4580]: I0112 13:31:00.566906 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hm8xh" Jan 12 13:31:00 crc kubenswrapper[4580]: I0112 13:31:00.567015 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 12 13:31:00 crc kubenswrapper[4580]: I0112 13:31:00.571627 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5hkmr"] Jan 12 13:31:00 crc kubenswrapper[4580]: I0112 13:31:00.592981 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b647e7dc-a5cd-4e7e-a5fe-744a53b4c3e9-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5hkmr\" (UID: \"b647e7dc-a5cd-4e7e-a5fe-744a53b4c3e9\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5hkmr" Jan 12 13:31:00 crc kubenswrapper[4580]: I0112 13:31:00.593124 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnwbr\" (UniqueName: \"kubernetes.io/projected/b647e7dc-a5cd-4e7e-a5fe-744a53b4c3e9-kube-api-access-mnwbr\") 
pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5hkmr\" (UID: \"b647e7dc-a5cd-4e7e-a5fe-744a53b4c3e9\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5hkmr" Jan 12 13:31:00 crc kubenswrapper[4580]: I0112 13:31:00.593162 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b647e7dc-a5cd-4e7e-a5fe-744a53b4c3e9-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5hkmr\" (UID: \"b647e7dc-a5cd-4e7e-a5fe-744a53b4c3e9\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5hkmr" Jan 12 13:31:00 crc kubenswrapper[4580]: I0112 13:31:00.694138 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnwbr\" (UniqueName: \"kubernetes.io/projected/b647e7dc-a5cd-4e7e-a5fe-744a53b4c3e9-kube-api-access-mnwbr\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5hkmr\" (UID: \"b647e7dc-a5cd-4e7e-a5fe-744a53b4c3e9\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5hkmr" Jan 12 13:31:00 crc kubenswrapper[4580]: I0112 13:31:00.694196 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b647e7dc-a5cd-4e7e-a5fe-744a53b4c3e9-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5hkmr\" (UID: \"b647e7dc-a5cd-4e7e-a5fe-744a53b4c3e9\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5hkmr" Jan 12 13:31:00 crc kubenswrapper[4580]: I0112 13:31:00.694245 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b647e7dc-a5cd-4e7e-a5fe-744a53b4c3e9-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5hkmr\" (UID: \"b647e7dc-a5cd-4e7e-a5fe-744a53b4c3e9\") " 
pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5hkmr" Jan 12 13:31:00 crc kubenswrapper[4580]: I0112 13:31:00.700054 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b647e7dc-a5cd-4e7e-a5fe-744a53b4c3e9-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5hkmr\" (UID: \"b647e7dc-a5cd-4e7e-a5fe-744a53b4c3e9\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5hkmr" Jan 12 13:31:00 crc kubenswrapper[4580]: I0112 13:31:00.701005 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b647e7dc-a5cd-4e7e-a5fe-744a53b4c3e9-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5hkmr\" (UID: \"b647e7dc-a5cd-4e7e-a5fe-744a53b4c3e9\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5hkmr" Jan 12 13:31:00 crc kubenswrapper[4580]: I0112 13:31:00.710060 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnwbr\" (UniqueName: \"kubernetes.io/projected/b647e7dc-a5cd-4e7e-a5fe-744a53b4c3e9-kube-api-access-mnwbr\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5hkmr\" (UID: \"b647e7dc-a5cd-4e7e-a5fe-744a53b4c3e9\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5hkmr" Jan 12 13:31:00 crc kubenswrapper[4580]: I0112 13:31:00.880016 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5hkmr" Jan 12 13:31:01 crc kubenswrapper[4580]: I0112 13:31:01.320531 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5hkmr"] Jan 12 13:31:01 crc kubenswrapper[4580]: I0112 13:31:01.321604 4580 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 12 13:31:01 crc kubenswrapper[4580]: I0112 13:31:01.503517 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5hkmr" event={"ID":"b647e7dc-a5cd-4e7e-a5fe-744a53b4c3e9","Type":"ContainerStarted","Data":"966bca05206abc2da14b0d929cdbc4bc008b5835195217ea89240f70f2f477d9"} Jan 12 13:31:03 crc kubenswrapper[4580]: I0112 13:31:03.523117 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5hkmr" event={"ID":"b647e7dc-a5cd-4e7e-a5fe-744a53b4c3e9","Type":"ContainerStarted","Data":"087f106302b789ca90e73c9d5768682df8c36e9b83d26e6e5ed1f25a92e7727b"} Jan 12 13:31:03 crc kubenswrapper[4580]: I0112 13:31:03.542832 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5hkmr" podStartSLOduration=2.314358785 podStartE2EDuration="3.542813779s" podCreationTimestamp="2026-01-12 13:31:00 +0000 UTC" firstStartedPulling="2026-01-12 13:31:01.321341636 +0000 UTC m=+1460.365560327" lastFinishedPulling="2026-01-12 13:31:02.54979663 +0000 UTC m=+1461.594015321" observedRunningTime="2026-01-12 13:31:03.537214285 +0000 UTC m=+1462.581432975" watchObservedRunningTime="2026-01-12 13:31:03.542813779 +0000 UTC m=+1462.587032469" Jan 12 13:31:05 crc kubenswrapper[4580]: I0112 13:31:05.025544 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-wxmgx"] Jan 12 13:31:05 crc 
kubenswrapper[4580]: I0112 13:31:05.031584 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-wxmgx"] Jan 12 13:31:05 crc kubenswrapper[4580]: I0112 13:31:05.292167 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1b2da09-96cd-45b9-b3bd-720a8e5d354b" path="/var/lib/kubelet/pods/d1b2da09-96cd-45b9-b3bd-720a8e5d354b/volumes" Jan 12 13:31:15 crc kubenswrapper[4580]: I0112 13:31:15.033712 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-c9e5-account-create-update-vj7p5"] Jan 12 13:31:15 crc kubenswrapper[4580]: I0112 13:31:15.040628 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-c9e5-account-create-update-vj7p5"] Jan 12 13:31:15 crc kubenswrapper[4580]: I0112 13:31:15.290082 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23c7d555-fb69-4ebf-a44c-4132a0b4f3ee" path="/var/lib/kubelet/pods/23c7d555-fb69-4ebf-a44c-4132a0b4f3ee/volumes" Jan 12 13:31:15 crc kubenswrapper[4580]: I0112 13:31:15.435773 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-p7ff2"] Jan 12 13:31:15 crc kubenswrapper[4580]: I0112 13:31:15.438207 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p7ff2" Jan 12 13:31:15 crc kubenswrapper[4580]: I0112 13:31:15.443583 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p7ff2"] Jan 12 13:31:15 crc kubenswrapper[4580]: I0112 13:31:15.448224 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2df5077b-a44c-4d26-bfa3-8da7917be387-utilities\") pod \"redhat-marketplace-p7ff2\" (UID: \"2df5077b-a44c-4d26-bfa3-8da7917be387\") " pod="openshift-marketplace/redhat-marketplace-p7ff2" Jan 12 13:31:15 crc kubenswrapper[4580]: I0112 13:31:15.448270 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2df5077b-a44c-4d26-bfa3-8da7917be387-catalog-content\") pod \"redhat-marketplace-p7ff2\" (UID: \"2df5077b-a44c-4d26-bfa3-8da7917be387\") " pod="openshift-marketplace/redhat-marketplace-p7ff2" Jan 12 13:31:15 crc kubenswrapper[4580]: I0112 13:31:15.448341 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnzln\" (UniqueName: \"kubernetes.io/projected/2df5077b-a44c-4d26-bfa3-8da7917be387-kube-api-access-pnzln\") pod \"redhat-marketplace-p7ff2\" (UID: \"2df5077b-a44c-4d26-bfa3-8da7917be387\") " pod="openshift-marketplace/redhat-marketplace-p7ff2" Jan 12 13:31:15 crc kubenswrapper[4580]: I0112 13:31:15.550022 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2df5077b-a44c-4d26-bfa3-8da7917be387-utilities\") pod \"redhat-marketplace-p7ff2\" (UID: \"2df5077b-a44c-4d26-bfa3-8da7917be387\") " pod="openshift-marketplace/redhat-marketplace-p7ff2" Jan 12 13:31:15 crc kubenswrapper[4580]: I0112 13:31:15.550087 4580 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2df5077b-a44c-4d26-bfa3-8da7917be387-catalog-content\") pod \"redhat-marketplace-p7ff2\" (UID: \"2df5077b-a44c-4d26-bfa3-8da7917be387\") " pod="openshift-marketplace/redhat-marketplace-p7ff2" Jan 12 13:31:15 crc kubenswrapper[4580]: I0112 13:31:15.550241 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnzln\" (UniqueName: \"kubernetes.io/projected/2df5077b-a44c-4d26-bfa3-8da7917be387-kube-api-access-pnzln\") pod \"redhat-marketplace-p7ff2\" (UID: \"2df5077b-a44c-4d26-bfa3-8da7917be387\") " pod="openshift-marketplace/redhat-marketplace-p7ff2" Jan 12 13:31:15 crc kubenswrapper[4580]: I0112 13:31:15.550531 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2df5077b-a44c-4d26-bfa3-8da7917be387-utilities\") pod \"redhat-marketplace-p7ff2\" (UID: \"2df5077b-a44c-4d26-bfa3-8da7917be387\") " pod="openshift-marketplace/redhat-marketplace-p7ff2" Jan 12 13:31:15 crc kubenswrapper[4580]: I0112 13:31:15.550574 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2df5077b-a44c-4d26-bfa3-8da7917be387-catalog-content\") pod \"redhat-marketplace-p7ff2\" (UID: \"2df5077b-a44c-4d26-bfa3-8da7917be387\") " pod="openshift-marketplace/redhat-marketplace-p7ff2" Jan 12 13:31:15 crc kubenswrapper[4580]: I0112 13:31:15.568255 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnzln\" (UniqueName: \"kubernetes.io/projected/2df5077b-a44c-4d26-bfa3-8da7917be387-kube-api-access-pnzln\") pod \"redhat-marketplace-p7ff2\" (UID: \"2df5077b-a44c-4d26-bfa3-8da7917be387\") " pod="openshift-marketplace/redhat-marketplace-p7ff2" Jan 12 13:31:15 crc kubenswrapper[4580]: I0112 13:31:15.755056 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p7ff2" Jan 12 13:31:16 crc kubenswrapper[4580]: I0112 13:31:16.027982 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-r4xf5"] Jan 12 13:31:16 crc kubenswrapper[4580]: I0112 13:31:16.041158 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-pgk6t"] Jan 12 13:31:16 crc kubenswrapper[4580]: I0112 13:31:16.047761 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-bsrgf"] Jan 12 13:31:16 crc kubenswrapper[4580]: I0112 13:31:16.052403 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-r4xf5"] Jan 12 13:31:16 crc kubenswrapper[4580]: I0112 13:31:16.057260 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-pgk6t"] Jan 12 13:31:16 crc kubenswrapper[4580]: I0112 13:31:16.062075 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-8712-account-create-update-6c2pj"] Jan 12 13:31:16 crc kubenswrapper[4580]: I0112 13:31:16.071041 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-b1c4-account-create-update-sxf66"] Jan 12 13:31:16 crc kubenswrapper[4580]: I0112 13:31:16.077339 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-b1c4-account-create-update-sxf66"] Jan 12 13:31:16 crc kubenswrapper[4580]: I0112 13:31:16.082428 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-8712-account-create-update-6c2pj"] Jan 12 13:31:16 crc kubenswrapper[4580]: I0112 13:31:16.087584 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-bsrgf"] Jan 12 13:31:16 crc kubenswrapper[4580]: I0112 13:31:16.169057 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p7ff2"] Jan 12 13:31:16 crc kubenswrapper[4580]: I0112 13:31:16.628716 4580 generic.go:334] 
"Generic (PLEG): container finished" podID="2df5077b-a44c-4d26-bfa3-8da7917be387" containerID="a48d9c96694b208dbc6dda2e6d9d55a6801e3ed166c763b821d737f9a195889a" exitCode=0 Jan 12 13:31:16 crc kubenswrapper[4580]: I0112 13:31:16.628772 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p7ff2" event={"ID":"2df5077b-a44c-4d26-bfa3-8da7917be387","Type":"ContainerDied","Data":"a48d9c96694b208dbc6dda2e6d9d55a6801e3ed166c763b821d737f9a195889a"} Jan 12 13:31:16 crc kubenswrapper[4580]: I0112 13:31:16.629255 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p7ff2" event={"ID":"2df5077b-a44c-4d26-bfa3-8da7917be387","Type":"ContainerStarted","Data":"865c1219a201b4beebddec712b024cacc331789cb4d2d340bb5251d84e6ee6c0"} Jan 12 13:31:16 crc kubenswrapper[4580]: I0112 13:31:16.949559 4580 patch_prober.go:28] interesting pod/machine-config-daemon-hdz6l container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 12 13:31:16 crc kubenswrapper[4580]: I0112 13:31:16.949615 4580 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 12 13:31:16 crc kubenswrapper[4580]: I0112 13:31:16.949659 4580 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" Jan 12 13:31:16 crc kubenswrapper[4580]: I0112 13:31:16.950176 4580 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"e7c14f11ee163df37acee7c0a47ee3b8e21b57ddc7953c9ff079f4afda394d2b"} pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 12 13:31:16 crc kubenswrapper[4580]: I0112 13:31:16.950233 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" containerName="machine-config-daemon" containerID="cri-o://e7c14f11ee163df37acee7c0a47ee3b8e21b57ddc7953c9ff079f4afda394d2b" gracePeriod=600 Jan 12 13:31:17 crc kubenswrapper[4580]: I0112 13:31:17.292175 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c601173-b4e4-482a-a6e7-c5d7ff359a05" path="/var/lib/kubelet/pods/1c601173-b4e4-482a-a6e7-c5d7ff359a05/volumes" Jan 12 13:31:17 crc kubenswrapper[4580]: I0112 13:31:17.292977 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d42ba5a-3d41-4d57-9c38-c9a115ee139d" path="/var/lib/kubelet/pods/5d42ba5a-3d41-4d57-9c38-c9a115ee139d/volumes" Jan 12 13:31:17 crc kubenswrapper[4580]: I0112 13:31:17.293524 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ac9c176-3792-4485-b000-a8cfc4c53f21" path="/var/lib/kubelet/pods/7ac9c176-3792-4485-b000-a8cfc4c53f21/volumes" Jan 12 13:31:17 crc kubenswrapper[4580]: I0112 13:31:17.294043 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7aece380-6756-4ba8-8628-1cded9cd4005" path="/var/lib/kubelet/pods/7aece380-6756-4ba8-8628-1cded9cd4005/volumes" Jan 12 13:31:17 crc kubenswrapper[4580]: I0112 13:31:17.295235 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="899c8c6d-d46b-4694-97c7-35a2f3e9ff45" path="/var/lib/kubelet/pods/899c8c6d-d46b-4694-97c7-35a2f3e9ff45/volumes" Jan 12 13:31:17 crc kubenswrapper[4580]: I0112 13:31:17.641174 4580 generic.go:334] 
"Generic (PLEG): container finished" podID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" containerID="e7c14f11ee163df37acee7c0a47ee3b8e21b57ddc7953c9ff079f4afda394d2b" exitCode=0 Jan 12 13:31:17 crc kubenswrapper[4580]: I0112 13:31:17.641270 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" event={"ID":"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b","Type":"ContainerDied","Data":"e7c14f11ee163df37acee7c0a47ee3b8e21b57ddc7953c9ff079f4afda394d2b"} Jan 12 13:31:17 crc kubenswrapper[4580]: I0112 13:31:17.641453 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" event={"ID":"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b","Type":"ContainerStarted","Data":"e6f7529ae288176e91bf12545260cc5495f693307c412bb9a076090d438a7eb1"} Jan 12 13:31:17 crc kubenswrapper[4580]: I0112 13:31:17.641496 4580 scope.go:117] "RemoveContainer" containerID="0804525f520200773e09490adee4c80bb3967d1eb56f3e87d1a77a748cd87b06" Jan 12 13:31:18 crc kubenswrapper[4580]: I0112 13:31:18.650898 4580 generic.go:334] "Generic (PLEG): container finished" podID="2df5077b-a44c-4d26-bfa3-8da7917be387" containerID="ac1e1a6ab89d40be3ec2b9f03d470b542329e6250970cd0381060dc74eceda0f" exitCode=0 Jan 12 13:31:18 crc kubenswrapper[4580]: I0112 13:31:18.651025 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p7ff2" event={"ID":"2df5077b-a44c-4d26-bfa3-8da7917be387","Type":"ContainerDied","Data":"ac1e1a6ab89d40be3ec2b9f03d470b542329e6250970cd0381060dc74eceda0f"} Jan 12 13:31:20 crc kubenswrapper[4580]: I0112 13:31:20.674583 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p7ff2" event={"ID":"2df5077b-a44c-4d26-bfa3-8da7917be387","Type":"ContainerStarted","Data":"b3c4f747ee8674a0f16ec3ba3706e03bcb997faf116a48e68056930b6eddc6c6"} Jan 12 13:31:20 crc kubenswrapper[4580]: I0112 13:31:20.691261 4580 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-p7ff2" podStartSLOduration=2.683476094 podStartE2EDuration="5.691244766s" podCreationTimestamp="2026-01-12 13:31:15 +0000 UTC" firstStartedPulling="2026-01-12 13:31:16.631857945 +0000 UTC m=+1475.676076635" lastFinishedPulling="2026-01-12 13:31:19.639626617 +0000 UTC m=+1478.683845307" observedRunningTime="2026-01-12 13:31:20.689701666 +0000 UTC m=+1479.733920356" watchObservedRunningTime="2026-01-12 13:31:20.691244766 +0000 UTC m=+1479.735463456" Jan 12 13:31:21 crc kubenswrapper[4580]: I0112 13:31:21.026372 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-x4v84"] Jan 12 13:31:21 crc kubenswrapper[4580]: I0112 13:31:21.032703 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-x4v84"] Jan 12 13:31:21 crc kubenswrapper[4580]: I0112 13:31:21.292209 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0609dc3a-35c3-4be4-8625-aad80295f0ea" path="/var/lib/kubelet/pods/0609dc3a-35c3-4be4-8625-aad80295f0ea/volumes" Jan 12 13:31:25 crc kubenswrapper[4580]: I0112 13:31:25.756001 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-p7ff2" Jan 12 13:31:25 crc kubenswrapper[4580]: I0112 13:31:25.756270 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-p7ff2" Jan 12 13:31:25 crc kubenswrapper[4580]: I0112 13:31:25.796547 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-p7ff2" Jan 12 13:31:26 crc kubenswrapper[4580]: I0112 13:31:26.762254 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-p7ff2" Jan 12 13:31:26 crc kubenswrapper[4580]: I0112 13:31:26.804365 4580 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/redhat-marketplace-p7ff2"] Jan 12 13:31:28 crc kubenswrapper[4580]: I0112 13:31:28.743528 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-p7ff2" podUID="2df5077b-a44c-4d26-bfa3-8da7917be387" containerName="registry-server" containerID="cri-o://b3c4f747ee8674a0f16ec3ba3706e03bcb997faf116a48e68056930b6eddc6c6" gracePeriod=2 Jan 12 13:31:29 crc kubenswrapper[4580]: I0112 13:31:29.136402 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p7ff2" Jan 12 13:31:29 crc kubenswrapper[4580]: I0112 13:31:29.301127 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2df5077b-a44c-4d26-bfa3-8da7917be387-catalog-content\") pod \"2df5077b-a44c-4d26-bfa3-8da7917be387\" (UID: \"2df5077b-a44c-4d26-bfa3-8da7917be387\") " Jan 12 13:31:29 crc kubenswrapper[4580]: I0112 13:31:29.301533 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2df5077b-a44c-4d26-bfa3-8da7917be387-utilities\") pod \"2df5077b-a44c-4d26-bfa3-8da7917be387\" (UID: \"2df5077b-a44c-4d26-bfa3-8da7917be387\") " Jan 12 13:31:29 crc kubenswrapper[4580]: I0112 13:31:29.301676 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnzln\" (UniqueName: \"kubernetes.io/projected/2df5077b-a44c-4d26-bfa3-8da7917be387-kube-api-access-pnzln\") pod \"2df5077b-a44c-4d26-bfa3-8da7917be387\" (UID: \"2df5077b-a44c-4d26-bfa3-8da7917be387\") " Jan 12 13:31:29 crc kubenswrapper[4580]: I0112 13:31:29.302092 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2df5077b-a44c-4d26-bfa3-8da7917be387-utilities" (OuterVolumeSpecName: "utilities") pod "2df5077b-a44c-4d26-bfa3-8da7917be387" 
(UID: "2df5077b-a44c-4d26-bfa3-8da7917be387"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 12 13:31:29 crc kubenswrapper[4580]: I0112 13:31:29.302537 4580 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2df5077b-a44c-4d26-bfa3-8da7917be387-utilities\") on node \"crc\" DevicePath \"\"" Jan 12 13:31:29 crc kubenswrapper[4580]: I0112 13:31:29.310392 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2df5077b-a44c-4d26-bfa3-8da7917be387-kube-api-access-pnzln" (OuterVolumeSpecName: "kube-api-access-pnzln") pod "2df5077b-a44c-4d26-bfa3-8da7917be387" (UID: "2df5077b-a44c-4d26-bfa3-8da7917be387"). InnerVolumeSpecName "kube-api-access-pnzln". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:31:29 crc kubenswrapper[4580]: I0112 13:31:29.319791 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2df5077b-a44c-4d26-bfa3-8da7917be387-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2df5077b-a44c-4d26-bfa3-8da7917be387" (UID: "2df5077b-a44c-4d26-bfa3-8da7917be387"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 12 13:31:29 crc kubenswrapper[4580]: I0112 13:31:29.406371 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnzln\" (UniqueName: \"kubernetes.io/projected/2df5077b-a44c-4d26-bfa3-8da7917be387-kube-api-access-pnzln\") on node \"crc\" DevicePath \"\"" Jan 12 13:31:29 crc kubenswrapper[4580]: I0112 13:31:29.406403 4580 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2df5077b-a44c-4d26-bfa3-8da7917be387-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 12 13:31:29 crc kubenswrapper[4580]: I0112 13:31:29.753366 4580 generic.go:334] "Generic (PLEG): container finished" podID="2df5077b-a44c-4d26-bfa3-8da7917be387" containerID="b3c4f747ee8674a0f16ec3ba3706e03bcb997faf116a48e68056930b6eddc6c6" exitCode=0 Jan 12 13:31:29 crc kubenswrapper[4580]: I0112 13:31:29.753416 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p7ff2" event={"ID":"2df5077b-a44c-4d26-bfa3-8da7917be387","Type":"ContainerDied","Data":"b3c4f747ee8674a0f16ec3ba3706e03bcb997faf116a48e68056930b6eddc6c6"} Jan 12 13:31:29 crc kubenswrapper[4580]: I0112 13:31:29.753452 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p7ff2" event={"ID":"2df5077b-a44c-4d26-bfa3-8da7917be387","Type":"ContainerDied","Data":"865c1219a201b4beebddec712b024cacc331789cb4d2d340bb5251d84e6ee6c0"} Jan 12 13:31:29 crc kubenswrapper[4580]: I0112 13:31:29.753469 4580 scope.go:117] "RemoveContainer" containerID="b3c4f747ee8674a0f16ec3ba3706e03bcb997faf116a48e68056930b6eddc6c6" Jan 12 13:31:29 crc kubenswrapper[4580]: I0112 13:31:29.753963 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p7ff2" Jan 12 13:31:29 crc kubenswrapper[4580]: I0112 13:31:29.776850 4580 scope.go:117] "RemoveContainer" containerID="ac1e1a6ab89d40be3ec2b9f03d470b542329e6250970cd0381060dc74eceda0f" Jan 12 13:31:29 crc kubenswrapper[4580]: I0112 13:31:29.783779 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p7ff2"] Jan 12 13:31:29 crc kubenswrapper[4580]: I0112 13:31:29.789345 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-p7ff2"] Jan 12 13:31:29 crc kubenswrapper[4580]: I0112 13:31:29.809325 4580 scope.go:117] "RemoveContainer" containerID="a48d9c96694b208dbc6dda2e6d9d55a6801e3ed166c763b821d737f9a195889a" Jan 12 13:31:29 crc kubenswrapper[4580]: I0112 13:31:29.830424 4580 scope.go:117] "RemoveContainer" containerID="b3c4f747ee8674a0f16ec3ba3706e03bcb997faf116a48e68056930b6eddc6c6" Jan 12 13:31:29 crc kubenswrapper[4580]: E0112 13:31:29.830717 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3c4f747ee8674a0f16ec3ba3706e03bcb997faf116a48e68056930b6eddc6c6\": container with ID starting with b3c4f747ee8674a0f16ec3ba3706e03bcb997faf116a48e68056930b6eddc6c6 not found: ID does not exist" containerID="b3c4f747ee8674a0f16ec3ba3706e03bcb997faf116a48e68056930b6eddc6c6" Jan 12 13:31:29 crc kubenswrapper[4580]: I0112 13:31:29.830749 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3c4f747ee8674a0f16ec3ba3706e03bcb997faf116a48e68056930b6eddc6c6"} err="failed to get container status \"b3c4f747ee8674a0f16ec3ba3706e03bcb997faf116a48e68056930b6eddc6c6\": rpc error: code = NotFound desc = could not find container \"b3c4f747ee8674a0f16ec3ba3706e03bcb997faf116a48e68056930b6eddc6c6\": container with ID starting with b3c4f747ee8674a0f16ec3ba3706e03bcb997faf116a48e68056930b6eddc6c6 not found: 
ID does not exist" Jan 12 13:31:29 crc kubenswrapper[4580]: I0112 13:31:29.830774 4580 scope.go:117] "RemoveContainer" containerID="ac1e1a6ab89d40be3ec2b9f03d470b542329e6250970cd0381060dc74eceda0f" Jan 12 13:31:29 crc kubenswrapper[4580]: E0112 13:31:29.830987 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac1e1a6ab89d40be3ec2b9f03d470b542329e6250970cd0381060dc74eceda0f\": container with ID starting with ac1e1a6ab89d40be3ec2b9f03d470b542329e6250970cd0381060dc74eceda0f not found: ID does not exist" containerID="ac1e1a6ab89d40be3ec2b9f03d470b542329e6250970cd0381060dc74eceda0f" Jan 12 13:31:29 crc kubenswrapper[4580]: I0112 13:31:29.831010 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac1e1a6ab89d40be3ec2b9f03d470b542329e6250970cd0381060dc74eceda0f"} err="failed to get container status \"ac1e1a6ab89d40be3ec2b9f03d470b542329e6250970cd0381060dc74eceda0f\": rpc error: code = NotFound desc = could not find container \"ac1e1a6ab89d40be3ec2b9f03d470b542329e6250970cd0381060dc74eceda0f\": container with ID starting with ac1e1a6ab89d40be3ec2b9f03d470b542329e6250970cd0381060dc74eceda0f not found: ID does not exist" Jan 12 13:31:29 crc kubenswrapper[4580]: I0112 13:31:29.831024 4580 scope.go:117] "RemoveContainer" containerID="a48d9c96694b208dbc6dda2e6d9d55a6801e3ed166c763b821d737f9a195889a" Jan 12 13:31:29 crc kubenswrapper[4580]: E0112 13:31:29.831233 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a48d9c96694b208dbc6dda2e6d9d55a6801e3ed166c763b821d737f9a195889a\": container with ID starting with a48d9c96694b208dbc6dda2e6d9d55a6801e3ed166c763b821d737f9a195889a not found: ID does not exist" containerID="a48d9c96694b208dbc6dda2e6d9d55a6801e3ed166c763b821d737f9a195889a" Jan 12 13:31:29 crc kubenswrapper[4580]: I0112 13:31:29.831256 4580 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a48d9c96694b208dbc6dda2e6d9d55a6801e3ed166c763b821d737f9a195889a"} err="failed to get container status \"a48d9c96694b208dbc6dda2e6d9d55a6801e3ed166c763b821d737f9a195889a\": rpc error: code = NotFound desc = could not find container \"a48d9c96694b208dbc6dda2e6d9d55a6801e3ed166c763b821d737f9a195889a\": container with ID starting with a48d9c96694b208dbc6dda2e6d9d55a6801e3ed166c763b821d737f9a195889a not found: ID does not exist" Jan 12 13:31:31 crc kubenswrapper[4580]: I0112 13:31:31.290079 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2df5077b-a44c-4d26-bfa3-8da7917be387" path="/var/lib/kubelet/pods/2df5077b-a44c-4d26-bfa3-8da7917be387/volumes" Jan 12 13:31:47 crc kubenswrapper[4580]: I0112 13:31:47.034443 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-gdxqz"] Jan 12 13:31:47 crc kubenswrapper[4580]: I0112 13:31:47.041801 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-gdxqz"] Jan 12 13:31:47 crc kubenswrapper[4580]: I0112 13:31:47.304219 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb0ed855-adbc-497b-9bc5-92330edbb8c8" path="/var/lib/kubelet/pods/eb0ed855-adbc-497b-9bc5-92330edbb8c8/volumes" Jan 12 13:31:49 crc kubenswrapper[4580]: I0112 13:31:49.396219 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lsz48"] Jan 12 13:31:49 crc kubenswrapper[4580]: E0112 13:31:49.397004 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2df5077b-a44c-4d26-bfa3-8da7917be387" containerName="extract-utilities" Jan 12 13:31:49 crc kubenswrapper[4580]: I0112 13:31:49.397020 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="2df5077b-a44c-4d26-bfa3-8da7917be387" containerName="extract-utilities" Jan 12 13:31:49 crc kubenswrapper[4580]: E0112 13:31:49.397047 4580 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="2df5077b-a44c-4d26-bfa3-8da7917be387" containerName="extract-content" Jan 12 13:31:49 crc kubenswrapper[4580]: I0112 13:31:49.397055 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="2df5077b-a44c-4d26-bfa3-8da7917be387" containerName="extract-content" Jan 12 13:31:49 crc kubenswrapper[4580]: E0112 13:31:49.397067 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2df5077b-a44c-4d26-bfa3-8da7917be387" containerName="registry-server" Jan 12 13:31:49 crc kubenswrapper[4580]: I0112 13:31:49.397074 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="2df5077b-a44c-4d26-bfa3-8da7917be387" containerName="registry-server" Jan 12 13:31:49 crc kubenswrapper[4580]: I0112 13:31:49.397277 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="2df5077b-a44c-4d26-bfa3-8da7917be387" containerName="registry-server" Jan 12 13:31:49 crc kubenswrapper[4580]: I0112 13:31:49.398659 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lsz48" Jan 12 13:31:49 crc kubenswrapper[4580]: I0112 13:31:49.405675 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lsz48"] Jan 12 13:31:49 crc kubenswrapper[4580]: I0112 13:31:49.470293 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hllj\" (UniqueName: \"kubernetes.io/projected/e6980755-f603-4a2b-b82e-45018cbd55d0-kube-api-access-5hllj\") pod \"certified-operators-lsz48\" (UID: \"e6980755-f603-4a2b-b82e-45018cbd55d0\") " pod="openshift-marketplace/certified-operators-lsz48" Jan 12 13:31:49 crc kubenswrapper[4580]: I0112 13:31:49.470407 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6980755-f603-4a2b-b82e-45018cbd55d0-utilities\") pod \"certified-operators-lsz48\" (UID: 
\"e6980755-f603-4a2b-b82e-45018cbd55d0\") " pod="openshift-marketplace/certified-operators-lsz48" Jan 12 13:31:49 crc kubenswrapper[4580]: I0112 13:31:49.470441 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6980755-f603-4a2b-b82e-45018cbd55d0-catalog-content\") pod \"certified-operators-lsz48\" (UID: \"e6980755-f603-4a2b-b82e-45018cbd55d0\") " pod="openshift-marketplace/certified-operators-lsz48" Jan 12 13:31:49 crc kubenswrapper[4580]: I0112 13:31:49.572280 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hllj\" (UniqueName: \"kubernetes.io/projected/e6980755-f603-4a2b-b82e-45018cbd55d0-kube-api-access-5hllj\") pod \"certified-operators-lsz48\" (UID: \"e6980755-f603-4a2b-b82e-45018cbd55d0\") " pod="openshift-marketplace/certified-operators-lsz48" Jan 12 13:31:49 crc kubenswrapper[4580]: I0112 13:31:49.572377 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6980755-f603-4a2b-b82e-45018cbd55d0-utilities\") pod \"certified-operators-lsz48\" (UID: \"e6980755-f603-4a2b-b82e-45018cbd55d0\") " pod="openshift-marketplace/certified-operators-lsz48" Jan 12 13:31:49 crc kubenswrapper[4580]: I0112 13:31:49.572410 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6980755-f603-4a2b-b82e-45018cbd55d0-catalog-content\") pod \"certified-operators-lsz48\" (UID: \"e6980755-f603-4a2b-b82e-45018cbd55d0\") " pod="openshift-marketplace/certified-operators-lsz48" Jan 12 13:31:49 crc kubenswrapper[4580]: I0112 13:31:49.572891 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6980755-f603-4a2b-b82e-45018cbd55d0-catalog-content\") pod \"certified-operators-lsz48\" (UID: 
\"e6980755-f603-4a2b-b82e-45018cbd55d0\") " pod="openshift-marketplace/certified-operators-lsz48" Jan 12 13:31:49 crc kubenswrapper[4580]: I0112 13:31:49.572978 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6980755-f603-4a2b-b82e-45018cbd55d0-utilities\") pod \"certified-operators-lsz48\" (UID: \"e6980755-f603-4a2b-b82e-45018cbd55d0\") " pod="openshift-marketplace/certified-operators-lsz48" Jan 12 13:31:49 crc kubenswrapper[4580]: I0112 13:31:49.590966 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hllj\" (UniqueName: \"kubernetes.io/projected/e6980755-f603-4a2b-b82e-45018cbd55d0-kube-api-access-5hllj\") pod \"certified-operators-lsz48\" (UID: \"e6980755-f603-4a2b-b82e-45018cbd55d0\") " pod="openshift-marketplace/certified-operators-lsz48" Jan 12 13:31:49 crc kubenswrapper[4580]: I0112 13:31:49.717522 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lsz48" Jan 12 13:31:50 crc kubenswrapper[4580]: I0112 13:31:50.177628 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lsz48"] Jan 12 13:31:50 crc kubenswrapper[4580]: I0112 13:31:50.914282 4580 generic.go:334] "Generic (PLEG): container finished" podID="e6980755-f603-4a2b-b82e-45018cbd55d0" containerID="a8655f6251866e89b6429c181f2b42e0ff7d45af918c3b6a7bbb4a67abcd73a2" exitCode=0 Jan 12 13:31:50 crc kubenswrapper[4580]: I0112 13:31:50.914333 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lsz48" event={"ID":"e6980755-f603-4a2b-b82e-45018cbd55d0","Type":"ContainerDied","Data":"a8655f6251866e89b6429c181f2b42e0ff7d45af918c3b6a7bbb4a67abcd73a2"} Jan 12 13:31:50 crc kubenswrapper[4580]: I0112 13:31:50.915310 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lsz48" 
event={"ID":"e6980755-f603-4a2b-b82e-45018cbd55d0","Type":"ContainerStarted","Data":"28619c414c9afd1dc0f857c035271a7f649dc8b89238a001fb89b1942e037221"} Jan 12 13:31:51 crc kubenswrapper[4580]: I0112 13:31:51.927835 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lsz48" event={"ID":"e6980755-f603-4a2b-b82e-45018cbd55d0","Type":"ContainerStarted","Data":"1ac17b7a1f2432237d02528d099fb49dc558909c15d855e2be65d1168a2eb49a"} Jan 12 13:31:52 crc kubenswrapper[4580]: I0112 13:31:52.943341 4580 generic.go:334] "Generic (PLEG): container finished" podID="e6980755-f603-4a2b-b82e-45018cbd55d0" containerID="1ac17b7a1f2432237d02528d099fb49dc558909c15d855e2be65d1168a2eb49a" exitCode=0 Jan 12 13:31:52 crc kubenswrapper[4580]: I0112 13:31:52.943437 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lsz48" event={"ID":"e6980755-f603-4a2b-b82e-45018cbd55d0","Type":"ContainerDied","Data":"1ac17b7a1f2432237d02528d099fb49dc558909c15d855e2be65d1168a2eb49a"} Jan 12 13:31:53 crc kubenswrapper[4580]: I0112 13:31:53.027472 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-s9pd9"] Jan 12 13:31:53 crc kubenswrapper[4580]: I0112 13:31:53.032828 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-s9pd9"] Jan 12 13:31:53 crc kubenswrapper[4580]: I0112 13:31:53.292256 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edd41e34-6733-4a77-b99b-3ab0895b124a" path="/var/lib/kubelet/pods/edd41e34-6733-4a77-b99b-3ab0895b124a/volumes" Jan 12 13:31:53 crc kubenswrapper[4580]: I0112 13:31:53.953645 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lsz48" event={"ID":"e6980755-f603-4a2b-b82e-45018cbd55d0","Type":"ContainerStarted","Data":"78f1a54912927a772dbb312bddd6757472630feba5af2c1ca2bcb8ece11bf1cb"} Jan 12 13:31:53 crc kubenswrapper[4580]: I0112 
13:31:53.969373 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lsz48" podStartSLOduration=2.258016118 podStartE2EDuration="4.969357608s" podCreationTimestamp="2026-01-12 13:31:49 +0000 UTC" firstStartedPulling="2026-01-12 13:31:50.916588016 +0000 UTC m=+1509.960806706" lastFinishedPulling="2026-01-12 13:31:53.627929506 +0000 UTC m=+1512.672148196" observedRunningTime="2026-01-12 13:31:53.968012349 +0000 UTC m=+1513.012231039" watchObservedRunningTime="2026-01-12 13:31:53.969357608 +0000 UTC m=+1513.013576298" Jan 12 13:31:55 crc kubenswrapper[4580]: I0112 13:31:55.155262 4580 scope.go:117] "RemoveContainer" containerID="1e1787d4224e9f482311b2fc6bbea9890c23a5d5d6662897289c0e8cb4806aad" Jan 12 13:31:55 crc kubenswrapper[4580]: I0112 13:31:55.181917 4580 scope.go:117] "RemoveContainer" containerID="ebe319b0186fc31e0386d03451272307737e022576009ee882bd299a18729bc1" Jan 12 13:31:55 crc kubenswrapper[4580]: I0112 13:31:55.212622 4580 scope.go:117] "RemoveContainer" containerID="32d7509faeb05df8322a013109b7939eb9eb224e6dc8e66eb443205e7d91533b" Jan 12 13:31:55 crc kubenswrapper[4580]: I0112 13:31:55.241950 4580 scope.go:117] "RemoveContainer" containerID="4c6f96f05ab13c382074308c3e73b4c335cb41f3bf56a815ed468b366edd7be7" Jan 12 13:31:55 crc kubenswrapper[4580]: I0112 13:31:55.266962 4580 scope.go:117] "RemoveContainer" containerID="21e58b652a63beb9a4dd53f0cfc7d09efecf216a7ca5e5f3ac4c4d55b42999e2" Jan 12 13:31:55 crc kubenswrapper[4580]: I0112 13:31:55.319691 4580 scope.go:117] "RemoveContainer" containerID="cfe69420301424bb8260648e61122d02674c853997aaf692f028749778fb265c" Jan 12 13:31:55 crc kubenswrapper[4580]: I0112 13:31:55.349442 4580 scope.go:117] "RemoveContainer" containerID="543d08e2b210315c80a3e52ef0e43b8fc07cc6c5b729abece9dcd7133630a163" Jan 12 13:31:55 crc kubenswrapper[4580]: I0112 13:31:55.368528 4580 scope.go:117] "RemoveContainer" 
containerID="766a0e0bcaa4a21deed755b346bffab06e1b15b9f8f13b5b1b9af81e66d8e506" Jan 12 13:31:55 crc kubenswrapper[4580]: I0112 13:31:55.407910 4580 scope.go:117] "RemoveContainer" containerID="c4185759bec5c4dbc9ef8e7c55449106a9d766e56cfd897eb244d56801c306be" Jan 12 13:31:55 crc kubenswrapper[4580]: I0112 13:31:55.433540 4580 scope.go:117] "RemoveContainer" containerID="8b16d6c23e87d6c3ea7dbdd47884edb1b77b68c62764a1e1fbc4f4987a6b88a1" Jan 12 13:31:55 crc kubenswrapper[4580]: I0112 13:31:55.451169 4580 scope.go:117] "RemoveContainer" containerID="f5c48aef4732f12bdd251a0162274ccf94b53ccb98c9c0456744ea06cd1a1dd9" Jan 12 13:31:56 crc kubenswrapper[4580]: I0112 13:31:56.783858 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-khfwl"] Jan 12 13:31:56 crc kubenswrapper[4580]: I0112 13:31:56.785853 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-khfwl" Jan 12 13:31:56 crc kubenswrapper[4580]: I0112 13:31:56.805153 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-khfwl"] Jan 12 13:31:56 crc kubenswrapper[4580]: I0112 13:31:56.838301 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d38623c-2fbb-4d12-8ec0-a96a5c42fdc1-utilities\") pod \"redhat-operators-khfwl\" (UID: \"2d38623c-2fbb-4d12-8ec0-a96a5c42fdc1\") " pod="openshift-marketplace/redhat-operators-khfwl" Jan 12 13:31:56 crc kubenswrapper[4580]: I0112 13:31:56.838345 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62vbm\" (UniqueName: \"kubernetes.io/projected/2d38623c-2fbb-4d12-8ec0-a96a5c42fdc1-kube-api-access-62vbm\") pod \"redhat-operators-khfwl\" (UID: \"2d38623c-2fbb-4d12-8ec0-a96a5c42fdc1\") " pod="openshift-marketplace/redhat-operators-khfwl" Jan 12 13:31:56 crc 
kubenswrapper[4580]: I0112 13:31:56.838439 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d38623c-2fbb-4d12-8ec0-a96a5c42fdc1-catalog-content\") pod \"redhat-operators-khfwl\" (UID: \"2d38623c-2fbb-4d12-8ec0-a96a5c42fdc1\") " pod="openshift-marketplace/redhat-operators-khfwl" Jan 12 13:31:56 crc kubenswrapper[4580]: I0112 13:31:56.940961 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62vbm\" (UniqueName: \"kubernetes.io/projected/2d38623c-2fbb-4d12-8ec0-a96a5c42fdc1-kube-api-access-62vbm\") pod \"redhat-operators-khfwl\" (UID: \"2d38623c-2fbb-4d12-8ec0-a96a5c42fdc1\") " pod="openshift-marketplace/redhat-operators-khfwl" Jan 12 13:31:56 crc kubenswrapper[4580]: I0112 13:31:56.941227 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d38623c-2fbb-4d12-8ec0-a96a5c42fdc1-catalog-content\") pod \"redhat-operators-khfwl\" (UID: \"2d38623c-2fbb-4d12-8ec0-a96a5c42fdc1\") " pod="openshift-marketplace/redhat-operators-khfwl" Jan 12 13:31:56 crc kubenswrapper[4580]: I0112 13:31:56.941491 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d38623c-2fbb-4d12-8ec0-a96a5c42fdc1-utilities\") pod \"redhat-operators-khfwl\" (UID: \"2d38623c-2fbb-4d12-8ec0-a96a5c42fdc1\") " pod="openshift-marketplace/redhat-operators-khfwl" Jan 12 13:31:56 crc kubenswrapper[4580]: I0112 13:31:56.941890 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d38623c-2fbb-4d12-8ec0-a96a5c42fdc1-catalog-content\") pod \"redhat-operators-khfwl\" (UID: \"2d38623c-2fbb-4d12-8ec0-a96a5c42fdc1\") " pod="openshift-marketplace/redhat-operators-khfwl" Jan 12 13:31:56 crc kubenswrapper[4580]: I0112 
13:31:56.941924 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d38623c-2fbb-4d12-8ec0-a96a5c42fdc1-utilities\") pod \"redhat-operators-khfwl\" (UID: \"2d38623c-2fbb-4d12-8ec0-a96a5c42fdc1\") " pod="openshift-marketplace/redhat-operators-khfwl" Jan 12 13:31:56 crc kubenswrapper[4580]: I0112 13:31:56.962512 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62vbm\" (UniqueName: \"kubernetes.io/projected/2d38623c-2fbb-4d12-8ec0-a96a5c42fdc1-kube-api-access-62vbm\") pod \"redhat-operators-khfwl\" (UID: \"2d38623c-2fbb-4d12-8ec0-a96a5c42fdc1\") " pod="openshift-marketplace/redhat-operators-khfwl" Jan 12 13:31:57 crc kubenswrapper[4580]: I0112 13:31:57.105493 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-khfwl" Jan 12 13:31:57 crc kubenswrapper[4580]: I0112 13:31:57.523815 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-khfwl"] Jan 12 13:31:57 crc kubenswrapper[4580]: W0112 13:31:57.528450 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d38623c_2fbb_4d12_8ec0_a96a5c42fdc1.slice/crio-bc00a36be65097224e03daf28e48395b663a38eb68096aae5fb1cd24b503c73f WatchSource:0}: Error finding container bc00a36be65097224e03daf28e48395b663a38eb68096aae5fb1cd24b503c73f: Status 404 returned error can't find the container with id bc00a36be65097224e03daf28e48395b663a38eb68096aae5fb1cd24b503c73f Jan 12 13:31:57 crc kubenswrapper[4580]: I0112 13:31:57.987420 4580 generic.go:334] "Generic (PLEG): container finished" podID="2d38623c-2fbb-4d12-8ec0-a96a5c42fdc1" containerID="688e5f744a92864715248ff29a3b936b362ba28ba8ed34118d869635e95e7347" exitCode=0 Jan 12 13:31:57 crc kubenswrapper[4580]: I0112 13:31:57.987493 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-khfwl" event={"ID":"2d38623c-2fbb-4d12-8ec0-a96a5c42fdc1","Type":"ContainerDied","Data":"688e5f744a92864715248ff29a3b936b362ba28ba8ed34118d869635e95e7347"} Jan 12 13:31:57 crc kubenswrapper[4580]: I0112 13:31:57.987578 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-khfwl" event={"ID":"2d38623c-2fbb-4d12-8ec0-a96a5c42fdc1","Type":"ContainerStarted","Data":"bc00a36be65097224e03daf28e48395b663a38eb68096aae5fb1cd24b503c73f"} Jan 12 13:31:57 crc kubenswrapper[4580]: I0112 13:31:57.989463 4580 generic.go:334] "Generic (PLEG): container finished" podID="b647e7dc-a5cd-4e7e-a5fe-744a53b4c3e9" containerID="087f106302b789ca90e73c9d5768682df8c36e9b83d26e6e5ed1f25a92e7727b" exitCode=0 Jan 12 13:31:57 crc kubenswrapper[4580]: I0112 13:31:57.989513 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5hkmr" event={"ID":"b647e7dc-a5cd-4e7e-a5fe-744a53b4c3e9","Type":"ContainerDied","Data":"087f106302b789ca90e73c9d5768682df8c36e9b83d26e6e5ed1f25a92e7727b"} Jan 12 13:31:59 crc kubenswrapper[4580]: I0112 13:31:59.362419 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5hkmr" Jan 12 13:31:59 crc kubenswrapper[4580]: I0112 13:31:59.490873 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b647e7dc-a5cd-4e7e-a5fe-744a53b4c3e9-ssh-key-openstack-edpm-ipam\") pod \"b647e7dc-a5cd-4e7e-a5fe-744a53b4c3e9\" (UID: \"b647e7dc-a5cd-4e7e-a5fe-744a53b4c3e9\") " Jan 12 13:31:59 crc kubenswrapper[4580]: I0112 13:31:59.491009 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnwbr\" (UniqueName: \"kubernetes.io/projected/b647e7dc-a5cd-4e7e-a5fe-744a53b4c3e9-kube-api-access-mnwbr\") pod \"b647e7dc-a5cd-4e7e-a5fe-744a53b4c3e9\" (UID: \"b647e7dc-a5cd-4e7e-a5fe-744a53b4c3e9\") " Jan 12 13:31:59 crc kubenswrapper[4580]: I0112 13:31:59.491354 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b647e7dc-a5cd-4e7e-a5fe-744a53b4c3e9-inventory\") pod \"b647e7dc-a5cd-4e7e-a5fe-744a53b4c3e9\" (UID: \"b647e7dc-a5cd-4e7e-a5fe-744a53b4c3e9\") " Jan 12 13:31:59 crc kubenswrapper[4580]: I0112 13:31:59.497535 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b647e7dc-a5cd-4e7e-a5fe-744a53b4c3e9-kube-api-access-mnwbr" (OuterVolumeSpecName: "kube-api-access-mnwbr") pod "b647e7dc-a5cd-4e7e-a5fe-744a53b4c3e9" (UID: "b647e7dc-a5cd-4e7e-a5fe-744a53b4c3e9"). InnerVolumeSpecName "kube-api-access-mnwbr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:31:59 crc kubenswrapper[4580]: I0112 13:31:59.515536 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b647e7dc-a5cd-4e7e-a5fe-744a53b4c3e9-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b647e7dc-a5cd-4e7e-a5fe-744a53b4c3e9" (UID: "b647e7dc-a5cd-4e7e-a5fe-744a53b4c3e9"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:31:59 crc kubenswrapper[4580]: I0112 13:31:59.517640 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b647e7dc-a5cd-4e7e-a5fe-744a53b4c3e9-inventory" (OuterVolumeSpecName: "inventory") pod "b647e7dc-a5cd-4e7e-a5fe-744a53b4c3e9" (UID: "b647e7dc-a5cd-4e7e-a5fe-744a53b4c3e9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:31:59 crc kubenswrapper[4580]: I0112 13:31:59.596052 4580 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b647e7dc-a5cd-4e7e-a5fe-744a53b4c3e9-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 12 13:31:59 crc kubenswrapper[4580]: I0112 13:31:59.596459 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnwbr\" (UniqueName: \"kubernetes.io/projected/b647e7dc-a5cd-4e7e-a5fe-744a53b4c3e9-kube-api-access-mnwbr\") on node \"crc\" DevicePath \"\"" Jan 12 13:31:59 crc kubenswrapper[4580]: I0112 13:31:59.596471 4580 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b647e7dc-a5cd-4e7e-a5fe-744a53b4c3e9-inventory\") on node \"crc\" DevicePath \"\"" Jan 12 13:31:59 crc kubenswrapper[4580]: I0112 13:31:59.718624 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lsz48" Jan 12 13:31:59 crc 
kubenswrapper[4580]: I0112 13:31:59.719052 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lsz48" Jan 12 13:31:59 crc kubenswrapper[4580]: I0112 13:31:59.760950 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lsz48" Jan 12 13:32:00 crc kubenswrapper[4580]: I0112 13:32:00.008027 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-khfwl" event={"ID":"2d38623c-2fbb-4d12-8ec0-a96a5c42fdc1","Type":"ContainerStarted","Data":"aae073662428c9d6a496f170f3d278537d25d133d5ee461af49d076c0e685a8d"} Jan 12 13:32:00 crc kubenswrapper[4580]: I0112 13:32:00.010663 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5hkmr" event={"ID":"b647e7dc-a5cd-4e7e-a5fe-744a53b4c3e9","Type":"ContainerDied","Data":"966bca05206abc2da14b0d929cdbc4bc008b5835195217ea89240f70f2f477d9"} Jan 12 13:32:00 crc kubenswrapper[4580]: I0112 13:32:00.010696 4580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="966bca05206abc2da14b0d929cdbc4bc008b5835195217ea89240f70f2f477d9" Jan 12 13:32:00 crc kubenswrapper[4580]: I0112 13:32:00.010717 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5hkmr" Jan 12 13:32:00 crc kubenswrapper[4580]: I0112 13:32:00.057346 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lsz48" Jan 12 13:32:00 crc kubenswrapper[4580]: I0112 13:32:00.099601 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5hwt7"] Jan 12 13:32:00 crc kubenswrapper[4580]: E0112 13:32:00.100495 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b647e7dc-a5cd-4e7e-a5fe-744a53b4c3e9" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 12 13:32:00 crc kubenswrapper[4580]: I0112 13:32:00.100536 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="b647e7dc-a5cd-4e7e-a5fe-744a53b4c3e9" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 12 13:32:00 crc kubenswrapper[4580]: I0112 13:32:00.100874 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="b647e7dc-a5cd-4e7e-a5fe-744a53b4c3e9" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 12 13:32:00 crc kubenswrapper[4580]: I0112 13:32:00.102118 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5hwt7" Jan 12 13:32:00 crc kubenswrapper[4580]: I0112 13:32:00.105685 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 12 13:32:00 crc kubenswrapper[4580]: I0112 13:32:00.105779 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hm8xh" Jan 12 13:32:00 crc kubenswrapper[4580]: I0112 13:32:00.106145 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 12 13:32:00 crc kubenswrapper[4580]: I0112 13:32:00.106593 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 12 13:32:00 crc kubenswrapper[4580]: I0112 13:32:00.108927 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/553cda0e-1691-4748-8a47-d34d8600ea2e-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5hwt7\" (UID: \"553cda0e-1691-4748-8a47-d34d8600ea2e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5hwt7" Jan 12 13:32:00 crc kubenswrapper[4580]: I0112 13:32:00.108971 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/553cda0e-1691-4748-8a47-d34d8600ea2e-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5hwt7\" (UID: \"553cda0e-1691-4748-8a47-d34d8600ea2e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5hwt7" Jan 12 13:32:00 crc kubenswrapper[4580]: I0112 13:32:00.109058 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jzhf\" (UniqueName: 
\"kubernetes.io/projected/553cda0e-1691-4748-8a47-d34d8600ea2e-kube-api-access-8jzhf\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5hwt7\" (UID: \"553cda0e-1691-4748-8a47-d34d8600ea2e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5hwt7" Jan 12 13:32:00 crc kubenswrapper[4580]: I0112 13:32:00.111649 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5hwt7"] Jan 12 13:32:00 crc kubenswrapper[4580]: I0112 13:32:00.211152 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/553cda0e-1691-4748-8a47-d34d8600ea2e-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5hwt7\" (UID: \"553cda0e-1691-4748-8a47-d34d8600ea2e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5hwt7" Jan 12 13:32:00 crc kubenswrapper[4580]: I0112 13:32:00.211816 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/553cda0e-1691-4748-8a47-d34d8600ea2e-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5hwt7\" (UID: \"553cda0e-1691-4748-8a47-d34d8600ea2e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5hwt7" Jan 12 13:32:00 crc kubenswrapper[4580]: I0112 13:32:00.211927 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jzhf\" (UniqueName: \"kubernetes.io/projected/553cda0e-1691-4748-8a47-d34d8600ea2e-kube-api-access-8jzhf\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5hwt7\" (UID: \"553cda0e-1691-4748-8a47-d34d8600ea2e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5hwt7" Jan 12 13:32:00 crc kubenswrapper[4580]: I0112 13:32:00.215961 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/553cda0e-1691-4748-8a47-d34d8600ea2e-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5hwt7\" (UID: \"553cda0e-1691-4748-8a47-d34d8600ea2e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5hwt7" Jan 12 13:32:00 crc kubenswrapper[4580]: I0112 13:32:00.216539 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/553cda0e-1691-4748-8a47-d34d8600ea2e-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5hwt7\" (UID: \"553cda0e-1691-4748-8a47-d34d8600ea2e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5hwt7" Jan 12 13:32:00 crc kubenswrapper[4580]: I0112 13:32:00.226627 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jzhf\" (UniqueName: \"kubernetes.io/projected/553cda0e-1691-4748-8a47-d34d8600ea2e-kube-api-access-8jzhf\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5hwt7\" (UID: \"553cda0e-1691-4748-8a47-d34d8600ea2e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5hwt7" Jan 12 13:32:00 crc kubenswrapper[4580]: I0112 13:32:00.426805 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5hwt7"
Jan 12 13:32:00 crc kubenswrapper[4580]: I0112 13:32:00.922705 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5hwt7"]
Jan 12 13:32:01 crc kubenswrapper[4580]: I0112 13:32:01.020458 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5hwt7" event={"ID":"553cda0e-1691-4748-8a47-d34d8600ea2e","Type":"ContainerStarted","Data":"5260a48c2e8b2939902e62739b6faf22f9e6600ae32e1f0cb7315fa3a356307b"}
Jan 12 13:32:01 crc kubenswrapper[4580]: I0112 13:32:01.781316 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lsz48"]
Jan 12 13:32:02 crc kubenswrapper[4580]: I0112 13:32:02.029292 4580 generic.go:334] "Generic (PLEG): container finished" podID="2d38623c-2fbb-4d12-8ec0-a96a5c42fdc1" containerID="aae073662428c9d6a496f170f3d278537d25d133d5ee461af49d076c0e685a8d" exitCode=0
Jan 12 13:32:02 crc kubenswrapper[4580]: I0112 13:32:02.029330 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-khfwl" event={"ID":"2d38623c-2fbb-4d12-8ec0-a96a5c42fdc1","Type":"ContainerDied","Data":"aae073662428c9d6a496f170f3d278537d25d133d5ee461af49d076c0e685a8d"}
Jan 12 13:32:03 crc kubenswrapper[4580]: I0112 13:32:03.032265 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-8lnk9"]
Jan 12 13:32:03 crc kubenswrapper[4580]: I0112 13:32:03.039009 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5hwt7" event={"ID":"553cda0e-1691-4748-8a47-d34d8600ea2e","Type":"ContainerStarted","Data":"64ef94b17a783b3ca93e9519dc0d851d19c2225ca2a0e48683c4594847185ff8"}
Jan 12 13:32:03 crc kubenswrapper[4580]: I0112 13:32:03.041861 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lsz48" podUID="e6980755-f603-4a2b-b82e-45018cbd55d0" containerName="registry-server" containerID="cri-o://78f1a54912927a772dbb312bddd6757472630feba5af2c1ca2bcb8ece11bf1cb" gracePeriod=2
Jan 12 13:32:03 crc kubenswrapper[4580]: I0112 13:32:03.042049 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-khfwl" event={"ID":"2d38623c-2fbb-4d12-8ec0-a96a5c42fdc1","Type":"ContainerStarted","Data":"04ac75554e2007a18ef7b71c9d92b1f9d83a301e1061f11302624af02cb27410"}
Jan 12 13:32:03 crc kubenswrapper[4580]: I0112 13:32:03.042916 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-8lnk9"]
Jan 12 13:32:03 crc kubenswrapper[4580]: I0112 13:32:03.059515 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5hwt7" podStartSLOduration=2.030303615 podStartE2EDuration="3.059506766s" podCreationTimestamp="2026-01-12 13:32:00 +0000 UTC" firstStartedPulling="2026-01-12 13:32:00.93078165 +0000 UTC m=+1519.975000340" lastFinishedPulling="2026-01-12 13:32:01.959984801 +0000 UTC m=+1521.004203491" observedRunningTime="2026-01-12 13:32:03.054569677 +0000 UTC m=+1522.098788366" watchObservedRunningTime="2026-01-12 13:32:03.059506766 +0000 UTC m=+1522.103725456"
Jan 12 13:32:03 crc kubenswrapper[4580]: I0112 13:32:03.070501 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-khfwl" podStartSLOduration=2.400778605 podStartE2EDuration="7.070494347s" podCreationTimestamp="2026-01-12 13:31:56 +0000 UTC" firstStartedPulling="2026-01-12 13:31:57.989293362 +0000 UTC m=+1517.033512051" lastFinishedPulling="2026-01-12 13:32:02.659009104 +0000 UTC m=+1521.703227793" observedRunningTime="2026-01-12 13:32:03.069816152 +0000 UTC m=+1522.114034842" watchObservedRunningTime="2026-01-12 13:32:03.070494347 +0000 UTC m=+1522.114713037"
Jan 12 13:32:03 crc kubenswrapper[4580]: I0112 13:32:03.292305 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d306f46-ea22-4b07-a18c-5134b125fa49" path="/var/lib/kubelet/pods/7d306f46-ea22-4b07-a18c-5134b125fa49/volumes"
Jan 12 13:32:03 crc kubenswrapper[4580]: I0112 13:32:03.462872 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lsz48"
Jan 12 13:32:03 crc kubenswrapper[4580]: I0112 13:32:03.584937 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hllj\" (UniqueName: \"kubernetes.io/projected/e6980755-f603-4a2b-b82e-45018cbd55d0-kube-api-access-5hllj\") pod \"e6980755-f603-4a2b-b82e-45018cbd55d0\" (UID: \"e6980755-f603-4a2b-b82e-45018cbd55d0\") "
Jan 12 13:32:03 crc kubenswrapper[4580]: I0112 13:32:03.585174 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6980755-f603-4a2b-b82e-45018cbd55d0-utilities\") pod \"e6980755-f603-4a2b-b82e-45018cbd55d0\" (UID: \"e6980755-f603-4a2b-b82e-45018cbd55d0\") "
Jan 12 13:32:03 crc kubenswrapper[4580]: I0112 13:32:03.585605 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6980755-f603-4a2b-b82e-45018cbd55d0-utilities" (OuterVolumeSpecName: "utilities") pod "e6980755-f603-4a2b-b82e-45018cbd55d0" (UID: "e6980755-f603-4a2b-b82e-45018cbd55d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 12 13:32:03 crc kubenswrapper[4580]: I0112 13:32:03.585668 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6980755-f603-4a2b-b82e-45018cbd55d0-catalog-content\") pod \"e6980755-f603-4a2b-b82e-45018cbd55d0\" (UID: \"e6980755-f603-4a2b-b82e-45018cbd55d0\") "
Jan 12 13:32:03 crc kubenswrapper[4580]: I0112 13:32:03.591222 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6980755-f603-4a2b-b82e-45018cbd55d0-kube-api-access-5hllj" (OuterVolumeSpecName: "kube-api-access-5hllj") pod "e6980755-f603-4a2b-b82e-45018cbd55d0" (UID: "e6980755-f603-4a2b-b82e-45018cbd55d0"). InnerVolumeSpecName "kube-api-access-5hllj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 13:32:03 crc kubenswrapper[4580]: I0112 13:32:03.591644 4580 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6980755-f603-4a2b-b82e-45018cbd55d0-utilities\") on node \"crc\" DevicePath \"\""
Jan 12 13:32:03 crc kubenswrapper[4580]: I0112 13:32:03.626991 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6980755-f603-4a2b-b82e-45018cbd55d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e6980755-f603-4a2b-b82e-45018cbd55d0" (UID: "e6980755-f603-4a2b-b82e-45018cbd55d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 12 13:32:03 crc kubenswrapper[4580]: I0112 13:32:03.694645 4580 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6980755-f603-4a2b-b82e-45018cbd55d0-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 12 13:32:03 crc kubenswrapper[4580]: I0112 13:32:03.694688 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hllj\" (UniqueName: \"kubernetes.io/projected/e6980755-f603-4a2b-b82e-45018cbd55d0-kube-api-access-5hllj\") on node \"crc\" DevicePath \"\""
Jan 12 13:32:04 crc kubenswrapper[4580]: I0112 13:32:04.054353 4580 generic.go:334] "Generic (PLEG): container finished" podID="e6980755-f603-4a2b-b82e-45018cbd55d0" containerID="78f1a54912927a772dbb312bddd6757472630feba5af2c1ca2bcb8ece11bf1cb" exitCode=0
Jan 12 13:32:04 crc kubenswrapper[4580]: I0112 13:32:04.054419 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lsz48" event={"ID":"e6980755-f603-4a2b-b82e-45018cbd55d0","Type":"ContainerDied","Data":"78f1a54912927a772dbb312bddd6757472630feba5af2c1ca2bcb8ece11bf1cb"}
Jan 12 13:32:04 crc kubenswrapper[4580]: I0112 13:32:04.054489 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lsz48"
Jan 12 13:32:04 crc kubenswrapper[4580]: I0112 13:32:04.054747 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lsz48" event={"ID":"e6980755-f603-4a2b-b82e-45018cbd55d0","Type":"ContainerDied","Data":"28619c414c9afd1dc0f857c035271a7f649dc8b89238a001fb89b1942e037221"}
Jan 12 13:32:04 crc kubenswrapper[4580]: I0112 13:32:04.054790 4580 scope.go:117] "RemoveContainer" containerID="78f1a54912927a772dbb312bddd6757472630feba5af2c1ca2bcb8ece11bf1cb"
Jan 12 13:32:04 crc kubenswrapper[4580]: I0112 13:32:04.073934 4580 scope.go:117] "RemoveContainer" containerID="1ac17b7a1f2432237d02528d099fb49dc558909c15d855e2be65d1168a2eb49a"
Jan 12 13:32:04 crc kubenswrapper[4580]: I0112 13:32:04.084385 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lsz48"]
Jan 12 13:32:04 crc kubenswrapper[4580]: I0112 13:32:04.089713 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lsz48"]
Jan 12 13:32:04 crc kubenswrapper[4580]: I0112 13:32:04.099602 4580 scope.go:117] "RemoveContainer" containerID="a8655f6251866e89b6429c181f2b42e0ff7d45af918c3b6a7bbb4a67abcd73a2"
Jan 12 13:32:04 crc kubenswrapper[4580]: I0112 13:32:04.137229 4580 scope.go:117] "RemoveContainer" containerID="78f1a54912927a772dbb312bddd6757472630feba5af2c1ca2bcb8ece11bf1cb"
Jan 12 13:32:04 crc kubenswrapper[4580]: E0112 13:32:04.137727 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78f1a54912927a772dbb312bddd6757472630feba5af2c1ca2bcb8ece11bf1cb\": container with ID starting with 78f1a54912927a772dbb312bddd6757472630feba5af2c1ca2bcb8ece11bf1cb not found: ID does not exist" containerID="78f1a54912927a772dbb312bddd6757472630feba5af2c1ca2bcb8ece11bf1cb"
Jan 12 13:32:04 crc kubenswrapper[4580]: I0112 13:32:04.137761 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78f1a54912927a772dbb312bddd6757472630feba5af2c1ca2bcb8ece11bf1cb"} err="failed to get container status \"78f1a54912927a772dbb312bddd6757472630feba5af2c1ca2bcb8ece11bf1cb\": rpc error: code = NotFound desc = could not find container \"78f1a54912927a772dbb312bddd6757472630feba5af2c1ca2bcb8ece11bf1cb\": container with ID starting with 78f1a54912927a772dbb312bddd6757472630feba5af2c1ca2bcb8ece11bf1cb not found: ID does not exist"
Jan 12 13:32:04 crc kubenswrapper[4580]: I0112 13:32:04.137784 4580 scope.go:117] "RemoveContainer" containerID="1ac17b7a1f2432237d02528d099fb49dc558909c15d855e2be65d1168a2eb49a"
Jan 12 13:32:04 crc kubenswrapper[4580]: E0112 13:32:04.138123 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ac17b7a1f2432237d02528d099fb49dc558909c15d855e2be65d1168a2eb49a\": container with ID starting with 1ac17b7a1f2432237d02528d099fb49dc558909c15d855e2be65d1168a2eb49a not found: ID does not exist" containerID="1ac17b7a1f2432237d02528d099fb49dc558909c15d855e2be65d1168a2eb49a"
Jan 12 13:32:04 crc kubenswrapper[4580]: I0112 13:32:04.138155 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ac17b7a1f2432237d02528d099fb49dc558909c15d855e2be65d1168a2eb49a"} err="failed to get container status \"1ac17b7a1f2432237d02528d099fb49dc558909c15d855e2be65d1168a2eb49a\": rpc error: code = NotFound desc = could not find container \"1ac17b7a1f2432237d02528d099fb49dc558909c15d855e2be65d1168a2eb49a\": container with ID starting with 1ac17b7a1f2432237d02528d099fb49dc558909c15d855e2be65d1168a2eb49a not found: ID does not exist"
Jan 12 13:32:04 crc kubenswrapper[4580]: I0112 13:32:04.138169 4580 scope.go:117] "RemoveContainer" containerID="a8655f6251866e89b6429c181f2b42e0ff7d45af918c3b6a7bbb4a67abcd73a2"
Jan 12 13:32:04 crc kubenswrapper[4580]: E0112 13:32:04.138469 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8655f6251866e89b6429c181f2b42e0ff7d45af918c3b6a7bbb4a67abcd73a2\": container with ID starting with a8655f6251866e89b6429c181f2b42e0ff7d45af918c3b6a7bbb4a67abcd73a2 not found: ID does not exist" containerID="a8655f6251866e89b6429c181f2b42e0ff7d45af918c3b6a7bbb4a67abcd73a2"
Jan 12 13:32:04 crc kubenswrapper[4580]: I0112 13:32:04.138506 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8655f6251866e89b6429c181f2b42e0ff7d45af918c3b6a7bbb4a67abcd73a2"} err="failed to get container status \"a8655f6251866e89b6429c181f2b42e0ff7d45af918c3b6a7bbb4a67abcd73a2\": rpc error: code = NotFound desc = could not find container \"a8655f6251866e89b6429c181f2b42e0ff7d45af918c3b6a7bbb4a67abcd73a2\": container with ID starting with a8655f6251866e89b6429c181f2b42e0ff7d45af918c3b6a7bbb4a67abcd73a2 not found: ID does not exist"
Jan 12 13:32:05 crc kubenswrapper[4580]: I0112 13:32:05.291381 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6980755-f603-4a2b-b82e-45018cbd55d0" path="/var/lib/kubelet/pods/e6980755-f603-4a2b-b82e-45018cbd55d0/volumes"
Jan 12 13:32:06 crc kubenswrapper[4580]: I0112 13:32:06.078064 4580 generic.go:334] "Generic (PLEG): container finished" podID="553cda0e-1691-4748-8a47-d34d8600ea2e" containerID="64ef94b17a783b3ca93e9519dc0d851d19c2225ca2a0e48683c4594847185ff8" exitCode=0
Jan 12 13:32:06 crc kubenswrapper[4580]: I0112 13:32:06.078162 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5hwt7" event={"ID":"553cda0e-1691-4748-8a47-d34d8600ea2e","Type":"ContainerDied","Data":"64ef94b17a783b3ca93e9519dc0d851d19c2225ca2a0e48683c4594847185ff8"}
Jan 12 13:32:07 crc kubenswrapper[4580]: I0112 13:32:07.106301 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-khfwl"
Jan 12 13:32:07 crc kubenswrapper[4580]: I0112 13:32:07.106420 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-khfwl"
Jan 12 13:32:07 crc kubenswrapper[4580]: I0112 13:32:07.477119 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5hwt7"
Jan 12 13:32:07 crc kubenswrapper[4580]: I0112 13:32:07.586467 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/553cda0e-1691-4748-8a47-d34d8600ea2e-inventory\") pod \"553cda0e-1691-4748-8a47-d34d8600ea2e\" (UID: \"553cda0e-1691-4748-8a47-d34d8600ea2e\") "
Jan 12 13:32:07 crc kubenswrapper[4580]: I0112 13:32:07.586556 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jzhf\" (UniqueName: \"kubernetes.io/projected/553cda0e-1691-4748-8a47-d34d8600ea2e-kube-api-access-8jzhf\") pod \"553cda0e-1691-4748-8a47-d34d8600ea2e\" (UID: \"553cda0e-1691-4748-8a47-d34d8600ea2e\") "
Jan 12 13:32:07 crc kubenswrapper[4580]: I0112 13:32:07.586634 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/553cda0e-1691-4748-8a47-d34d8600ea2e-ssh-key-openstack-edpm-ipam\") pod \"553cda0e-1691-4748-8a47-d34d8600ea2e\" (UID: \"553cda0e-1691-4748-8a47-d34d8600ea2e\") "
Jan 12 13:32:07 crc kubenswrapper[4580]: I0112 13:32:07.593975 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/553cda0e-1691-4748-8a47-d34d8600ea2e-kube-api-access-8jzhf" (OuterVolumeSpecName: "kube-api-access-8jzhf") pod "553cda0e-1691-4748-8a47-d34d8600ea2e" (UID: "553cda0e-1691-4748-8a47-d34d8600ea2e"). InnerVolumeSpecName "kube-api-access-8jzhf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 13:32:07 crc kubenswrapper[4580]: I0112 13:32:07.611743 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/553cda0e-1691-4748-8a47-d34d8600ea2e-inventory" (OuterVolumeSpecName: "inventory") pod "553cda0e-1691-4748-8a47-d34d8600ea2e" (UID: "553cda0e-1691-4748-8a47-d34d8600ea2e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:32:07 crc kubenswrapper[4580]: I0112 13:32:07.614610 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/553cda0e-1691-4748-8a47-d34d8600ea2e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "553cda0e-1691-4748-8a47-d34d8600ea2e" (UID: "553cda0e-1691-4748-8a47-d34d8600ea2e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:32:07 crc kubenswrapper[4580]: I0112 13:32:07.690264 4580 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/553cda0e-1691-4748-8a47-d34d8600ea2e-inventory\") on node \"crc\" DevicePath \"\""
Jan 12 13:32:07 crc kubenswrapper[4580]: I0112 13:32:07.690295 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8jzhf\" (UniqueName: \"kubernetes.io/projected/553cda0e-1691-4748-8a47-d34d8600ea2e-kube-api-access-8jzhf\") on node \"crc\" DevicePath \"\""
Jan 12 13:32:07 crc kubenswrapper[4580]: I0112 13:32:07.690309 4580 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/553cda0e-1691-4748-8a47-d34d8600ea2e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 12 13:32:08 crc kubenswrapper[4580]: I0112 13:32:08.039190 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-blrls"]
Jan 12 13:32:08 crc kubenswrapper[4580]: I0112 13:32:08.048823 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-blrls"]
Jan 12 13:32:08 crc kubenswrapper[4580]: I0112 13:32:08.102161 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5hwt7" event={"ID":"553cda0e-1691-4748-8a47-d34d8600ea2e","Type":"ContainerDied","Data":"5260a48c2e8b2939902e62739b6faf22f9e6600ae32e1f0cb7315fa3a356307b"}
Jan 12 13:32:08 crc kubenswrapper[4580]: I0112 13:32:08.102196 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5hwt7"
Jan 12 13:32:08 crc kubenswrapper[4580]: I0112 13:32:08.102211 4580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5260a48c2e8b2939902e62739b6faf22f9e6600ae32e1f0cb7315fa3a356307b"
Jan 12 13:32:08 crc kubenswrapper[4580]: I0112 13:32:08.154672 4580 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-khfwl" podUID="2d38623c-2fbb-4d12-8ec0-a96a5c42fdc1" containerName="registry-server" probeResult="failure" output=<
Jan 12 13:32:08 crc kubenswrapper[4580]: timeout: failed to connect service ":50051" within 1s
Jan 12 13:32:08 crc kubenswrapper[4580]: >
Jan 12 13:32:08 crc kubenswrapper[4580]: I0112 13:32:08.158002 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-6cfvt"]
Jan 12 13:32:08 crc kubenswrapper[4580]: E0112 13:32:08.158504 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6980755-f603-4a2b-b82e-45018cbd55d0" containerName="registry-server"
Jan 12 13:32:08 crc kubenswrapper[4580]: I0112 13:32:08.158564 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6980755-f603-4a2b-b82e-45018cbd55d0" containerName="registry-server"
Jan 12 13:32:08 crc kubenswrapper[4580]: E0112 13:32:08.158639 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6980755-f603-4a2b-b82e-45018cbd55d0" containerName="extract-content"
Jan 12 13:32:08 crc kubenswrapper[4580]: I0112 13:32:08.158694 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6980755-f603-4a2b-b82e-45018cbd55d0" containerName="extract-content"
Jan 12 13:32:08 crc kubenswrapper[4580]: E0112 13:32:08.158769 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6980755-f603-4a2b-b82e-45018cbd55d0" containerName="extract-utilities"
Jan 12 13:32:08 crc kubenswrapper[4580]: I0112 13:32:08.158812 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6980755-f603-4a2b-b82e-45018cbd55d0" containerName="extract-utilities"
Jan 12 13:32:08 crc kubenswrapper[4580]: E0112 13:32:08.158865 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="553cda0e-1691-4748-8a47-d34d8600ea2e" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Jan 12 13:32:08 crc kubenswrapper[4580]: I0112 13:32:08.158906 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="553cda0e-1691-4748-8a47-d34d8600ea2e" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Jan 12 13:32:08 crc kubenswrapper[4580]: I0112 13:32:08.159132 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="553cda0e-1691-4748-8a47-d34d8600ea2e" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Jan 12 13:32:08 crc kubenswrapper[4580]: I0112 13:32:08.159209 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6980755-f603-4a2b-b82e-45018cbd55d0" containerName="registry-server"
Jan 12 13:32:08 crc kubenswrapper[4580]: I0112 13:32:08.159869 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6cfvt"
Jan 12 13:32:08 crc kubenswrapper[4580]: I0112 13:32:08.162462 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 12 13:32:08 crc kubenswrapper[4580]: I0112 13:32:08.162462 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 12 13:32:08 crc kubenswrapper[4580]: I0112 13:32:08.162806 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hm8xh"
Jan 12 13:32:08 crc kubenswrapper[4580]: I0112 13:32:08.162920 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 12 13:32:08 crc kubenswrapper[4580]: I0112 13:32:08.167924 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-6cfvt"]
Jan 12 13:32:08 crc kubenswrapper[4580]: I0112 13:32:08.305690 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c42w2\" (UniqueName: \"kubernetes.io/projected/340ac203-3af7-4abd-b75c-bf97009c24e9-kube-api-access-c42w2\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6cfvt\" (UID: \"340ac203-3af7-4abd-b75c-bf97009c24e9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6cfvt"
Jan 12 13:32:08 crc kubenswrapper[4580]: I0112 13:32:08.305900 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/340ac203-3af7-4abd-b75c-bf97009c24e9-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6cfvt\" (UID: \"340ac203-3af7-4abd-b75c-bf97009c24e9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6cfvt"
Jan 12 13:32:08 crc kubenswrapper[4580]: I0112 13:32:08.306163 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/340ac203-3af7-4abd-b75c-bf97009c24e9-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6cfvt\" (UID: \"340ac203-3af7-4abd-b75c-bf97009c24e9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6cfvt"
Jan 12 13:32:08 crc kubenswrapper[4580]: I0112 13:32:08.408114 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c42w2\" (UniqueName: \"kubernetes.io/projected/340ac203-3af7-4abd-b75c-bf97009c24e9-kube-api-access-c42w2\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6cfvt\" (UID: \"340ac203-3af7-4abd-b75c-bf97009c24e9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6cfvt"
Jan 12 13:32:08 crc kubenswrapper[4580]: I0112 13:32:08.408172 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/340ac203-3af7-4abd-b75c-bf97009c24e9-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6cfvt\" (UID: \"340ac203-3af7-4abd-b75c-bf97009c24e9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6cfvt"
Jan 12 13:32:08 crc kubenswrapper[4580]: I0112 13:32:08.408288 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/340ac203-3af7-4abd-b75c-bf97009c24e9-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6cfvt\" (UID: \"340ac203-3af7-4abd-b75c-bf97009c24e9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6cfvt"
Jan 12 13:32:08 crc kubenswrapper[4580]: I0112 13:32:08.413952 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/340ac203-3af7-4abd-b75c-bf97009c24e9-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6cfvt\" (UID: \"340ac203-3af7-4abd-b75c-bf97009c24e9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6cfvt"
Jan 12 13:32:08 crc kubenswrapper[4580]: I0112 13:32:08.414376 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/340ac203-3af7-4abd-b75c-bf97009c24e9-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6cfvt\" (UID: \"340ac203-3af7-4abd-b75c-bf97009c24e9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6cfvt"
Jan 12 13:32:08 crc kubenswrapper[4580]: I0112 13:32:08.424082 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c42w2\" (UniqueName: \"kubernetes.io/projected/340ac203-3af7-4abd-b75c-bf97009c24e9-kube-api-access-c42w2\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6cfvt\" (UID: \"340ac203-3af7-4abd-b75c-bf97009c24e9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6cfvt"
Jan 12 13:32:08 crc kubenswrapper[4580]: I0112 13:32:08.478324 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6cfvt"
Jan 12 13:32:08 crc kubenswrapper[4580]: I0112 13:32:08.979436 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-6cfvt"]
Jan 12 13:32:09 crc kubenswrapper[4580]: I0112 13:32:09.032424 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-mmlfs"]
Jan 12 13:32:09 crc kubenswrapper[4580]: I0112 13:32:09.038858 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-mmlfs"]
Jan 12 13:32:09 crc kubenswrapper[4580]: I0112 13:32:09.112271 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6cfvt" event={"ID":"340ac203-3af7-4abd-b75c-bf97009c24e9","Type":"ContainerStarted","Data":"6c25cdec977e5411a8bce6181c5ec10e4bb3c61f187db10abb3fa0dbf0be20a9"}
Jan 12 13:32:09 crc kubenswrapper[4580]: I0112 13:32:09.291776 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cbf5c7d-9220-43a8-9015-1c52d0c3855f" path="/var/lib/kubelet/pods/3cbf5c7d-9220-43a8-9015-1c52d0c3855f/volumes"
Jan 12 13:32:09 crc kubenswrapper[4580]: I0112 13:32:09.292450 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="702612c1-966a-4293-b0dc-05901a325794" path="/var/lib/kubelet/pods/702612c1-966a-4293-b0dc-05901a325794/volumes"
Jan 12 13:32:10 crc kubenswrapper[4580]: I0112 13:32:10.121971 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6cfvt" event={"ID":"340ac203-3af7-4abd-b75c-bf97009c24e9","Type":"ContainerStarted","Data":"9076c7a4937915a4a11cdc220fe9973329bdbcb88d9edf1e398c80acccd41329"}
Jan 12 13:32:10 crc kubenswrapper[4580]: I0112 13:32:10.145317 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6cfvt" podStartSLOduration=1.534804286 podStartE2EDuration="2.145305229s" podCreationTimestamp="2026-01-12 13:32:08 +0000 UTC" firstStartedPulling="2026-01-12 13:32:08.99231946 +0000 UTC m=+1528.036538150" lastFinishedPulling="2026-01-12 13:32:09.602820402 +0000 UTC m=+1528.647039093" observedRunningTime="2026-01-12 13:32:10.136077947 +0000 UTC m=+1529.180296637" watchObservedRunningTime="2026-01-12 13:32:10.145305229 +0000 UTC m=+1529.189523919"
Jan 12 13:32:17 crc kubenswrapper[4580]: I0112 13:32:17.141052 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-khfwl"
Jan 12 13:32:17 crc kubenswrapper[4580]: I0112 13:32:17.179040 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-khfwl"
Jan 12 13:32:17 crc kubenswrapper[4580]: I0112 13:32:17.371588 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-khfwl"]
Jan 12 13:32:18 crc kubenswrapper[4580]: I0112 13:32:18.181124 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-khfwl" podUID="2d38623c-2fbb-4d12-8ec0-a96a5c42fdc1" containerName="registry-server" containerID="cri-o://04ac75554e2007a18ef7b71c9d92b1f9d83a301e1061f11302624af02cb27410" gracePeriod=2
Jan 12 13:32:18 crc kubenswrapper[4580]: I0112 13:32:18.565673 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-khfwl"
Jan 12 13:32:18 crc kubenswrapper[4580]: I0112 13:32:18.700589 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62vbm\" (UniqueName: \"kubernetes.io/projected/2d38623c-2fbb-4d12-8ec0-a96a5c42fdc1-kube-api-access-62vbm\") pod \"2d38623c-2fbb-4d12-8ec0-a96a5c42fdc1\" (UID: \"2d38623c-2fbb-4d12-8ec0-a96a5c42fdc1\") "
Jan 12 13:32:18 crc kubenswrapper[4580]: I0112 13:32:18.700667 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d38623c-2fbb-4d12-8ec0-a96a5c42fdc1-utilities\") pod \"2d38623c-2fbb-4d12-8ec0-a96a5c42fdc1\" (UID: \"2d38623c-2fbb-4d12-8ec0-a96a5c42fdc1\") "
Jan 12 13:32:18 crc kubenswrapper[4580]: I0112 13:32:18.700918 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d38623c-2fbb-4d12-8ec0-a96a5c42fdc1-catalog-content\") pod \"2d38623c-2fbb-4d12-8ec0-a96a5c42fdc1\" (UID: \"2d38623c-2fbb-4d12-8ec0-a96a5c42fdc1\") "
Jan 12 13:32:18 crc kubenswrapper[4580]: I0112 13:32:18.701395 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d38623c-2fbb-4d12-8ec0-a96a5c42fdc1-utilities" (OuterVolumeSpecName: "utilities") pod "2d38623c-2fbb-4d12-8ec0-a96a5c42fdc1" (UID: "2d38623c-2fbb-4d12-8ec0-a96a5c42fdc1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 12 13:32:18 crc kubenswrapper[4580]: I0112 13:32:18.707019 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d38623c-2fbb-4d12-8ec0-a96a5c42fdc1-kube-api-access-62vbm" (OuterVolumeSpecName: "kube-api-access-62vbm") pod "2d38623c-2fbb-4d12-8ec0-a96a5c42fdc1" (UID: "2d38623c-2fbb-4d12-8ec0-a96a5c42fdc1"). InnerVolumeSpecName "kube-api-access-62vbm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 13:32:18 crc kubenswrapper[4580]: I0112 13:32:18.792635 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d38623c-2fbb-4d12-8ec0-a96a5c42fdc1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2d38623c-2fbb-4d12-8ec0-a96a5c42fdc1" (UID: "2d38623c-2fbb-4d12-8ec0-a96a5c42fdc1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 12 13:32:18 crc kubenswrapper[4580]: I0112 13:32:18.803114 4580 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d38623c-2fbb-4d12-8ec0-a96a5c42fdc1-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 12 13:32:18 crc kubenswrapper[4580]: I0112 13:32:18.803147 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62vbm\" (UniqueName: \"kubernetes.io/projected/2d38623c-2fbb-4d12-8ec0-a96a5c42fdc1-kube-api-access-62vbm\") on node \"crc\" DevicePath \"\""
Jan 12 13:32:18 crc kubenswrapper[4580]: I0112 13:32:18.803161 4580 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d38623c-2fbb-4d12-8ec0-a96a5c42fdc1-utilities\") on node \"crc\" DevicePath \"\""
Jan 12 13:32:19 crc kubenswrapper[4580]: I0112 13:32:19.190400 4580 generic.go:334] "Generic (PLEG): container finished" podID="2d38623c-2fbb-4d12-8ec0-a96a5c42fdc1" containerID="04ac75554e2007a18ef7b71c9d92b1f9d83a301e1061f11302624af02cb27410" exitCode=0
Jan 12 13:32:19 crc kubenswrapper[4580]: I0112 13:32:19.190461 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-khfwl" event={"ID":"2d38623c-2fbb-4d12-8ec0-a96a5c42fdc1","Type":"ContainerDied","Data":"04ac75554e2007a18ef7b71c9d92b1f9d83a301e1061f11302624af02cb27410"}
Jan 12 13:32:19 crc kubenswrapper[4580]: I0112 13:32:19.190475 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-khfwl"
Jan 12 13:32:19 crc kubenswrapper[4580]: I0112 13:32:19.190491 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-khfwl" event={"ID":"2d38623c-2fbb-4d12-8ec0-a96a5c42fdc1","Type":"ContainerDied","Data":"bc00a36be65097224e03daf28e48395b663a38eb68096aae5fb1cd24b503c73f"}
Jan 12 13:32:19 crc kubenswrapper[4580]: I0112 13:32:19.190526 4580 scope.go:117] "RemoveContainer" containerID="04ac75554e2007a18ef7b71c9d92b1f9d83a301e1061f11302624af02cb27410"
Jan 12 13:32:19 crc kubenswrapper[4580]: I0112 13:32:19.210452 4580 scope.go:117] "RemoveContainer" containerID="aae073662428c9d6a496f170f3d278537d25d133d5ee461af49d076c0e685a8d"
Jan 12 13:32:19 crc kubenswrapper[4580]: I0112 13:32:19.218892 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-khfwl"]
Jan 12 13:32:19 crc kubenswrapper[4580]: I0112 13:32:19.226267 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-khfwl"]
Jan 12 13:32:19 crc kubenswrapper[4580]: I0112 13:32:19.244026 4580 scope.go:117] "RemoveContainer" containerID="688e5f744a92864715248ff29a3b936b362ba28ba8ed34118d869635e95e7347"
Jan 12 13:32:19 crc kubenswrapper[4580]: I0112 13:32:19.267469 4580 scope.go:117] "RemoveContainer" containerID="04ac75554e2007a18ef7b71c9d92b1f9d83a301e1061f11302624af02cb27410"
Jan 12 13:32:19 crc kubenswrapper[4580]: E0112 13:32:19.267856 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04ac75554e2007a18ef7b71c9d92b1f9d83a301e1061f11302624af02cb27410\": container with ID starting with 04ac75554e2007a18ef7b71c9d92b1f9d83a301e1061f11302624af02cb27410 not found: ID does not exist" containerID="04ac75554e2007a18ef7b71c9d92b1f9d83a301e1061f11302624af02cb27410"
Jan 12 13:32:19 crc kubenswrapper[4580]: I0112 13:32:19.267915 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04ac75554e2007a18ef7b71c9d92b1f9d83a301e1061f11302624af02cb27410"} err="failed to get container status \"04ac75554e2007a18ef7b71c9d92b1f9d83a301e1061f11302624af02cb27410\": rpc error: code = NotFound desc = could not find container \"04ac75554e2007a18ef7b71c9d92b1f9d83a301e1061f11302624af02cb27410\": container with ID starting with 04ac75554e2007a18ef7b71c9d92b1f9d83a301e1061f11302624af02cb27410 not found: ID does not exist"
Jan 12 13:32:19 crc kubenswrapper[4580]: I0112 13:32:19.267962 4580 scope.go:117] "RemoveContainer" containerID="aae073662428c9d6a496f170f3d278537d25d133d5ee461af49d076c0e685a8d"
Jan 12 13:32:19 crc kubenswrapper[4580]: E0112 13:32:19.268385 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aae073662428c9d6a496f170f3d278537d25d133d5ee461af49d076c0e685a8d\": container with ID starting with aae073662428c9d6a496f170f3d278537d25d133d5ee461af49d076c0e685a8d not found: ID does not exist" containerID="aae073662428c9d6a496f170f3d278537d25d133d5ee461af49d076c0e685a8d"
Jan 12 13:32:19 crc kubenswrapper[4580]: I0112 13:32:19.268419 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aae073662428c9d6a496f170f3d278537d25d133d5ee461af49d076c0e685a8d"} err="failed to get container status \"aae073662428c9d6a496f170f3d278537d25d133d5ee461af49d076c0e685a8d\": rpc error: code = NotFound desc = could not find container \"aae073662428c9d6a496f170f3d278537d25d133d5ee461af49d076c0e685a8d\": container with ID starting with aae073662428c9d6a496f170f3d278537d25d133d5ee461af49d076c0e685a8d not found: ID does not exist"
Jan 12 13:32:19 crc kubenswrapper[4580]: I0112 13:32:19.268440 4580 scope.go:117] "RemoveContainer" containerID="688e5f744a92864715248ff29a3b936b362ba28ba8ed34118d869635e95e7347"
Jan 12 13:32:19 crc kubenswrapper[4580]: E0112 13:32:19.268753 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"688e5f744a92864715248ff29a3b936b362ba28ba8ed34118d869635e95e7347\": container with ID starting with 688e5f744a92864715248ff29a3b936b362ba28ba8ed34118d869635e95e7347 not found: ID does not exist" containerID="688e5f744a92864715248ff29a3b936b362ba28ba8ed34118d869635e95e7347"
Jan 12 13:32:19 crc kubenswrapper[4580]: I0112 13:32:19.268779 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"688e5f744a92864715248ff29a3b936b362ba28ba8ed34118d869635e95e7347"} err="failed to get container status \"688e5f744a92864715248ff29a3b936b362ba28ba8ed34118d869635e95e7347\": rpc error: code = NotFound desc = could not find container \"688e5f744a92864715248ff29a3b936b362ba28ba8ed34118d869635e95e7347\": container with ID starting with 688e5f744a92864715248ff29a3b936b362ba28ba8ed34118d869635e95e7347 not found: ID does not exist"
Jan 12 13:32:19 crc kubenswrapper[4580]: I0112 13:32:19.292448 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d38623c-2fbb-4d12-8ec0-a96a5c42fdc1" path="/var/lib/kubelet/pods/2d38623c-2fbb-4d12-8ec0-a96a5c42fdc1/volumes"
Jan 12 13:32:35 crc kubenswrapper[4580]: I0112 13:32:35.296564 4580 generic.go:334] "Generic (PLEG): container finished" podID="340ac203-3af7-4abd-b75c-bf97009c24e9" containerID="9076c7a4937915a4a11cdc220fe9973329bdbcb88d9edf1e398c80acccd41329" exitCode=0
Jan 12 13:32:35 crc kubenswrapper[4580]: I0112 13:32:35.296645 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6cfvt" event={"ID":"340ac203-3af7-4abd-b75c-bf97009c24e9","Type":"ContainerDied","Data":"9076c7a4937915a4a11cdc220fe9973329bdbcb88d9edf1e398c80acccd41329"}
Jan 12 13:32:36 crc kubenswrapper[4580]: I0112 13:32:36.642184 4580 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6cfvt" Jan 12 13:32:36 crc kubenswrapper[4580]: I0112 13:32:36.831429 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/340ac203-3af7-4abd-b75c-bf97009c24e9-ssh-key-openstack-edpm-ipam\") pod \"340ac203-3af7-4abd-b75c-bf97009c24e9\" (UID: \"340ac203-3af7-4abd-b75c-bf97009c24e9\") " Jan 12 13:32:36 crc kubenswrapper[4580]: I0112 13:32:36.831589 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/340ac203-3af7-4abd-b75c-bf97009c24e9-inventory\") pod \"340ac203-3af7-4abd-b75c-bf97009c24e9\" (UID: \"340ac203-3af7-4abd-b75c-bf97009c24e9\") " Jan 12 13:32:36 crc kubenswrapper[4580]: I0112 13:32:36.831682 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c42w2\" (UniqueName: \"kubernetes.io/projected/340ac203-3af7-4abd-b75c-bf97009c24e9-kube-api-access-c42w2\") pod \"340ac203-3af7-4abd-b75c-bf97009c24e9\" (UID: \"340ac203-3af7-4abd-b75c-bf97009c24e9\") " Jan 12 13:32:36 crc kubenswrapper[4580]: I0112 13:32:36.837372 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/340ac203-3af7-4abd-b75c-bf97009c24e9-kube-api-access-c42w2" (OuterVolumeSpecName: "kube-api-access-c42w2") pod "340ac203-3af7-4abd-b75c-bf97009c24e9" (UID: "340ac203-3af7-4abd-b75c-bf97009c24e9"). InnerVolumeSpecName "kube-api-access-c42w2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:32:36 crc kubenswrapper[4580]: I0112 13:32:36.854759 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/340ac203-3af7-4abd-b75c-bf97009c24e9-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "340ac203-3af7-4abd-b75c-bf97009c24e9" (UID: "340ac203-3af7-4abd-b75c-bf97009c24e9"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:32:36 crc kubenswrapper[4580]: I0112 13:32:36.855394 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/340ac203-3af7-4abd-b75c-bf97009c24e9-inventory" (OuterVolumeSpecName: "inventory") pod "340ac203-3af7-4abd-b75c-bf97009c24e9" (UID: "340ac203-3af7-4abd-b75c-bf97009c24e9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:32:36 crc kubenswrapper[4580]: I0112 13:32:36.933473 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c42w2\" (UniqueName: \"kubernetes.io/projected/340ac203-3af7-4abd-b75c-bf97009c24e9-kube-api-access-c42w2\") on node \"crc\" DevicePath \"\"" Jan 12 13:32:36 crc kubenswrapper[4580]: I0112 13:32:36.933503 4580 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/340ac203-3af7-4abd-b75c-bf97009c24e9-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 12 13:32:36 crc kubenswrapper[4580]: I0112 13:32:36.933513 4580 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/340ac203-3af7-4abd-b75c-bf97009c24e9-inventory\") on node \"crc\" DevicePath \"\"" Jan 12 13:32:37 crc kubenswrapper[4580]: I0112 13:32:37.313038 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6cfvt" 
event={"ID":"340ac203-3af7-4abd-b75c-bf97009c24e9","Type":"ContainerDied","Data":"6c25cdec977e5411a8bce6181c5ec10e4bb3c61f187db10abb3fa0dbf0be20a9"} Jan 12 13:32:37 crc kubenswrapper[4580]: I0112 13:32:37.313074 4580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c25cdec977e5411a8bce6181c5ec10e4bb3c61f187db10abb3fa0dbf0be20a9" Jan 12 13:32:37 crc kubenswrapper[4580]: I0112 13:32:37.313059 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6cfvt" Jan 12 13:32:37 crc kubenswrapper[4580]: I0112 13:32:37.364701 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c6jhw"] Jan 12 13:32:37 crc kubenswrapper[4580]: E0112 13:32:37.365205 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d38623c-2fbb-4d12-8ec0-a96a5c42fdc1" containerName="extract-utilities" Jan 12 13:32:37 crc kubenswrapper[4580]: I0112 13:32:37.365225 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d38623c-2fbb-4d12-8ec0-a96a5c42fdc1" containerName="extract-utilities" Jan 12 13:32:37 crc kubenswrapper[4580]: E0112 13:32:37.365237 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="340ac203-3af7-4abd-b75c-bf97009c24e9" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 12 13:32:37 crc kubenswrapper[4580]: I0112 13:32:37.365245 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="340ac203-3af7-4abd-b75c-bf97009c24e9" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 12 13:32:37 crc kubenswrapper[4580]: E0112 13:32:37.365256 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d38623c-2fbb-4d12-8ec0-a96a5c42fdc1" containerName="extract-content" Jan 12 13:32:37 crc kubenswrapper[4580]: I0112 13:32:37.365263 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d38623c-2fbb-4d12-8ec0-a96a5c42fdc1" 
containerName="extract-content" Jan 12 13:32:37 crc kubenswrapper[4580]: E0112 13:32:37.365280 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d38623c-2fbb-4d12-8ec0-a96a5c42fdc1" containerName="registry-server" Jan 12 13:32:37 crc kubenswrapper[4580]: I0112 13:32:37.365286 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d38623c-2fbb-4d12-8ec0-a96a5c42fdc1" containerName="registry-server" Jan 12 13:32:37 crc kubenswrapper[4580]: I0112 13:32:37.365467 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d38623c-2fbb-4d12-8ec0-a96a5c42fdc1" containerName="registry-server" Jan 12 13:32:37 crc kubenswrapper[4580]: I0112 13:32:37.365487 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="340ac203-3af7-4abd-b75c-bf97009c24e9" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 12 13:32:37 crc kubenswrapper[4580]: I0112 13:32:37.366084 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c6jhw" Jan 12 13:32:37 crc kubenswrapper[4580]: I0112 13:32:37.369201 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 12 13:32:37 crc kubenswrapper[4580]: I0112 13:32:37.370027 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hm8xh" Jan 12 13:32:37 crc kubenswrapper[4580]: I0112 13:32:37.370231 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 12 13:32:37 crc kubenswrapper[4580]: I0112 13:32:37.370497 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 12 13:32:37 crc kubenswrapper[4580]: I0112 13:32:37.373809 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c6jhw"] Jan 12 
13:32:37 crc kubenswrapper[4580]: I0112 13:32:37.443192 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwj9b\" (UniqueName: \"kubernetes.io/projected/bd995c62-9850-41cf-91c1-aa47ac294147-kube-api-access-xwj9b\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c6jhw\" (UID: \"bd995c62-9850-41cf-91c1-aa47ac294147\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c6jhw" Jan 12 13:32:37 crc kubenswrapper[4580]: I0112 13:32:37.443415 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bd995c62-9850-41cf-91c1-aa47ac294147-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c6jhw\" (UID: \"bd995c62-9850-41cf-91c1-aa47ac294147\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c6jhw" Jan 12 13:32:37 crc kubenswrapper[4580]: I0112 13:32:37.443456 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bd995c62-9850-41cf-91c1-aa47ac294147-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c6jhw\" (UID: \"bd995c62-9850-41cf-91c1-aa47ac294147\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c6jhw" Jan 12 13:32:37 crc kubenswrapper[4580]: I0112 13:32:37.544019 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwj9b\" (UniqueName: \"kubernetes.io/projected/bd995c62-9850-41cf-91c1-aa47ac294147-kube-api-access-xwj9b\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c6jhw\" (UID: \"bd995c62-9850-41cf-91c1-aa47ac294147\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c6jhw" Jan 12 13:32:37 crc kubenswrapper[4580]: I0112 13:32:37.544172 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"inventory\" (UniqueName: \"kubernetes.io/secret/bd995c62-9850-41cf-91c1-aa47ac294147-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c6jhw\" (UID: \"bd995c62-9850-41cf-91c1-aa47ac294147\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c6jhw" Jan 12 13:32:37 crc kubenswrapper[4580]: I0112 13:32:37.544202 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bd995c62-9850-41cf-91c1-aa47ac294147-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c6jhw\" (UID: \"bd995c62-9850-41cf-91c1-aa47ac294147\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c6jhw" Jan 12 13:32:37 crc kubenswrapper[4580]: I0112 13:32:37.548052 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bd995c62-9850-41cf-91c1-aa47ac294147-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c6jhw\" (UID: \"bd995c62-9850-41cf-91c1-aa47ac294147\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c6jhw" Jan 12 13:32:37 crc kubenswrapper[4580]: I0112 13:32:37.548335 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bd995c62-9850-41cf-91c1-aa47ac294147-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c6jhw\" (UID: \"bd995c62-9850-41cf-91c1-aa47ac294147\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c6jhw" Jan 12 13:32:37 crc kubenswrapper[4580]: I0112 13:32:37.557389 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwj9b\" (UniqueName: \"kubernetes.io/projected/bd995c62-9850-41cf-91c1-aa47ac294147-kube-api-access-xwj9b\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c6jhw\" (UID: \"bd995c62-9850-41cf-91c1-aa47ac294147\") " 
pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c6jhw" Jan 12 13:32:37 crc kubenswrapper[4580]: I0112 13:32:37.679357 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c6jhw" Jan 12 13:32:38 crc kubenswrapper[4580]: I0112 13:32:38.192287 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c6jhw"] Jan 12 13:32:38 crc kubenswrapper[4580]: I0112 13:32:38.322111 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c6jhw" event={"ID":"bd995c62-9850-41cf-91c1-aa47ac294147","Type":"ContainerStarted","Data":"1d7998c14355ee575815911f9294a025090c3368590924b78b2a7635501d86c4"} Jan 12 13:32:39 crc kubenswrapper[4580]: I0112 13:32:39.330699 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c6jhw" event={"ID":"bd995c62-9850-41cf-91c1-aa47ac294147","Type":"ContainerStarted","Data":"d176ba527a1053833749205100cd955dca9653c16cfdb8ce9be4f225ace24fb8"} Jan 12 13:32:39 crc kubenswrapper[4580]: I0112 13:32:39.346384 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c6jhw" podStartSLOduration=1.816970996 podStartE2EDuration="2.346369768s" podCreationTimestamp="2026-01-12 13:32:37 +0000 UTC" firstStartedPulling="2026-01-12 13:32:38.200251835 +0000 UTC m=+1557.244470525" lastFinishedPulling="2026-01-12 13:32:38.729650608 +0000 UTC m=+1557.773869297" observedRunningTime="2026-01-12 13:32:39.339951304 +0000 UTC m=+1558.384169993" watchObservedRunningTime="2026-01-12 13:32:39.346369768 +0000 UTC m=+1558.390588457" Jan 12 13:32:55 crc kubenswrapper[4580]: I0112 13:32:55.659908 4580 scope.go:117] "RemoveContainer" containerID="f22816032ca3f146f987bb95a8cf9b28010210c2f7936eb1ca2f8f7a56a04d49" Jan 12 13:32:55 
crc kubenswrapper[4580]: I0112 13:32:55.694258 4580 scope.go:117] "RemoveContainer" containerID="2acfe507773f35d2c2bfcd63687b7c20dec48bb7a0b779a36b7881a1fd8cd444" Jan 12 13:32:55 crc kubenswrapper[4580]: I0112 13:32:55.741334 4580 scope.go:117] "RemoveContainer" containerID="698cdb1aaa8b6c445236171e6a6b8117e4da8fae97df9c89d16885470d435ad6" Jan 12 13:33:04 crc kubenswrapper[4580]: I0112 13:33:04.035212 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-gdmnx"] Jan 12 13:33:04 crc kubenswrapper[4580]: I0112 13:33:04.040828 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-6wrc4"] Jan 12 13:33:04 crc kubenswrapper[4580]: I0112 13:33:04.045971 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-gdmnx"] Jan 12 13:33:04 crc kubenswrapper[4580]: I0112 13:33:04.051650 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-6wrc4"] Jan 12 13:33:04 crc kubenswrapper[4580]: I0112 13:33:04.057830 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-c19d-account-create-update-bz5j6"] Jan 12 13:33:04 crc kubenswrapper[4580]: I0112 13:33:04.062820 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-2b89-account-create-update-844zz"] Jan 12 13:33:04 crc kubenswrapper[4580]: I0112 13:33:04.069871 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-kzsjf"] Jan 12 13:33:04 crc kubenswrapper[4580]: I0112 13:33:04.077491 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-1d39-account-create-update-dd8gr"] Jan 12 13:33:04 crc kubenswrapper[4580]: I0112 13:33:04.082976 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-c19d-account-create-update-bz5j6"] Jan 12 13:33:04 crc kubenswrapper[4580]: I0112 13:33:04.087702 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-api-1d39-account-create-update-dd8gr"] Jan 12 13:33:04 crc kubenswrapper[4580]: I0112 13:33:04.092631 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-kzsjf"] Jan 12 13:33:04 crc kubenswrapper[4580]: I0112 13:33:04.097142 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-2b89-account-create-update-844zz"] Jan 12 13:33:05 crc kubenswrapper[4580]: I0112 13:33:05.290376 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c9631d5-0489-43f6-a144-5869fc41f5ba" path="/var/lib/kubelet/pods/8c9631d5-0489-43f6-a144-5869fc41f5ba/volumes" Jan 12 13:33:05 crc kubenswrapper[4580]: I0112 13:33:05.291199 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5680228-c2e3-437e-b28e-bfd73513f81d" path="/var/lib/kubelet/pods/a5680228-c2e3-437e-b28e-bfd73513f81d/volumes" Jan 12 13:33:05 crc kubenswrapper[4580]: I0112 13:33:05.291798 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2d26c30-1a0e-4ab2-af0a-84ca0f4280f0" path="/var/lib/kubelet/pods/b2d26c30-1a0e-4ab2-af0a-84ca0f4280f0/volumes" Jan 12 13:33:05 crc kubenswrapper[4580]: I0112 13:33:05.292413 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="caab40d7-1666-4a92-ba59-a9241ce91657" path="/var/lib/kubelet/pods/caab40d7-1666-4a92-ba59-a9241ce91657/volumes" Jan 12 13:33:05 crc kubenswrapper[4580]: I0112 13:33:05.293507 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddd386d1-8b6e-4834-8ea5-74fe561d15f6" path="/var/lib/kubelet/pods/ddd386d1-8b6e-4834-8ea5-74fe561d15f6/volumes" Jan 12 13:33:05 crc kubenswrapper[4580]: I0112 13:33:05.294037 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="feda8d3e-afea-4925-b154-4f13512cae76" path="/var/lib/kubelet/pods/feda8d3e-afea-4925-b154-4f13512cae76/volumes" Jan 12 13:33:14 crc kubenswrapper[4580]: I0112 13:33:14.607687 4580 generic.go:334] "Generic 
(PLEG): container finished" podID="bd995c62-9850-41cf-91c1-aa47ac294147" containerID="d176ba527a1053833749205100cd955dca9653c16cfdb8ce9be4f225ace24fb8" exitCode=0 Jan 12 13:33:14 crc kubenswrapper[4580]: I0112 13:33:14.607776 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c6jhw" event={"ID":"bd995c62-9850-41cf-91c1-aa47ac294147","Type":"ContainerDied","Data":"d176ba527a1053833749205100cd955dca9653c16cfdb8ce9be4f225ace24fb8"} Jan 12 13:33:15 crc kubenswrapper[4580]: I0112 13:33:15.499723 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gv9f9"] Jan 12 13:33:15 crc kubenswrapper[4580]: I0112 13:33:15.507977 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gv9f9" Jan 12 13:33:15 crc kubenswrapper[4580]: I0112 13:33:15.509699 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gv9f9"] Jan 12 13:33:15 crc kubenswrapper[4580]: I0112 13:33:15.637704 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vg9vj\" (UniqueName: \"kubernetes.io/projected/0123f59b-54c3-4dae-8d99-b2abf2e89267-kube-api-access-vg9vj\") pod \"community-operators-gv9f9\" (UID: \"0123f59b-54c3-4dae-8d99-b2abf2e89267\") " pod="openshift-marketplace/community-operators-gv9f9" Jan 12 13:33:15 crc kubenswrapper[4580]: I0112 13:33:15.637765 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0123f59b-54c3-4dae-8d99-b2abf2e89267-catalog-content\") pod \"community-operators-gv9f9\" (UID: \"0123f59b-54c3-4dae-8d99-b2abf2e89267\") " pod="openshift-marketplace/community-operators-gv9f9" Jan 12 13:33:15 crc kubenswrapper[4580]: I0112 13:33:15.637903 4580 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0123f59b-54c3-4dae-8d99-b2abf2e89267-utilities\") pod \"community-operators-gv9f9\" (UID: \"0123f59b-54c3-4dae-8d99-b2abf2e89267\") " pod="openshift-marketplace/community-operators-gv9f9" Jan 12 13:33:15 crc kubenswrapper[4580]: I0112 13:33:15.739476 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vg9vj\" (UniqueName: \"kubernetes.io/projected/0123f59b-54c3-4dae-8d99-b2abf2e89267-kube-api-access-vg9vj\") pod \"community-operators-gv9f9\" (UID: \"0123f59b-54c3-4dae-8d99-b2abf2e89267\") " pod="openshift-marketplace/community-operators-gv9f9" Jan 12 13:33:15 crc kubenswrapper[4580]: I0112 13:33:15.739530 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0123f59b-54c3-4dae-8d99-b2abf2e89267-catalog-content\") pod \"community-operators-gv9f9\" (UID: \"0123f59b-54c3-4dae-8d99-b2abf2e89267\") " pod="openshift-marketplace/community-operators-gv9f9" Jan 12 13:33:15 crc kubenswrapper[4580]: I0112 13:33:15.739638 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0123f59b-54c3-4dae-8d99-b2abf2e89267-utilities\") pod \"community-operators-gv9f9\" (UID: \"0123f59b-54c3-4dae-8d99-b2abf2e89267\") " pod="openshift-marketplace/community-operators-gv9f9" Jan 12 13:33:15 crc kubenswrapper[4580]: I0112 13:33:15.740076 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0123f59b-54c3-4dae-8d99-b2abf2e89267-catalog-content\") pod \"community-operators-gv9f9\" (UID: \"0123f59b-54c3-4dae-8d99-b2abf2e89267\") " pod="openshift-marketplace/community-operators-gv9f9" Jan 12 13:33:15 crc kubenswrapper[4580]: I0112 13:33:15.740128 4580 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0123f59b-54c3-4dae-8d99-b2abf2e89267-utilities\") pod \"community-operators-gv9f9\" (UID: \"0123f59b-54c3-4dae-8d99-b2abf2e89267\") " pod="openshift-marketplace/community-operators-gv9f9" Jan 12 13:33:15 crc kubenswrapper[4580]: I0112 13:33:15.759907 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vg9vj\" (UniqueName: \"kubernetes.io/projected/0123f59b-54c3-4dae-8d99-b2abf2e89267-kube-api-access-vg9vj\") pod \"community-operators-gv9f9\" (UID: \"0123f59b-54c3-4dae-8d99-b2abf2e89267\") " pod="openshift-marketplace/community-operators-gv9f9" Jan 12 13:33:15 crc kubenswrapper[4580]: I0112 13:33:15.848390 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gv9f9" Jan 12 13:33:16 crc kubenswrapper[4580]: I0112 13:33:16.038924 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c6jhw" Jan 12 13:33:16 crc kubenswrapper[4580]: I0112 13:33:16.148151 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bd995c62-9850-41cf-91c1-aa47ac294147-ssh-key-openstack-edpm-ipam\") pod \"bd995c62-9850-41cf-91c1-aa47ac294147\" (UID: \"bd995c62-9850-41cf-91c1-aa47ac294147\") " Jan 12 13:33:16 crc kubenswrapper[4580]: I0112 13:33:16.148523 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bd995c62-9850-41cf-91c1-aa47ac294147-inventory\") pod \"bd995c62-9850-41cf-91c1-aa47ac294147\" (UID: \"bd995c62-9850-41cf-91c1-aa47ac294147\") " Jan 12 13:33:16 crc kubenswrapper[4580]: I0112 13:33:16.148698 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwj9b\" (UniqueName: 
\"kubernetes.io/projected/bd995c62-9850-41cf-91c1-aa47ac294147-kube-api-access-xwj9b\") pod \"bd995c62-9850-41cf-91c1-aa47ac294147\" (UID: \"bd995c62-9850-41cf-91c1-aa47ac294147\") " Jan 12 13:33:16 crc kubenswrapper[4580]: I0112 13:33:16.154196 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd995c62-9850-41cf-91c1-aa47ac294147-kube-api-access-xwj9b" (OuterVolumeSpecName: "kube-api-access-xwj9b") pod "bd995c62-9850-41cf-91c1-aa47ac294147" (UID: "bd995c62-9850-41cf-91c1-aa47ac294147"). InnerVolumeSpecName "kube-api-access-xwj9b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:33:16 crc kubenswrapper[4580]: I0112 13:33:16.172260 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd995c62-9850-41cf-91c1-aa47ac294147-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "bd995c62-9850-41cf-91c1-aa47ac294147" (UID: "bd995c62-9850-41cf-91c1-aa47ac294147"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:33:16 crc kubenswrapper[4580]: I0112 13:33:16.172531 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd995c62-9850-41cf-91c1-aa47ac294147-inventory" (OuterVolumeSpecName: "inventory") pod "bd995c62-9850-41cf-91c1-aa47ac294147" (UID: "bd995c62-9850-41cf-91c1-aa47ac294147"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:33:16 crc kubenswrapper[4580]: I0112 13:33:16.251739 4580 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bd995c62-9850-41cf-91c1-aa47ac294147-inventory\") on node \"crc\" DevicePath \"\"" Jan 12 13:33:16 crc kubenswrapper[4580]: I0112 13:33:16.251776 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwj9b\" (UniqueName: \"kubernetes.io/projected/bd995c62-9850-41cf-91c1-aa47ac294147-kube-api-access-xwj9b\") on node \"crc\" DevicePath \"\"" Jan 12 13:33:16 crc kubenswrapper[4580]: I0112 13:33:16.251789 4580 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bd995c62-9850-41cf-91c1-aa47ac294147-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 12 13:33:16 crc kubenswrapper[4580]: I0112 13:33:16.338646 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gv9f9"] Jan 12 13:33:16 crc kubenswrapper[4580]: I0112 13:33:16.626938 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c6jhw" event={"ID":"bd995c62-9850-41cf-91c1-aa47ac294147","Type":"ContainerDied","Data":"1d7998c14355ee575815911f9294a025090c3368590924b78b2a7635501d86c4"} Jan 12 13:33:16 crc kubenswrapper[4580]: I0112 13:33:16.627251 4580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d7998c14355ee575815911f9294a025090c3368590924b78b2a7635501d86c4" Jan 12 13:33:16 crc kubenswrapper[4580]: I0112 13:33:16.627024 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c6jhw" Jan 12 13:33:16 crc kubenswrapper[4580]: I0112 13:33:16.629767 4580 generic.go:334] "Generic (PLEG): container finished" podID="0123f59b-54c3-4dae-8d99-b2abf2e89267" containerID="c0cffd20e648f82c3cd821067551c161a147194d704a35c94d600ee9d7e3e732" exitCode=0 Jan 12 13:33:16 crc kubenswrapper[4580]: I0112 13:33:16.629833 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gv9f9" event={"ID":"0123f59b-54c3-4dae-8d99-b2abf2e89267","Type":"ContainerDied","Data":"c0cffd20e648f82c3cd821067551c161a147194d704a35c94d600ee9d7e3e732"} Jan 12 13:33:16 crc kubenswrapper[4580]: I0112 13:33:16.629932 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gv9f9" event={"ID":"0123f59b-54c3-4dae-8d99-b2abf2e89267","Type":"ContainerStarted","Data":"46b27efc4dfd0a5c7bbd3587da8613dd92b09f1fcc0d196c26de07c18661680a"} Jan 12 13:33:16 crc kubenswrapper[4580]: I0112 13:33:16.700413 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-9cvjw"] Jan 12 13:33:16 crc kubenswrapper[4580]: E0112 13:33:16.700771 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd995c62-9850-41cf-91c1-aa47ac294147" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 12 13:33:16 crc kubenswrapper[4580]: I0112 13:33:16.700784 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd995c62-9850-41cf-91c1-aa47ac294147" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 12 13:33:16 crc kubenswrapper[4580]: I0112 13:33:16.700982 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd995c62-9850-41cf-91c1-aa47ac294147" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 12 13:33:16 crc kubenswrapper[4580]: I0112 13:33:16.701603 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-9cvjw" Jan 12 13:33:16 crc kubenswrapper[4580]: I0112 13:33:16.703382 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hm8xh" Jan 12 13:33:16 crc kubenswrapper[4580]: I0112 13:33:16.703419 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 12 13:33:16 crc kubenswrapper[4580]: I0112 13:33:16.703448 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 12 13:33:16 crc kubenswrapper[4580]: I0112 13:33:16.703630 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 12 13:33:16 crc kubenswrapper[4580]: I0112 13:33:16.708881 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-9cvjw"] Jan 12 13:33:16 crc kubenswrapper[4580]: I0112 13:33:16.865507 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/212e2cae-eea9-4f9c-a1f0-87708f00ab9a-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-9cvjw\" (UID: \"212e2cae-eea9-4f9c-a1f0-87708f00ab9a\") " pod="openstack/ssh-known-hosts-edpm-deployment-9cvjw" Jan 12 13:33:16 crc kubenswrapper[4580]: I0112 13:33:16.865857 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx7g4\" (UniqueName: \"kubernetes.io/projected/212e2cae-eea9-4f9c-a1f0-87708f00ab9a-kube-api-access-xx7g4\") pod \"ssh-known-hosts-edpm-deployment-9cvjw\" (UID: \"212e2cae-eea9-4f9c-a1f0-87708f00ab9a\") " pod="openstack/ssh-known-hosts-edpm-deployment-9cvjw" Jan 12 13:33:16 crc kubenswrapper[4580]: I0112 13:33:16.866423 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/212e2cae-eea9-4f9c-a1f0-87708f00ab9a-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-9cvjw\" (UID: \"212e2cae-eea9-4f9c-a1f0-87708f00ab9a\") " pod="openstack/ssh-known-hosts-edpm-deployment-9cvjw" Jan 12 13:33:16 crc kubenswrapper[4580]: I0112 13:33:16.968611 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/212e2cae-eea9-4f9c-a1f0-87708f00ab9a-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-9cvjw\" (UID: \"212e2cae-eea9-4f9c-a1f0-87708f00ab9a\") " pod="openstack/ssh-known-hosts-edpm-deployment-9cvjw" Jan 12 13:33:16 crc kubenswrapper[4580]: I0112 13:33:16.968691 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/212e2cae-eea9-4f9c-a1f0-87708f00ab9a-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-9cvjw\" (UID: \"212e2cae-eea9-4f9c-a1f0-87708f00ab9a\") " pod="openstack/ssh-known-hosts-edpm-deployment-9cvjw" Jan 12 13:33:16 crc kubenswrapper[4580]: I0112 13:33:16.968785 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xx7g4\" (UniqueName: \"kubernetes.io/projected/212e2cae-eea9-4f9c-a1f0-87708f00ab9a-kube-api-access-xx7g4\") pod \"ssh-known-hosts-edpm-deployment-9cvjw\" (UID: \"212e2cae-eea9-4f9c-a1f0-87708f00ab9a\") " pod="openstack/ssh-known-hosts-edpm-deployment-9cvjw" Jan 12 13:33:16 crc kubenswrapper[4580]: I0112 13:33:16.974340 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/212e2cae-eea9-4f9c-a1f0-87708f00ab9a-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-9cvjw\" (UID: \"212e2cae-eea9-4f9c-a1f0-87708f00ab9a\") " pod="openstack/ssh-known-hosts-edpm-deployment-9cvjw" Jan 12 13:33:16 crc kubenswrapper[4580]: I0112 13:33:16.974370 4580 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/212e2cae-eea9-4f9c-a1f0-87708f00ab9a-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-9cvjw\" (UID: \"212e2cae-eea9-4f9c-a1f0-87708f00ab9a\") " pod="openstack/ssh-known-hosts-edpm-deployment-9cvjw" Jan 12 13:33:16 crc kubenswrapper[4580]: I0112 13:33:16.986703 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xx7g4\" (UniqueName: \"kubernetes.io/projected/212e2cae-eea9-4f9c-a1f0-87708f00ab9a-kube-api-access-xx7g4\") pod \"ssh-known-hosts-edpm-deployment-9cvjw\" (UID: \"212e2cae-eea9-4f9c-a1f0-87708f00ab9a\") " pod="openstack/ssh-known-hosts-edpm-deployment-9cvjw" Jan 12 13:33:17 crc kubenswrapper[4580]: I0112 13:33:17.015991 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-9cvjw" Jan 12 13:33:17 crc kubenswrapper[4580]: I0112 13:33:17.483672 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-9cvjw"] Jan 12 13:33:17 crc kubenswrapper[4580]: W0112 13:33:17.488114 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod212e2cae_eea9_4f9c_a1f0_87708f00ab9a.slice/crio-9b14c5d8831f996ab484cea24fe6a80d9bce0fa3af653d60ff5f9edab39c6eae WatchSource:0}: Error finding container 9b14c5d8831f996ab484cea24fe6a80d9bce0fa3af653d60ff5f9edab39c6eae: Status 404 returned error can't find the container with id 9b14c5d8831f996ab484cea24fe6a80d9bce0fa3af653d60ff5f9edab39c6eae Jan 12 13:33:17 crc kubenswrapper[4580]: I0112 13:33:17.640811 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gv9f9" event={"ID":"0123f59b-54c3-4dae-8d99-b2abf2e89267","Type":"ContainerStarted","Data":"80607be19ed073dadc0f7e810a31999b7005d1d056f022fc09b6ac989a4f36ed"} 
Jan 12 13:33:17 crc kubenswrapper[4580]: I0112 13:33:17.643602 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-9cvjw" event={"ID":"212e2cae-eea9-4f9c-a1f0-87708f00ab9a","Type":"ContainerStarted","Data":"9b14c5d8831f996ab484cea24fe6a80d9bce0fa3af653d60ff5f9edab39c6eae"} Jan 12 13:33:18 crc kubenswrapper[4580]: I0112 13:33:18.671211 4580 generic.go:334] "Generic (PLEG): container finished" podID="0123f59b-54c3-4dae-8d99-b2abf2e89267" containerID="80607be19ed073dadc0f7e810a31999b7005d1d056f022fc09b6ac989a4f36ed" exitCode=0 Jan 12 13:33:18 crc kubenswrapper[4580]: I0112 13:33:18.671330 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gv9f9" event={"ID":"0123f59b-54c3-4dae-8d99-b2abf2e89267","Type":"ContainerDied","Data":"80607be19ed073dadc0f7e810a31999b7005d1d056f022fc09b6ac989a4f36ed"} Jan 12 13:33:18 crc kubenswrapper[4580]: I0112 13:33:18.675207 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-9cvjw" event={"ID":"212e2cae-eea9-4f9c-a1f0-87708f00ab9a","Type":"ContainerStarted","Data":"bf2a9256495469f435c9572b9a6efff9d833d6805cde69bb57b47ef175360276"} Jan 12 13:33:18 crc kubenswrapper[4580]: I0112 13:33:18.716792 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-9cvjw" podStartSLOduration=2.146437372 podStartE2EDuration="2.716756803s" podCreationTimestamp="2026-01-12 13:33:16 +0000 UTC" firstStartedPulling="2026-01-12 13:33:17.490235102 +0000 UTC m=+1596.534453793" lastFinishedPulling="2026-01-12 13:33:18.060554534 +0000 UTC m=+1597.104773224" observedRunningTime="2026-01-12 13:33:18.708506074 +0000 UTC m=+1597.752724765" watchObservedRunningTime="2026-01-12 13:33:18.716756803 +0000 UTC m=+1597.760975482" Jan 12 13:33:19 crc kubenswrapper[4580]: I0112 13:33:19.686904 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-gv9f9" event={"ID":"0123f59b-54c3-4dae-8d99-b2abf2e89267","Type":"ContainerStarted","Data":"24ca36ce08e960b9f566bc2fa8e555a0da66c6ee213a65463b4fd28b9dc7c56f"} Jan 12 13:33:19 crc kubenswrapper[4580]: I0112 13:33:19.707918 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gv9f9" podStartSLOduration=1.953438096 podStartE2EDuration="4.707897966s" podCreationTimestamp="2026-01-12 13:33:15 +0000 UTC" firstStartedPulling="2026-01-12 13:33:16.63219989 +0000 UTC m=+1595.676418570" lastFinishedPulling="2026-01-12 13:33:19.38665975 +0000 UTC m=+1598.430878440" observedRunningTime="2026-01-12 13:33:19.705071694 +0000 UTC m=+1598.749290383" watchObservedRunningTime="2026-01-12 13:33:19.707897966 +0000 UTC m=+1598.752116656" Jan 12 13:33:22 crc kubenswrapper[4580]: I0112 13:33:22.056736 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-4vq8g"] Jan 12 13:33:22 crc kubenswrapper[4580]: I0112 13:33:22.063457 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-4vq8g"] Jan 12 13:33:23 crc kubenswrapper[4580]: I0112 13:33:23.293724 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed8ee3a7-b81f-4ff7-9647-2eb207531a43" path="/var/lib/kubelet/pods/ed8ee3a7-b81f-4ff7-9647-2eb207531a43/volumes" Jan 12 13:33:23 crc kubenswrapper[4580]: I0112 13:33:23.745515 4580 generic.go:334] "Generic (PLEG): container finished" podID="212e2cae-eea9-4f9c-a1f0-87708f00ab9a" containerID="bf2a9256495469f435c9572b9a6efff9d833d6805cde69bb57b47ef175360276" exitCode=0 Jan 12 13:33:23 crc kubenswrapper[4580]: I0112 13:33:23.745590 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-9cvjw" 
event={"ID":"212e2cae-eea9-4f9c-a1f0-87708f00ab9a","Type":"ContainerDied","Data":"bf2a9256495469f435c9572b9a6efff9d833d6805cde69bb57b47ef175360276"} Jan 12 13:33:25 crc kubenswrapper[4580]: I0112 13:33:25.080875 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-9cvjw" Jan 12 13:33:25 crc kubenswrapper[4580]: I0112 13:33:25.254647 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/212e2cae-eea9-4f9c-a1f0-87708f00ab9a-ssh-key-openstack-edpm-ipam\") pod \"212e2cae-eea9-4f9c-a1f0-87708f00ab9a\" (UID: \"212e2cae-eea9-4f9c-a1f0-87708f00ab9a\") " Jan 12 13:33:25 crc kubenswrapper[4580]: I0112 13:33:25.254968 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xx7g4\" (UniqueName: \"kubernetes.io/projected/212e2cae-eea9-4f9c-a1f0-87708f00ab9a-kube-api-access-xx7g4\") pod \"212e2cae-eea9-4f9c-a1f0-87708f00ab9a\" (UID: \"212e2cae-eea9-4f9c-a1f0-87708f00ab9a\") " Jan 12 13:33:25 crc kubenswrapper[4580]: I0112 13:33:25.255083 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/212e2cae-eea9-4f9c-a1f0-87708f00ab9a-inventory-0\") pod \"212e2cae-eea9-4f9c-a1f0-87708f00ab9a\" (UID: \"212e2cae-eea9-4f9c-a1f0-87708f00ab9a\") " Jan 12 13:33:25 crc kubenswrapper[4580]: I0112 13:33:25.260583 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/212e2cae-eea9-4f9c-a1f0-87708f00ab9a-kube-api-access-xx7g4" (OuterVolumeSpecName: "kube-api-access-xx7g4") pod "212e2cae-eea9-4f9c-a1f0-87708f00ab9a" (UID: "212e2cae-eea9-4f9c-a1f0-87708f00ab9a"). InnerVolumeSpecName "kube-api-access-xx7g4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:33:25 crc kubenswrapper[4580]: I0112 13:33:25.278166 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/212e2cae-eea9-4f9c-a1f0-87708f00ab9a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "212e2cae-eea9-4f9c-a1f0-87708f00ab9a" (UID: "212e2cae-eea9-4f9c-a1f0-87708f00ab9a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:33:25 crc kubenswrapper[4580]: I0112 13:33:25.283416 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/212e2cae-eea9-4f9c-a1f0-87708f00ab9a-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "212e2cae-eea9-4f9c-a1f0-87708f00ab9a" (UID: "212e2cae-eea9-4f9c-a1f0-87708f00ab9a"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:33:25 crc kubenswrapper[4580]: I0112 13:33:25.357874 4580 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/212e2cae-eea9-4f9c-a1f0-87708f00ab9a-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 12 13:33:25 crc kubenswrapper[4580]: I0112 13:33:25.358003 4580 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/212e2cae-eea9-4f9c-a1f0-87708f00ab9a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 12 13:33:25 crc kubenswrapper[4580]: I0112 13:33:25.358082 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xx7g4\" (UniqueName: \"kubernetes.io/projected/212e2cae-eea9-4f9c-a1f0-87708f00ab9a-kube-api-access-xx7g4\") on node \"crc\" DevicePath \"\"" Jan 12 13:33:25 crc kubenswrapper[4580]: I0112 13:33:25.768060 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-9cvjw" 
event={"ID":"212e2cae-eea9-4f9c-a1f0-87708f00ab9a","Type":"ContainerDied","Data":"9b14c5d8831f996ab484cea24fe6a80d9bce0fa3af653d60ff5f9edab39c6eae"} Jan 12 13:33:25 crc kubenswrapper[4580]: I0112 13:33:25.768157 4580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b14c5d8831f996ab484cea24fe6a80d9bce0fa3af653d60ff5f9edab39c6eae" Jan 12 13:33:25 crc kubenswrapper[4580]: I0112 13:33:25.768181 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-9cvjw" Jan 12 13:33:25 crc kubenswrapper[4580]: I0112 13:33:25.824966 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-nbqdl"] Jan 12 13:33:25 crc kubenswrapper[4580]: E0112 13:33:25.825506 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="212e2cae-eea9-4f9c-a1f0-87708f00ab9a" containerName="ssh-known-hosts-edpm-deployment" Jan 12 13:33:25 crc kubenswrapper[4580]: I0112 13:33:25.825530 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="212e2cae-eea9-4f9c-a1f0-87708f00ab9a" containerName="ssh-known-hosts-edpm-deployment" Jan 12 13:33:25 crc kubenswrapper[4580]: I0112 13:33:25.825805 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="212e2cae-eea9-4f9c-a1f0-87708f00ab9a" containerName="ssh-known-hosts-edpm-deployment" Jan 12 13:33:25 crc kubenswrapper[4580]: I0112 13:33:25.826567 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nbqdl" Jan 12 13:33:25 crc kubenswrapper[4580]: I0112 13:33:25.829700 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hm8xh" Jan 12 13:33:25 crc kubenswrapper[4580]: I0112 13:33:25.829894 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 12 13:33:25 crc kubenswrapper[4580]: I0112 13:33:25.830041 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 12 13:33:25 crc kubenswrapper[4580]: I0112 13:33:25.831251 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-nbqdl"] Jan 12 13:33:25 crc kubenswrapper[4580]: I0112 13:33:25.832546 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 12 13:33:25 crc kubenswrapper[4580]: I0112 13:33:25.849363 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gv9f9" Jan 12 13:33:25 crc kubenswrapper[4580]: I0112 13:33:25.850205 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gv9f9" Jan 12 13:33:25 crc kubenswrapper[4580]: I0112 13:33:25.900925 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gv9f9" Jan 12 13:33:25 crc kubenswrapper[4580]: I0112 13:33:25.973602 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spq87\" (UniqueName: \"kubernetes.io/projected/0434e0b6-16ec-4821-b1b0-c823fc51a965-kube-api-access-spq87\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-nbqdl\" (UID: \"0434e0b6-16ec-4821-b1b0-c823fc51a965\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nbqdl" Jan 12 13:33:25 crc kubenswrapper[4580]: I0112 13:33:25.973686 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0434e0b6-16ec-4821-b1b0-c823fc51a965-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-nbqdl\" (UID: \"0434e0b6-16ec-4821-b1b0-c823fc51a965\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nbqdl" Jan 12 13:33:25 crc kubenswrapper[4580]: I0112 13:33:25.974035 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0434e0b6-16ec-4821-b1b0-c823fc51a965-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-nbqdl\" (UID: \"0434e0b6-16ec-4821-b1b0-c823fc51a965\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nbqdl" Jan 12 13:33:26 crc kubenswrapper[4580]: I0112 13:33:26.076531 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0434e0b6-16ec-4821-b1b0-c823fc51a965-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-nbqdl\" (UID: \"0434e0b6-16ec-4821-b1b0-c823fc51a965\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nbqdl" Jan 12 13:33:26 crc kubenswrapper[4580]: I0112 13:33:26.076671 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0434e0b6-16ec-4821-b1b0-c823fc51a965-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-nbqdl\" (UID: \"0434e0b6-16ec-4821-b1b0-c823fc51a965\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nbqdl" Jan 12 13:33:26 crc kubenswrapper[4580]: I0112 13:33:26.076954 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-spq87\" (UniqueName: \"kubernetes.io/projected/0434e0b6-16ec-4821-b1b0-c823fc51a965-kube-api-access-spq87\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-nbqdl\" (UID: \"0434e0b6-16ec-4821-b1b0-c823fc51a965\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nbqdl" Jan 12 13:33:26 crc kubenswrapper[4580]: I0112 13:33:26.081639 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0434e0b6-16ec-4821-b1b0-c823fc51a965-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-nbqdl\" (UID: \"0434e0b6-16ec-4821-b1b0-c823fc51a965\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nbqdl" Jan 12 13:33:26 crc kubenswrapper[4580]: I0112 13:33:26.081700 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0434e0b6-16ec-4821-b1b0-c823fc51a965-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-nbqdl\" (UID: \"0434e0b6-16ec-4821-b1b0-c823fc51a965\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nbqdl" Jan 12 13:33:26 crc kubenswrapper[4580]: I0112 13:33:26.092051 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spq87\" (UniqueName: \"kubernetes.io/projected/0434e0b6-16ec-4821-b1b0-c823fc51a965-kube-api-access-spq87\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-nbqdl\" (UID: \"0434e0b6-16ec-4821-b1b0-c823fc51a965\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nbqdl" Jan 12 13:33:26 crc kubenswrapper[4580]: I0112 13:33:26.158616 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nbqdl" Jan 12 13:33:26 crc kubenswrapper[4580]: I0112 13:33:26.631866 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-nbqdl"] Jan 12 13:33:26 crc kubenswrapper[4580]: I0112 13:33:26.778537 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nbqdl" event={"ID":"0434e0b6-16ec-4821-b1b0-c823fc51a965","Type":"ContainerStarted","Data":"33d9d489078e81c11e05e035b81389b2918aabb687baeaa7613762ae30cd3a28"} Jan 12 13:33:26 crc kubenswrapper[4580]: I0112 13:33:26.824165 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gv9f9" Jan 12 13:33:26 crc kubenswrapper[4580]: I0112 13:33:26.876864 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gv9f9"] Jan 12 13:33:27 crc kubenswrapper[4580]: I0112 13:33:27.791320 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nbqdl" event={"ID":"0434e0b6-16ec-4821-b1b0-c823fc51a965","Type":"ContainerStarted","Data":"06fa9912e1e02fde1a5157d86fb5061cab9b73844c1db27651c6272c23c32267"} Jan 12 13:33:27 crc kubenswrapper[4580]: I0112 13:33:27.816704 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nbqdl" podStartSLOduration=2.153662223 podStartE2EDuration="2.81668459s" podCreationTimestamp="2026-01-12 13:33:25 +0000 UTC" firstStartedPulling="2026-01-12 13:33:26.63667597 +0000 UTC m=+1605.680894660" lastFinishedPulling="2026-01-12 13:33:27.299698347 +0000 UTC m=+1606.343917027" observedRunningTime="2026-01-12 13:33:27.806628098 +0000 UTC m=+1606.850846788" watchObservedRunningTime="2026-01-12 13:33:27.81668459 +0000 UTC m=+1606.860903280" Jan 12 13:33:28 crc kubenswrapper[4580]: I0112 
13:33:28.803045 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gv9f9" podUID="0123f59b-54c3-4dae-8d99-b2abf2e89267" containerName="registry-server" containerID="cri-o://24ca36ce08e960b9f566bc2fa8e555a0da66c6ee213a65463b4fd28b9dc7c56f" gracePeriod=2 Jan 12 13:33:29 crc kubenswrapper[4580]: I0112 13:33:29.195963 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gv9f9" Jan 12 13:33:29 crc kubenswrapper[4580]: I0112 13:33:29.345696 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0123f59b-54c3-4dae-8d99-b2abf2e89267-catalog-content\") pod \"0123f59b-54c3-4dae-8d99-b2abf2e89267\" (UID: \"0123f59b-54c3-4dae-8d99-b2abf2e89267\") " Jan 12 13:33:29 crc kubenswrapper[4580]: I0112 13:33:29.346743 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vg9vj\" (UniqueName: \"kubernetes.io/projected/0123f59b-54c3-4dae-8d99-b2abf2e89267-kube-api-access-vg9vj\") pod \"0123f59b-54c3-4dae-8d99-b2abf2e89267\" (UID: \"0123f59b-54c3-4dae-8d99-b2abf2e89267\") " Jan 12 13:33:29 crc kubenswrapper[4580]: I0112 13:33:29.346803 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0123f59b-54c3-4dae-8d99-b2abf2e89267-utilities\") pod \"0123f59b-54c3-4dae-8d99-b2abf2e89267\" (UID: \"0123f59b-54c3-4dae-8d99-b2abf2e89267\") " Jan 12 13:33:29 crc kubenswrapper[4580]: I0112 13:33:29.347764 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0123f59b-54c3-4dae-8d99-b2abf2e89267-utilities" (OuterVolumeSpecName: "utilities") pod "0123f59b-54c3-4dae-8d99-b2abf2e89267" (UID: "0123f59b-54c3-4dae-8d99-b2abf2e89267"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 12 13:33:29 crc kubenswrapper[4580]: I0112 13:33:29.354460 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0123f59b-54c3-4dae-8d99-b2abf2e89267-kube-api-access-vg9vj" (OuterVolumeSpecName: "kube-api-access-vg9vj") pod "0123f59b-54c3-4dae-8d99-b2abf2e89267" (UID: "0123f59b-54c3-4dae-8d99-b2abf2e89267"). InnerVolumeSpecName "kube-api-access-vg9vj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:33:29 crc kubenswrapper[4580]: I0112 13:33:29.387581 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0123f59b-54c3-4dae-8d99-b2abf2e89267-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0123f59b-54c3-4dae-8d99-b2abf2e89267" (UID: "0123f59b-54c3-4dae-8d99-b2abf2e89267"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 12 13:33:29 crc kubenswrapper[4580]: I0112 13:33:29.451076 4580 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0123f59b-54c3-4dae-8d99-b2abf2e89267-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 12 13:33:29 crc kubenswrapper[4580]: I0112 13:33:29.451160 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vg9vj\" (UniqueName: \"kubernetes.io/projected/0123f59b-54c3-4dae-8d99-b2abf2e89267-kube-api-access-vg9vj\") on node \"crc\" DevicePath \"\"" Jan 12 13:33:29 crc kubenswrapper[4580]: I0112 13:33:29.451181 4580 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0123f59b-54c3-4dae-8d99-b2abf2e89267-utilities\") on node \"crc\" DevicePath \"\"" Jan 12 13:33:29 crc kubenswrapper[4580]: I0112 13:33:29.817002 4580 generic.go:334] "Generic (PLEG): container finished" podID="0123f59b-54c3-4dae-8d99-b2abf2e89267" 
containerID="24ca36ce08e960b9f566bc2fa8e555a0da66c6ee213a65463b4fd28b9dc7c56f" exitCode=0 Jan 12 13:33:29 crc kubenswrapper[4580]: I0112 13:33:29.817074 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gv9f9" event={"ID":"0123f59b-54c3-4dae-8d99-b2abf2e89267","Type":"ContainerDied","Data":"24ca36ce08e960b9f566bc2fa8e555a0da66c6ee213a65463b4fd28b9dc7c56f"} Jan 12 13:33:29 crc kubenswrapper[4580]: I0112 13:33:29.817141 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gv9f9" Jan 12 13:33:29 crc kubenswrapper[4580]: I0112 13:33:29.817166 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gv9f9" event={"ID":"0123f59b-54c3-4dae-8d99-b2abf2e89267","Type":"ContainerDied","Data":"46b27efc4dfd0a5c7bbd3587da8613dd92b09f1fcc0d196c26de07c18661680a"} Jan 12 13:33:29 crc kubenswrapper[4580]: I0112 13:33:29.817204 4580 scope.go:117] "RemoveContainer" containerID="24ca36ce08e960b9f566bc2fa8e555a0da66c6ee213a65463b4fd28b9dc7c56f" Jan 12 13:33:29 crc kubenswrapper[4580]: I0112 13:33:29.841250 4580 scope.go:117] "RemoveContainer" containerID="80607be19ed073dadc0f7e810a31999b7005d1d056f022fc09b6ac989a4f36ed" Jan 12 13:33:29 crc kubenswrapper[4580]: I0112 13:33:29.854227 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gv9f9"] Jan 12 13:33:29 crc kubenswrapper[4580]: I0112 13:33:29.860591 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gv9f9"] Jan 12 13:33:29 crc kubenswrapper[4580]: I0112 13:33:29.867612 4580 scope.go:117] "RemoveContainer" containerID="c0cffd20e648f82c3cd821067551c161a147194d704a35c94d600ee9d7e3e732" Jan 12 13:33:29 crc kubenswrapper[4580]: I0112 13:33:29.892645 4580 scope.go:117] "RemoveContainer" containerID="24ca36ce08e960b9f566bc2fa8e555a0da66c6ee213a65463b4fd28b9dc7c56f" Jan 12 
13:33:29 crc kubenswrapper[4580]: E0112 13:33:29.894050 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24ca36ce08e960b9f566bc2fa8e555a0da66c6ee213a65463b4fd28b9dc7c56f\": container with ID starting with 24ca36ce08e960b9f566bc2fa8e555a0da66c6ee213a65463b4fd28b9dc7c56f not found: ID does not exist" containerID="24ca36ce08e960b9f566bc2fa8e555a0da66c6ee213a65463b4fd28b9dc7c56f"
Jan 12 13:33:29 crc kubenswrapper[4580]: I0112 13:33:29.894095 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24ca36ce08e960b9f566bc2fa8e555a0da66c6ee213a65463b4fd28b9dc7c56f"} err="failed to get container status \"24ca36ce08e960b9f566bc2fa8e555a0da66c6ee213a65463b4fd28b9dc7c56f\": rpc error: code = NotFound desc = could not find container \"24ca36ce08e960b9f566bc2fa8e555a0da66c6ee213a65463b4fd28b9dc7c56f\": container with ID starting with 24ca36ce08e960b9f566bc2fa8e555a0da66c6ee213a65463b4fd28b9dc7c56f not found: ID does not exist"
Jan 12 13:33:29 crc kubenswrapper[4580]: I0112 13:33:29.894141 4580 scope.go:117] "RemoveContainer" containerID="80607be19ed073dadc0f7e810a31999b7005d1d056f022fc09b6ac989a4f36ed"
Jan 12 13:33:29 crc kubenswrapper[4580]: E0112 13:33:29.894508 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80607be19ed073dadc0f7e810a31999b7005d1d056f022fc09b6ac989a4f36ed\": container with ID starting with 80607be19ed073dadc0f7e810a31999b7005d1d056f022fc09b6ac989a4f36ed not found: ID does not exist" containerID="80607be19ed073dadc0f7e810a31999b7005d1d056f022fc09b6ac989a4f36ed"
Jan 12 13:33:29 crc kubenswrapper[4580]: I0112 13:33:29.894531 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80607be19ed073dadc0f7e810a31999b7005d1d056f022fc09b6ac989a4f36ed"} err="failed to get container status \"80607be19ed073dadc0f7e810a31999b7005d1d056f022fc09b6ac989a4f36ed\": rpc error: code = NotFound desc = could not find container \"80607be19ed073dadc0f7e810a31999b7005d1d056f022fc09b6ac989a4f36ed\": container with ID starting with 80607be19ed073dadc0f7e810a31999b7005d1d056f022fc09b6ac989a4f36ed not found: ID does not exist"
Jan 12 13:33:29 crc kubenswrapper[4580]: I0112 13:33:29.894546 4580 scope.go:117] "RemoveContainer" containerID="c0cffd20e648f82c3cd821067551c161a147194d704a35c94d600ee9d7e3e732"
Jan 12 13:33:29 crc kubenswrapper[4580]: E0112 13:33:29.894958 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0cffd20e648f82c3cd821067551c161a147194d704a35c94d600ee9d7e3e732\": container with ID starting with c0cffd20e648f82c3cd821067551c161a147194d704a35c94d600ee9d7e3e732 not found: ID does not exist" containerID="c0cffd20e648f82c3cd821067551c161a147194d704a35c94d600ee9d7e3e732"
Jan 12 13:33:29 crc kubenswrapper[4580]: I0112 13:33:29.895010 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0cffd20e648f82c3cd821067551c161a147194d704a35c94d600ee9d7e3e732"} err="failed to get container status \"c0cffd20e648f82c3cd821067551c161a147194d704a35c94d600ee9d7e3e732\": rpc error: code = NotFound desc = could not find container \"c0cffd20e648f82c3cd821067551c161a147194d704a35c94d600ee9d7e3e732\": container with ID starting with c0cffd20e648f82c3cd821067551c161a147194d704a35c94d600ee9d7e3e732 not found: ID does not exist"
Jan 12 13:33:31 crc kubenswrapper[4580]: I0112 13:33:31.292430 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0123f59b-54c3-4dae-8d99-b2abf2e89267" path="/var/lib/kubelet/pods/0123f59b-54c3-4dae-8d99-b2abf2e89267/volumes"
Jan 12 13:33:33 crc kubenswrapper[4580]: I0112 13:33:33.858462 4580 generic.go:334] "Generic (PLEG): container finished" podID="0434e0b6-16ec-4821-b1b0-c823fc51a965" containerID="06fa9912e1e02fde1a5157d86fb5061cab9b73844c1db27651c6272c23c32267" exitCode=0
Jan 12 13:33:33 crc kubenswrapper[4580]: I0112 13:33:33.858521 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nbqdl" event={"ID":"0434e0b6-16ec-4821-b1b0-c823fc51a965","Type":"ContainerDied","Data":"06fa9912e1e02fde1a5157d86fb5061cab9b73844c1db27651c6272c23c32267"}
Jan 12 13:33:35 crc kubenswrapper[4580]: I0112 13:33:35.212316 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nbqdl"
Jan 12 13:33:35 crc kubenswrapper[4580]: I0112 13:33:35.392239 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spq87\" (UniqueName: \"kubernetes.io/projected/0434e0b6-16ec-4821-b1b0-c823fc51a965-kube-api-access-spq87\") pod \"0434e0b6-16ec-4821-b1b0-c823fc51a965\" (UID: \"0434e0b6-16ec-4821-b1b0-c823fc51a965\") "
Jan 12 13:33:35 crc kubenswrapper[4580]: I0112 13:33:35.392326 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0434e0b6-16ec-4821-b1b0-c823fc51a965-inventory\") pod \"0434e0b6-16ec-4821-b1b0-c823fc51a965\" (UID: \"0434e0b6-16ec-4821-b1b0-c823fc51a965\") "
Jan 12 13:33:35 crc kubenswrapper[4580]: I0112 13:33:35.392477 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0434e0b6-16ec-4821-b1b0-c823fc51a965-ssh-key-openstack-edpm-ipam\") pod \"0434e0b6-16ec-4821-b1b0-c823fc51a965\" (UID: \"0434e0b6-16ec-4821-b1b0-c823fc51a965\") "
Jan 12 13:33:35 crc kubenswrapper[4580]: I0112 13:33:35.399671 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0434e0b6-16ec-4821-b1b0-c823fc51a965-kube-api-access-spq87" (OuterVolumeSpecName: "kube-api-access-spq87") pod "0434e0b6-16ec-4821-b1b0-c823fc51a965" (UID: "0434e0b6-16ec-4821-b1b0-c823fc51a965"). InnerVolumeSpecName "kube-api-access-spq87". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 13:33:35 crc kubenswrapper[4580]: E0112 13:33:35.414287 4580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0434e0b6-16ec-4821-b1b0-c823fc51a965-ssh-key-openstack-edpm-ipam podName:0434e0b6-16ec-4821-b1b0-c823fc51a965 nodeName:}" failed. No retries permitted until 2026-01-12 13:33:35.914252189 +0000 UTC m=+1614.958470880 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "ssh-key-openstack-edpm-ipam" (UniqueName: "kubernetes.io/secret/0434e0b6-16ec-4821-b1b0-c823fc51a965-ssh-key-openstack-edpm-ipam") pod "0434e0b6-16ec-4821-b1b0-c823fc51a965" (UID: "0434e0b6-16ec-4821-b1b0-c823fc51a965") : error deleting /var/lib/kubelet/pods/0434e0b6-16ec-4821-b1b0-c823fc51a965/volume-subpaths: remove /var/lib/kubelet/pods/0434e0b6-16ec-4821-b1b0-c823fc51a965/volume-subpaths: no such file or directory
Jan 12 13:33:35 crc kubenswrapper[4580]: I0112 13:33:35.416836 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0434e0b6-16ec-4821-b1b0-c823fc51a965-inventory" (OuterVolumeSpecName: "inventory") pod "0434e0b6-16ec-4821-b1b0-c823fc51a965" (UID: "0434e0b6-16ec-4821-b1b0-c823fc51a965"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:33:35 crc kubenswrapper[4580]: I0112 13:33:35.496146 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spq87\" (UniqueName: \"kubernetes.io/projected/0434e0b6-16ec-4821-b1b0-c823fc51a965-kube-api-access-spq87\") on node \"crc\" DevicePath \"\""
Jan 12 13:33:35 crc kubenswrapper[4580]: I0112 13:33:35.496186 4580 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0434e0b6-16ec-4821-b1b0-c823fc51a965-inventory\") on node \"crc\" DevicePath \"\""
Jan 12 13:33:35 crc kubenswrapper[4580]: I0112 13:33:35.880444 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nbqdl" event={"ID":"0434e0b6-16ec-4821-b1b0-c823fc51a965","Type":"ContainerDied","Data":"33d9d489078e81c11e05e035b81389b2918aabb687baeaa7613762ae30cd3a28"}
Jan 12 13:33:35 crc kubenswrapper[4580]: I0112 13:33:35.880508 4580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33d9d489078e81c11e05e035b81389b2918aabb687baeaa7613762ae30cd3a28"
Jan 12 13:33:35 crc kubenswrapper[4580]: I0112 13:33:35.880536 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nbqdl"
Jan 12 13:33:35 crc kubenswrapper[4580]: I0112 13:33:35.944158 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v7cx7"]
Jan 12 13:33:35 crc kubenswrapper[4580]: E0112 13:33:35.944939 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0123f59b-54c3-4dae-8d99-b2abf2e89267" containerName="extract-utilities"
Jan 12 13:33:35 crc kubenswrapper[4580]: I0112 13:33:35.944964 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="0123f59b-54c3-4dae-8d99-b2abf2e89267" containerName="extract-utilities"
Jan 12 13:33:35 crc kubenswrapper[4580]: E0112 13:33:35.944984 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0434e0b6-16ec-4821-b1b0-c823fc51a965" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Jan 12 13:33:35 crc kubenswrapper[4580]: I0112 13:33:35.944991 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="0434e0b6-16ec-4821-b1b0-c823fc51a965" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Jan 12 13:33:35 crc kubenswrapper[4580]: E0112 13:33:35.945004 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0123f59b-54c3-4dae-8d99-b2abf2e89267" containerName="extract-content"
Jan 12 13:33:35 crc kubenswrapper[4580]: I0112 13:33:35.945010 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="0123f59b-54c3-4dae-8d99-b2abf2e89267" containerName="extract-content"
Jan 12 13:33:35 crc kubenswrapper[4580]: E0112 13:33:35.945023 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0123f59b-54c3-4dae-8d99-b2abf2e89267" containerName="registry-server"
Jan 12 13:33:35 crc kubenswrapper[4580]: I0112 13:33:35.945028 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="0123f59b-54c3-4dae-8d99-b2abf2e89267" containerName="registry-server"
Jan 12 13:33:35 crc kubenswrapper[4580]: I0112 13:33:35.945238 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="0123f59b-54c3-4dae-8d99-b2abf2e89267" containerName="registry-server"
Jan 12 13:33:35 crc kubenswrapper[4580]: I0112 13:33:35.945261 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="0434e0b6-16ec-4821-b1b0-c823fc51a965" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Jan 12 13:33:35 crc kubenswrapper[4580]: I0112 13:33:35.945978 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v7cx7"
Jan 12 13:33:35 crc kubenswrapper[4580]: I0112 13:33:35.949962 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v7cx7"]
Jan 12 13:33:36 crc kubenswrapper[4580]: I0112 13:33:36.006475 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0434e0b6-16ec-4821-b1b0-c823fc51a965-ssh-key-openstack-edpm-ipam\") pod \"0434e0b6-16ec-4821-b1b0-c823fc51a965\" (UID: \"0434e0b6-16ec-4821-b1b0-c823fc51a965\") "
Jan 12 13:33:36 crc kubenswrapper[4580]: I0112 13:33:36.011892 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0434e0b6-16ec-4821-b1b0-c823fc51a965-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0434e0b6-16ec-4821-b1b0-c823fc51a965" (UID: "0434e0b6-16ec-4821-b1b0-c823fc51a965"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:33:36 crc kubenswrapper[4580]: I0112 13:33:36.110736 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2e6e07b6-c923-4d65-8bda-8fb27915bb72-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-v7cx7\" (UID: \"2e6e07b6-c923-4d65-8bda-8fb27915bb72\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v7cx7"
Jan 12 13:33:36 crc kubenswrapper[4580]: I0112 13:33:36.112052 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2e6e07b6-c923-4d65-8bda-8fb27915bb72-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-v7cx7\" (UID: \"2e6e07b6-c923-4d65-8bda-8fb27915bb72\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v7cx7"
Jan 12 13:33:36 crc kubenswrapper[4580]: I0112 13:33:36.112307 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwhn5\" (UniqueName: \"kubernetes.io/projected/2e6e07b6-c923-4d65-8bda-8fb27915bb72-kube-api-access-pwhn5\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-v7cx7\" (UID: \"2e6e07b6-c923-4d65-8bda-8fb27915bb72\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v7cx7"
Jan 12 13:33:36 crc kubenswrapper[4580]: I0112 13:33:36.112576 4580 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0434e0b6-16ec-4821-b1b0-c823fc51a965-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 12 13:33:36 crc kubenswrapper[4580]: I0112 13:33:36.214779 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2e6e07b6-c923-4d65-8bda-8fb27915bb72-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-v7cx7\" (UID: \"2e6e07b6-c923-4d65-8bda-8fb27915bb72\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v7cx7"
Jan 12 13:33:36 crc kubenswrapper[4580]: I0112 13:33:36.214858 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwhn5\" (UniqueName: \"kubernetes.io/projected/2e6e07b6-c923-4d65-8bda-8fb27915bb72-kube-api-access-pwhn5\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-v7cx7\" (UID: \"2e6e07b6-c923-4d65-8bda-8fb27915bb72\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v7cx7"
Jan 12 13:33:36 crc kubenswrapper[4580]: I0112 13:33:36.214936 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2e6e07b6-c923-4d65-8bda-8fb27915bb72-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-v7cx7\" (UID: \"2e6e07b6-c923-4d65-8bda-8fb27915bb72\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v7cx7"
Jan 12 13:33:36 crc kubenswrapper[4580]: I0112 13:33:36.219020 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2e6e07b6-c923-4d65-8bda-8fb27915bb72-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-v7cx7\" (UID: \"2e6e07b6-c923-4d65-8bda-8fb27915bb72\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v7cx7"
Jan 12 13:33:36 crc kubenswrapper[4580]: I0112 13:33:36.219709 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2e6e07b6-c923-4d65-8bda-8fb27915bb72-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-v7cx7\" (UID: \"2e6e07b6-c923-4d65-8bda-8fb27915bb72\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v7cx7"
Jan 12 13:33:36 crc kubenswrapper[4580]: I0112 13:33:36.229684 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwhn5\" (UniqueName: \"kubernetes.io/projected/2e6e07b6-c923-4d65-8bda-8fb27915bb72-kube-api-access-pwhn5\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-v7cx7\" (UID: \"2e6e07b6-c923-4d65-8bda-8fb27915bb72\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v7cx7"
Jan 12 13:33:36 crc kubenswrapper[4580]: I0112 13:33:36.263741 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v7cx7"
Jan 12 13:33:36 crc kubenswrapper[4580]: I0112 13:33:36.741481 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v7cx7"]
Jan 12 13:33:36 crc kubenswrapper[4580]: I0112 13:33:36.892333 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v7cx7" event={"ID":"2e6e07b6-c923-4d65-8bda-8fb27915bb72","Type":"ContainerStarted","Data":"6f4ab114d1e6e6ed8155fca6ec0568345f587060b93c4ce7422ad92ed98308c7"}
Jan 12 13:33:37 crc kubenswrapper[4580]: I0112 13:33:37.903694 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v7cx7" event={"ID":"2e6e07b6-c923-4d65-8bda-8fb27915bb72","Type":"ContainerStarted","Data":"237de703256750ec3d4c50decd0f2a6f5af55e557501cc9c09da1889b2d72d78"}
Jan 12 13:33:37 crc kubenswrapper[4580]: I0112 13:33:37.927900 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v7cx7" podStartSLOduration=2.378429698 podStartE2EDuration="2.927884596s" podCreationTimestamp="2026-01-12 13:33:35 +0000 UTC" firstStartedPulling="2026-01-12 13:33:36.747705186 +0000 UTC m=+1615.791923876" lastFinishedPulling="2026-01-12 13:33:37.297160083 +0000 UTC m=+1616.341378774" observedRunningTime="2026-01-12 13:33:37.919635943 +0000 UTC m=+1616.963854633" watchObservedRunningTime="2026-01-12 13:33:37.927884596 +0000 UTC m=+1616.972103286"
Jan 12 13:33:42 crc kubenswrapper[4580]: I0112 13:33:42.055211 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-kkg8v"]
Jan 12 13:33:42 crc kubenswrapper[4580]: I0112 13:33:42.071768 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-kkg8v"]
Jan 12 13:33:43 crc kubenswrapper[4580]: I0112 13:33:43.029719 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-fsx29"]
Jan 12 13:33:43 crc kubenswrapper[4580]: I0112 13:33:43.034577 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-fsx29"]
Jan 12 13:33:43 crc kubenswrapper[4580]: I0112 13:33:43.291430 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2027dbc4-0cd9-405d-8f11-9c57de3d47e6" path="/var/lib/kubelet/pods/2027dbc4-0cd9-405d-8f11-9c57de3d47e6/volumes"
Jan 12 13:33:43 crc kubenswrapper[4580]: I0112 13:33:43.292067 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="256ef446-6309-4088-9d38-35a714a34f9a" path="/var/lib/kubelet/pods/256ef446-6309-4088-9d38-35a714a34f9a/volumes"
Jan 12 13:33:44 crc kubenswrapper[4580]: I0112 13:33:44.968527 4580 generic.go:334] "Generic (PLEG): container finished" podID="2e6e07b6-c923-4d65-8bda-8fb27915bb72" containerID="237de703256750ec3d4c50decd0f2a6f5af55e557501cc9c09da1889b2d72d78" exitCode=0
Jan 12 13:33:44 crc kubenswrapper[4580]: I0112 13:33:44.968774 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v7cx7" event={"ID":"2e6e07b6-c923-4d65-8bda-8fb27915bb72","Type":"ContainerDied","Data":"237de703256750ec3d4c50decd0f2a6f5af55e557501cc9c09da1889b2d72d78"}
Jan 12 13:33:46 crc kubenswrapper[4580]: I0112 13:33:46.325926 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v7cx7"
Jan 12 13:33:46 crc kubenswrapper[4580]: I0112 13:33:46.430352 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwhn5\" (UniqueName: \"kubernetes.io/projected/2e6e07b6-c923-4d65-8bda-8fb27915bb72-kube-api-access-pwhn5\") pod \"2e6e07b6-c923-4d65-8bda-8fb27915bb72\" (UID: \"2e6e07b6-c923-4d65-8bda-8fb27915bb72\") "
Jan 12 13:33:46 crc kubenswrapper[4580]: I0112 13:33:46.430826 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2e6e07b6-c923-4d65-8bda-8fb27915bb72-ssh-key-openstack-edpm-ipam\") pod \"2e6e07b6-c923-4d65-8bda-8fb27915bb72\" (UID: \"2e6e07b6-c923-4d65-8bda-8fb27915bb72\") "
Jan 12 13:33:46 crc kubenswrapper[4580]: I0112 13:33:46.430977 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2e6e07b6-c923-4d65-8bda-8fb27915bb72-inventory\") pod \"2e6e07b6-c923-4d65-8bda-8fb27915bb72\" (UID: \"2e6e07b6-c923-4d65-8bda-8fb27915bb72\") "
Jan 12 13:33:46 crc kubenswrapper[4580]: I0112 13:33:46.437139 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e6e07b6-c923-4d65-8bda-8fb27915bb72-kube-api-access-pwhn5" (OuterVolumeSpecName: "kube-api-access-pwhn5") pod "2e6e07b6-c923-4d65-8bda-8fb27915bb72" (UID: "2e6e07b6-c923-4d65-8bda-8fb27915bb72"). InnerVolumeSpecName "kube-api-access-pwhn5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 13:33:46 crc kubenswrapper[4580]: I0112 13:33:46.457300 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e6e07b6-c923-4d65-8bda-8fb27915bb72-inventory" (OuterVolumeSpecName: "inventory") pod "2e6e07b6-c923-4d65-8bda-8fb27915bb72" (UID: "2e6e07b6-c923-4d65-8bda-8fb27915bb72"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:33:46 crc kubenswrapper[4580]: I0112 13:33:46.458764 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e6e07b6-c923-4d65-8bda-8fb27915bb72-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2e6e07b6-c923-4d65-8bda-8fb27915bb72" (UID: "2e6e07b6-c923-4d65-8bda-8fb27915bb72"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:33:46 crc kubenswrapper[4580]: I0112 13:33:46.534028 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwhn5\" (UniqueName: \"kubernetes.io/projected/2e6e07b6-c923-4d65-8bda-8fb27915bb72-kube-api-access-pwhn5\") on node \"crc\" DevicePath \"\""
Jan 12 13:33:46 crc kubenswrapper[4580]: I0112 13:33:46.534062 4580 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2e6e07b6-c923-4d65-8bda-8fb27915bb72-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 12 13:33:46 crc kubenswrapper[4580]: I0112 13:33:46.534079 4580 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2e6e07b6-c923-4d65-8bda-8fb27915bb72-inventory\") on node \"crc\" DevicePath \"\""
Jan 12 13:33:46 crc kubenswrapper[4580]: I0112 13:33:46.949159 4580 patch_prober.go:28] interesting pod/machine-config-daemon-hdz6l container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 12 13:33:46 crc kubenswrapper[4580]: I0112 13:33:46.949231 4580 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 12 13:33:46 crc kubenswrapper[4580]: I0112 13:33:46.986586 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v7cx7" event={"ID":"2e6e07b6-c923-4d65-8bda-8fb27915bb72","Type":"ContainerDied","Data":"6f4ab114d1e6e6ed8155fca6ec0568345f587060b93c4ce7422ad92ed98308c7"}
Jan 12 13:33:46 crc kubenswrapper[4580]: I0112 13:33:46.986658 4580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f4ab114d1e6e6ed8155fca6ec0568345f587060b93c4ce7422ad92ed98308c7"
Jan 12 13:33:46 crc kubenswrapper[4580]: I0112 13:33:46.986677 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v7cx7"
Jan 12 13:33:47 crc kubenswrapper[4580]: I0112 13:33:47.068574 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4"]
Jan 12 13:33:47 crc kubenswrapper[4580]: E0112 13:33:47.069429 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e6e07b6-c923-4d65-8bda-8fb27915bb72" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Jan 12 13:33:47 crc kubenswrapper[4580]: I0112 13:33:47.069459 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e6e07b6-c923-4d65-8bda-8fb27915bb72" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Jan 12 13:33:47 crc kubenswrapper[4580]: I0112 13:33:47.069737 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e6e07b6-c923-4d65-8bda-8fb27915bb72" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Jan 12 13:33:47 crc kubenswrapper[4580]: I0112 13:33:47.070664 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4"
Jan 12 13:33:47 crc kubenswrapper[4580]: I0112 13:33:47.072663 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0"
Jan 12 13:33:47 crc kubenswrapper[4580]: I0112 13:33:47.072867 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0"
Jan 12 13:33:47 crc kubenswrapper[4580]: I0112 13:33:47.073161 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0"
Jan 12 13:33:47 crc kubenswrapper[4580]: I0112 13:33:47.073396 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hm8xh"
Jan 12 13:33:47 crc kubenswrapper[4580]: I0112 13:33:47.073586 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 12 13:33:47 crc kubenswrapper[4580]: I0112 13:33:47.073757 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 12 13:33:47 crc kubenswrapper[4580]: I0112 13:33:47.073878 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0"
Jan 12 13:33:47 crc kubenswrapper[4580]: I0112 13:33:47.074142 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 12 13:33:47 crc kubenswrapper[4580]: I0112 13:33:47.079002 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4"]
Jan 12 13:33:47 crc kubenswrapper[4580]: I0112 13:33:47.146642 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf2989c5-6b0d-458d-98c5-7849febf7787-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4\" (UID: \"bf2989c5-6b0d-458d-98c5-7849febf7787\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4"
Jan 12 13:33:47 crc kubenswrapper[4580]: I0112 13:33:47.146835 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf2989c5-6b0d-458d-98c5-7849febf7787-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4\" (UID: \"bf2989c5-6b0d-458d-98c5-7849febf7787\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4"
Jan 12 13:33:47 crc kubenswrapper[4580]: I0112 13:33:47.146903 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2v4k\" (UniqueName: \"kubernetes.io/projected/bf2989c5-6b0d-458d-98c5-7849febf7787-kube-api-access-h2v4k\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4\" (UID: \"bf2989c5-6b0d-458d-98c5-7849febf7787\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4"
Jan 12 13:33:47 crc kubenswrapper[4580]: I0112 13:33:47.146962 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bf2989c5-6b0d-458d-98c5-7849febf7787-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4\" (UID: \"bf2989c5-6b0d-458d-98c5-7849febf7787\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4"
Jan 12 13:33:47 crc kubenswrapper[4580]: I0112 13:33:47.147031 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bf2989c5-6b0d-458d-98c5-7849febf7787-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4\" (UID: \"bf2989c5-6b0d-458d-98c5-7849febf7787\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4"
Jan 12 13:33:47 crc kubenswrapper[4580]: I0112 13:33:47.147070 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf2989c5-6b0d-458d-98c5-7849febf7787-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4\" (UID: \"bf2989c5-6b0d-458d-98c5-7849febf7787\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4"
Jan 12 13:33:47 crc kubenswrapper[4580]: I0112 13:33:47.147304 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf2989c5-6b0d-458d-98c5-7849febf7787-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4\" (UID: \"bf2989c5-6b0d-458d-98c5-7849febf7787\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4"
Jan 12 13:33:47 crc kubenswrapper[4580]: I0112 13:33:47.147355 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf2989c5-6b0d-458d-98c5-7849febf7787-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4\" (UID: \"bf2989c5-6b0d-458d-98c5-7849febf7787\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4"
Jan 12 13:33:47 crc kubenswrapper[4580]: I0112 13:33:47.147393 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bf2989c5-6b0d-458d-98c5-7849febf7787-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4\" (UID: \"bf2989c5-6b0d-458d-98c5-7849febf7787\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4"
Jan 12 13:33:47 crc kubenswrapper[4580]: I0112 13:33:47.147520 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf2989c5-6b0d-458d-98c5-7849febf7787-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4\" (UID: \"bf2989c5-6b0d-458d-98c5-7849febf7787\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4"
Jan 12 13:33:47 crc kubenswrapper[4580]: I0112 13:33:47.147606 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bf2989c5-6b0d-458d-98c5-7849febf7787-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4\" (UID: \"bf2989c5-6b0d-458d-98c5-7849febf7787\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4"
Jan 12 13:33:47 crc kubenswrapper[4580]: I0112 13:33:47.147646 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf2989c5-6b0d-458d-98c5-7849febf7787-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4\" (UID: \"bf2989c5-6b0d-458d-98c5-7849febf7787\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4"
Jan 12 13:33:47 crc kubenswrapper[4580]: I0112 13:33:47.147683 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bf2989c5-6b0d-458d-98c5-7849febf7787-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4\" (UID: \"bf2989c5-6b0d-458d-98c5-7849febf7787\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4"
Jan 12 13:33:47 crc kubenswrapper[4580]: I0112 13:33:47.147738 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf2989c5-6b0d-458d-98c5-7849febf7787-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4\" (UID: \"bf2989c5-6b0d-458d-98c5-7849febf7787\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4"
Jan 12 13:33:47 crc kubenswrapper[4580]: I0112 13:33:47.249002 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf2989c5-6b0d-458d-98c5-7849febf7787-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4\" (UID: \"bf2989c5-6b0d-458d-98c5-7849febf7787\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4"
Jan 12 13:33:47 crc kubenswrapper[4580]: I0112 13:33:47.249058 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bf2989c5-6b0d-458d-98c5-7849febf7787-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4\" (UID: \"bf2989c5-6b0d-458d-98c5-7849febf7787\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4"
Jan 12 13:33:47 crc kubenswrapper[4580]: I0112 13:33:47.249090 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf2989c5-6b0d-458d-98c5-7849febf7787-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4\" (UID: \"bf2989c5-6b0d-458d-98c5-7849febf7787\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4"
Jan 12 13:33:47 crc kubenswrapper[4580]: I0112 13:33:47.249160 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf2989c5-6b0d-458d-98c5-7849febf7787-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4\" (UID: \"bf2989c5-6b0d-458d-98c5-7849febf7787\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4"
Jan 12 13:33:47 crc kubenswrapper[4580]: I0112 13:33:47.249228 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf2989c5-6b0d-458d-98c5-7849febf7787-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4\" (UID: \"bf2989c5-6b0d-458d-98c5-7849febf7787\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4"
Jan 12 13:33:47 crc kubenswrapper[4580]: I0112 13:33:47.250033 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2v4k\" (UniqueName: \"kubernetes.io/projected/bf2989c5-6b0d-458d-98c5-7849febf7787-kube-api-access-h2v4k\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4\" (UID: \"bf2989c5-6b0d-458d-98c5-7849febf7787\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4"
Jan 12 13:33:47 crc kubenswrapper[4580]: I0112 13:33:47.250217 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bf2989c5-6b0d-458d-98c5-7849febf7787-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4\" (UID: \"bf2989c5-6b0d-458d-98c5-7849febf7787\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4"
Jan 12 13:33:47 crc kubenswrapper[4580]: I0112 13:33:47.250597 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bf2989c5-6b0d-458d-98c5-7849febf7787-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4\" (UID: \"bf2989c5-6b0d-458d-98c5-7849febf7787\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4"
Jan 12 13:33:47 crc kubenswrapper[4580]: I0112 13:33:47.250644 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf2989c5-6b0d-458d-98c5-7849febf7787-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4\" (UID: \"bf2989c5-6b0d-458d-98c5-7849febf7787\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4"
Jan 12 13:33:47 crc kubenswrapper[4580]: I0112 13:33:47.250841 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf2989c5-6b0d-458d-98c5-7849febf7787-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4\" (UID: \"bf2989c5-6b0d-458d-98c5-7849febf7787\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4"
Jan 12 13:33:47 crc kubenswrapper[4580]: I0112 13:33:47.250881 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf2989c5-6b0d-458d-98c5-7849febf7787-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4\" (UID: \"bf2989c5-6b0d-458d-98c5-7849febf7787\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4"
Jan 12 13:33:47 crc kubenswrapper[4580]: I0112 13:33:47.250927 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bf2989c5-6b0d-458d-98c5-7849febf7787-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4\" (UID: \"bf2989c5-6b0d-458d-98c5-7849febf7787\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4"
Jan 12 13:33:47 crc kubenswrapper[4580]: I0112 13:33:47.251127 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf2989c5-6b0d-458d-98c5-7849febf7787-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4\" (UID: \"bf2989c5-6b0d-458d-98c5-7849febf7787\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4"
Jan 12 13:33:47 crc kubenswrapper[4580]: I0112 13:33:47.251199 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bf2989c5-6b0d-458d-98c5-7849febf7787-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4\" (UID: \"bf2989c5-6b0d-458d-98c5-7849febf7787\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4"
Jan 12 13:33:47 crc kubenswrapper[4580]: I0112 13:33:47.254493 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bf2989c5-6b0d-458d-98c5-7849febf7787-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4\" (UID: \"bf2989c5-6b0d-458d-98c5-7849febf7787\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4"
Jan 12 13:33:47 crc kubenswrapper[4580]: I0112 13:33:47.254516 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf2989c5-6b0d-458d-98c5-7849febf7787-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4\" (UID: \"bf2989c5-6b0d-458d-98c5-7849febf7787\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4" Jan 12 13:33:47 crc kubenswrapper[4580]: I0112 13:33:47.255160 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf2989c5-6b0d-458d-98c5-7849febf7787-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4\" (UID: \"bf2989c5-6b0d-458d-98c5-7849febf7787\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4" Jan 12 13:33:47 crc kubenswrapper[4580]: I0112 13:33:47.255457 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf2989c5-6b0d-458d-98c5-7849febf7787-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4\" (UID: \"bf2989c5-6b0d-458d-98c5-7849febf7787\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4" Jan 12 13:33:47 crc kubenswrapper[4580]: I0112 13:33:47.256085 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf2989c5-6b0d-458d-98c5-7849febf7787-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4\" (UID: \"bf2989c5-6b0d-458d-98c5-7849febf7787\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4" Jan 12 13:33:47 crc kubenswrapper[4580]: I0112 13:33:47.256595 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf2989c5-6b0d-458d-98c5-7849febf7787-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4\" (UID: \"bf2989c5-6b0d-458d-98c5-7849febf7787\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4" Jan 12 13:33:47 crc kubenswrapper[4580]: I0112 13:33:47.256635 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bf2989c5-6b0d-458d-98c5-7849febf7787-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4\" (UID: \"bf2989c5-6b0d-458d-98c5-7849febf7787\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4" Jan 12 13:33:47 crc kubenswrapper[4580]: I0112 13:33:47.257188 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bf2989c5-6b0d-458d-98c5-7849febf7787-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4\" (UID: \"bf2989c5-6b0d-458d-98c5-7849febf7787\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4" Jan 12 13:33:47 crc kubenswrapper[4580]: I0112 13:33:47.257774 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bf2989c5-6b0d-458d-98c5-7849febf7787-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4\" (UID: \"bf2989c5-6b0d-458d-98c5-7849febf7787\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4" Jan 12 13:33:47 crc kubenswrapper[4580]: I0112 13:33:47.258442 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/bf2989c5-6b0d-458d-98c5-7849febf7787-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4\" (UID: \"bf2989c5-6b0d-458d-98c5-7849febf7787\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4" Jan 12 13:33:47 crc kubenswrapper[4580]: I0112 13:33:47.258527 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bf2989c5-6b0d-458d-98c5-7849febf7787-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4\" (UID: \"bf2989c5-6b0d-458d-98c5-7849febf7787\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4" Jan 12 13:33:47 crc kubenswrapper[4580]: I0112 13:33:47.258707 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf2989c5-6b0d-458d-98c5-7849febf7787-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4\" (UID: \"bf2989c5-6b0d-458d-98c5-7849febf7787\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4" Jan 12 13:33:47 crc kubenswrapper[4580]: I0112 13:33:47.259384 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf2989c5-6b0d-458d-98c5-7849febf7787-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4\" (UID: \"bf2989c5-6b0d-458d-98c5-7849febf7787\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4" Jan 12 13:33:47 crc kubenswrapper[4580]: I0112 13:33:47.265481 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2v4k\" (UniqueName: \"kubernetes.io/projected/bf2989c5-6b0d-458d-98c5-7849febf7787-kube-api-access-h2v4k\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4\" (UID: 
\"bf2989c5-6b0d-458d-98c5-7849febf7787\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4" Jan 12 13:33:47 crc kubenswrapper[4580]: I0112 13:33:47.387144 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4" Jan 12 13:33:47 crc kubenswrapper[4580]: I0112 13:33:47.840445 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4"] Jan 12 13:33:47 crc kubenswrapper[4580]: I0112 13:33:47.997852 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4" event={"ID":"bf2989c5-6b0d-458d-98c5-7849febf7787","Type":"ContainerStarted","Data":"cc367145b635b59a6c217804a66a28c3a6a8ed72aaf40e3aff5e55171f7924e2"} Jan 12 13:33:49 crc kubenswrapper[4580]: I0112 13:33:49.009335 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4" event={"ID":"bf2989c5-6b0d-458d-98c5-7849febf7787","Type":"ContainerStarted","Data":"8b08455b3f91188810c01c2dc73d3ef6768dc1a8975d13305f8097272c403674"} Jan 12 13:33:49 crc kubenswrapper[4580]: I0112 13:33:49.030931 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4" podStartSLOduration=1.438138925 podStartE2EDuration="2.030903361s" podCreationTimestamp="2026-01-12 13:33:47 +0000 UTC" firstStartedPulling="2026-01-12 13:33:47.853501721 +0000 UTC m=+1626.897720412" lastFinishedPulling="2026-01-12 13:33:48.446266157 +0000 UTC m=+1627.490484848" observedRunningTime="2026-01-12 13:33:49.024470396 +0000 UTC m=+1628.068689086" watchObservedRunningTime="2026-01-12 13:33:49.030903361 +0000 UTC m=+1628.075122051" Jan 12 13:33:55 crc kubenswrapper[4580]: I0112 13:33:55.849443 4580 scope.go:117] "RemoveContainer" 
containerID="0a05c2691e6263f292d152c6e0d296d0255c4976b965402d3451e6e505924f0a" Jan 12 13:33:55 crc kubenswrapper[4580]: I0112 13:33:55.874646 4580 scope.go:117] "RemoveContainer" containerID="6cd367a2bba62fcda97b5ecd7833b2792aa06318c524cb6b475d741ecf88d8c5" Jan 12 13:33:55 crc kubenswrapper[4580]: I0112 13:33:55.917919 4580 scope.go:117] "RemoveContainer" containerID="a0e8aaa248322bf579c964ab28970f1fdc4d4f3798e900deee1409b0f18befee" Jan 12 13:33:55 crc kubenswrapper[4580]: I0112 13:33:55.966455 4580 scope.go:117] "RemoveContainer" containerID="4931a3c4f183aa6a5a0f6b8e7d73b33bf6141d964e9d6549727ab6c7e4305eb6" Jan 12 13:33:55 crc kubenswrapper[4580]: I0112 13:33:55.992338 4580 scope.go:117] "RemoveContainer" containerID="73929f388313091b972f0847349f3c28d7a920fe1b43673b6339a6cb627a0381" Jan 12 13:33:56 crc kubenswrapper[4580]: I0112 13:33:56.020051 4580 scope.go:117] "RemoveContainer" containerID="84298754a644471a5c1aa490dff4d6db7074699de21f30bee275f88b63306e48" Jan 12 13:33:56 crc kubenswrapper[4580]: I0112 13:33:56.062994 4580 scope.go:117] "RemoveContainer" containerID="e93de2ac4109ce79d0eaa282e59f321021614bb16ff78fa90131ebb239adaa13" Jan 12 13:33:56 crc kubenswrapper[4580]: I0112 13:33:56.086775 4580 scope.go:117] "RemoveContainer" containerID="80dca3ad23ccce72018728df001c974e2e963f0c4f239e9ea80cc3d64a4924ea" Jan 12 13:33:56 crc kubenswrapper[4580]: I0112 13:33:56.107536 4580 scope.go:117] "RemoveContainer" containerID="5fbefe9e2e7271564a8b0734d36f0fca88547ad3daebb81d64310d6538b6534c" Jan 12 13:34:15 crc kubenswrapper[4580]: I0112 13:34:15.251819 4580 generic.go:334] "Generic (PLEG): container finished" podID="bf2989c5-6b0d-458d-98c5-7849febf7787" containerID="8b08455b3f91188810c01c2dc73d3ef6768dc1a8975d13305f8097272c403674" exitCode=0 Jan 12 13:34:15 crc kubenswrapper[4580]: I0112 13:34:15.251917 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4" 
event={"ID":"bf2989c5-6b0d-458d-98c5-7849febf7787","Type":"ContainerDied","Data":"8b08455b3f91188810c01c2dc73d3ef6768dc1a8975d13305f8097272c403674"} Jan 12 13:34:16 crc kubenswrapper[4580]: I0112 13:34:16.623515 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4" Jan 12 13:34:16 crc kubenswrapper[4580]: I0112 13:34:16.779371 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bf2989c5-6b0d-458d-98c5-7849febf7787-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"bf2989c5-6b0d-458d-98c5-7849febf7787\" (UID: \"bf2989c5-6b0d-458d-98c5-7849febf7787\") " Jan 12 13:34:16 crc kubenswrapper[4580]: I0112 13:34:16.779474 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf2989c5-6b0d-458d-98c5-7849febf7787-telemetry-combined-ca-bundle\") pod \"bf2989c5-6b0d-458d-98c5-7849febf7787\" (UID: \"bf2989c5-6b0d-458d-98c5-7849febf7787\") " Jan 12 13:34:16 crc kubenswrapper[4580]: I0112 13:34:16.779587 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bf2989c5-6b0d-458d-98c5-7849febf7787-openstack-edpm-ipam-ovn-default-certs-0\") pod \"bf2989c5-6b0d-458d-98c5-7849febf7787\" (UID: \"bf2989c5-6b0d-458d-98c5-7849febf7787\") " Jan 12 13:34:16 crc kubenswrapper[4580]: I0112 13:34:16.779630 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf2989c5-6b0d-458d-98c5-7849febf7787-inventory\") pod \"bf2989c5-6b0d-458d-98c5-7849febf7787\" (UID: \"bf2989c5-6b0d-458d-98c5-7849febf7787\") " Jan 12 13:34:16 crc kubenswrapper[4580]: I0112 13:34:16.779829 4580 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf2989c5-6b0d-458d-98c5-7849febf7787-neutron-metadata-combined-ca-bundle\") pod \"bf2989c5-6b0d-458d-98c5-7849febf7787\" (UID: \"bf2989c5-6b0d-458d-98c5-7849febf7787\") " Jan 12 13:34:16 crc kubenswrapper[4580]: I0112 13:34:16.779905 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2v4k\" (UniqueName: \"kubernetes.io/projected/bf2989c5-6b0d-458d-98c5-7849febf7787-kube-api-access-h2v4k\") pod \"bf2989c5-6b0d-458d-98c5-7849febf7787\" (UID: \"bf2989c5-6b0d-458d-98c5-7849febf7787\") " Jan 12 13:34:16 crc kubenswrapper[4580]: I0112 13:34:16.779936 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf2989c5-6b0d-458d-98c5-7849febf7787-nova-combined-ca-bundle\") pod \"bf2989c5-6b0d-458d-98c5-7849febf7787\" (UID: \"bf2989c5-6b0d-458d-98c5-7849febf7787\") " Jan 12 13:34:16 crc kubenswrapper[4580]: I0112 13:34:16.780069 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bf2989c5-6b0d-458d-98c5-7849febf7787-ssh-key-openstack-edpm-ipam\") pod \"bf2989c5-6b0d-458d-98c5-7849febf7787\" (UID: \"bf2989c5-6b0d-458d-98c5-7849febf7787\") " Jan 12 13:34:16 crc kubenswrapper[4580]: I0112 13:34:16.780176 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf2989c5-6b0d-458d-98c5-7849febf7787-libvirt-combined-ca-bundle\") pod \"bf2989c5-6b0d-458d-98c5-7849febf7787\" (UID: \"bf2989c5-6b0d-458d-98c5-7849febf7787\") " Jan 12 13:34:16 crc kubenswrapper[4580]: I0112 13:34:16.780204 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" 
(UniqueName: \"kubernetes.io/projected/bf2989c5-6b0d-458d-98c5-7849febf7787-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"bf2989c5-6b0d-458d-98c5-7849febf7787\" (UID: \"bf2989c5-6b0d-458d-98c5-7849febf7787\") " Jan 12 13:34:16 crc kubenswrapper[4580]: I0112 13:34:16.780254 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf2989c5-6b0d-458d-98c5-7849febf7787-bootstrap-combined-ca-bundle\") pod \"bf2989c5-6b0d-458d-98c5-7849febf7787\" (UID: \"bf2989c5-6b0d-458d-98c5-7849febf7787\") " Jan 12 13:34:16 crc kubenswrapper[4580]: I0112 13:34:16.780293 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf2989c5-6b0d-458d-98c5-7849febf7787-ovn-combined-ca-bundle\") pod \"bf2989c5-6b0d-458d-98c5-7849febf7787\" (UID: \"bf2989c5-6b0d-458d-98c5-7849febf7787\") " Jan 12 13:34:16 crc kubenswrapper[4580]: I0112 13:34:16.780335 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf2989c5-6b0d-458d-98c5-7849febf7787-repo-setup-combined-ca-bundle\") pod \"bf2989c5-6b0d-458d-98c5-7849febf7787\" (UID: \"bf2989c5-6b0d-458d-98c5-7849febf7787\") " Jan 12 13:34:16 crc kubenswrapper[4580]: I0112 13:34:16.780372 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bf2989c5-6b0d-458d-98c5-7849febf7787-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"bf2989c5-6b0d-458d-98c5-7849febf7787\" (UID: \"bf2989c5-6b0d-458d-98c5-7849febf7787\") " Jan 12 13:34:16 crc kubenswrapper[4580]: I0112 13:34:16.787387 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/bf2989c5-6b0d-458d-98c5-7849febf7787-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "bf2989c5-6b0d-458d-98c5-7849febf7787" (UID: "bf2989c5-6b0d-458d-98c5-7849febf7787"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:34:16 crc kubenswrapper[4580]: I0112 13:34:16.788187 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf2989c5-6b0d-458d-98c5-7849febf7787-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "bf2989c5-6b0d-458d-98c5-7849febf7787" (UID: "bf2989c5-6b0d-458d-98c5-7849febf7787"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:34:16 crc kubenswrapper[4580]: I0112 13:34:16.788238 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf2989c5-6b0d-458d-98c5-7849febf7787-kube-api-access-h2v4k" (OuterVolumeSpecName: "kube-api-access-h2v4k") pod "bf2989c5-6b0d-458d-98c5-7849febf7787" (UID: "bf2989c5-6b0d-458d-98c5-7849febf7787"). InnerVolumeSpecName "kube-api-access-h2v4k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:34:16 crc kubenswrapper[4580]: I0112 13:34:16.788527 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf2989c5-6b0d-458d-98c5-7849febf7787-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "bf2989c5-6b0d-458d-98c5-7849febf7787" (UID: "bf2989c5-6b0d-458d-98c5-7849febf7787"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:34:16 crc kubenswrapper[4580]: I0112 13:34:16.788962 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf2989c5-6b0d-458d-98c5-7849febf7787-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "bf2989c5-6b0d-458d-98c5-7849febf7787" (UID: "bf2989c5-6b0d-458d-98c5-7849febf7787"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:34:16 crc kubenswrapper[4580]: I0112 13:34:16.788999 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf2989c5-6b0d-458d-98c5-7849febf7787-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "bf2989c5-6b0d-458d-98c5-7849febf7787" (UID: "bf2989c5-6b0d-458d-98c5-7849febf7787"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:34:16 crc kubenswrapper[4580]: I0112 13:34:16.791598 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf2989c5-6b0d-458d-98c5-7849febf7787-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "bf2989c5-6b0d-458d-98c5-7849febf7787" (UID: "bf2989c5-6b0d-458d-98c5-7849febf7787"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:34:16 crc kubenswrapper[4580]: I0112 13:34:16.791657 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf2989c5-6b0d-458d-98c5-7849febf7787-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "bf2989c5-6b0d-458d-98c5-7849febf7787" (UID: "bf2989c5-6b0d-458d-98c5-7849febf7787"). 
InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:34:16 crc kubenswrapper[4580]: I0112 13:34:16.791774 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf2989c5-6b0d-458d-98c5-7849febf7787-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "bf2989c5-6b0d-458d-98c5-7849febf7787" (UID: "bf2989c5-6b0d-458d-98c5-7849febf7787"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:34:16 crc kubenswrapper[4580]: I0112 13:34:16.793005 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf2989c5-6b0d-458d-98c5-7849febf7787-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "bf2989c5-6b0d-458d-98c5-7849febf7787" (UID: "bf2989c5-6b0d-458d-98c5-7849febf7787"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:34:16 crc kubenswrapper[4580]: I0112 13:34:16.793687 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf2989c5-6b0d-458d-98c5-7849febf7787-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "bf2989c5-6b0d-458d-98c5-7849febf7787" (UID: "bf2989c5-6b0d-458d-98c5-7849febf7787"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:34:16 crc kubenswrapper[4580]: I0112 13:34:16.801172 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf2989c5-6b0d-458d-98c5-7849febf7787-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "bf2989c5-6b0d-458d-98c5-7849febf7787" (UID: "bf2989c5-6b0d-458d-98c5-7849febf7787"). InnerVolumeSpecName "nova-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:34:16 crc kubenswrapper[4580]: I0112 13:34:16.818584 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf2989c5-6b0d-458d-98c5-7849febf7787-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "bf2989c5-6b0d-458d-98c5-7849febf7787" (UID: "bf2989c5-6b0d-458d-98c5-7849febf7787"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:34:16 crc kubenswrapper[4580]: I0112 13:34:16.820842 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf2989c5-6b0d-458d-98c5-7849febf7787-inventory" (OuterVolumeSpecName: "inventory") pod "bf2989c5-6b0d-458d-98c5-7849febf7787" (UID: "bf2989c5-6b0d-458d-98c5-7849febf7787"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:34:16 crc kubenswrapper[4580]: I0112 13:34:16.884669 4580 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bf2989c5-6b0d-458d-98c5-7849febf7787-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 12 13:34:16 crc kubenswrapper[4580]: I0112 13:34:16.884774 4580 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf2989c5-6b0d-458d-98c5-7849febf7787-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 12 13:34:16 crc kubenswrapper[4580]: I0112 13:34:16.884842 4580 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bf2989c5-6b0d-458d-98c5-7849febf7787-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 12 13:34:16 crc kubenswrapper[4580]: I0112 13:34:16.884905 4580 reconciler_common.go:293] "Volume detached for volume 
\"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf2989c5-6b0d-458d-98c5-7849febf7787-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 12 13:34:16 crc kubenswrapper[4580]: I0112 13:34:16.884957 4580 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf2989c5-6b0d-458d-98c5-7849febf7787-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 12 13:34:16 crc kubenswrapper[4580]: I0112 13:34:16.885010 4580 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf2989c5-6b0d-458d-98c5-7849febf7787-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 12 13:34:16 crc kubenswrapper[4580]: I0112 13:34:16.885065 4580 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bf2989c5-6b0d-458d-98c5-7849febf7787-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 12 13:34:16 crc kubenswrapper[4580]: I0112 13:34:16.885166 4580 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bf2989c5-6b0d-458d-98c5-7849febf7787-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 12 13:34:16 crc kubenswrapper[4580]: I0112 13:34:16.885239 4580 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf2989c5-6b0d-458d-98c5-7849febf7787-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 12 13:34:16 crc kubenswrapper[4580]: I0112 13:34:16.885301 4580 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bf2989c5-6b0d-458d-98c5-7849febf7787-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" 
DevicePath \"\"" Jan 12 13:34:16 crc kubenswrapper[4580]: I0112 13:34:16.885354 4580 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf2989c5-6b0d-458d-98c5-7849febf7787-inventory\") on node \"crc\" DevicePath \"\"" Jan 12 13:34:16 crc kubenswrapper[4580]: I0112 13:34:16.885407 4580 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf2989c5-6b0d-458d-98c5-7849febf7787-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 12 13:34:16 crc kubenswrapper[4580]: I0112 13:34:16.885463 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h2v4k\" (UniqueName: \"kubernetes.io/projected/bf2989c5-6b0d-458d-98c5-7849febf7787-kube-api-access-h2v4k\") on node \"crc\" DevicePath \"\"" Jan 12 13:34:16 crc kubenswrapper[4580]: I0112 13:34:16.885518 4580 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf2989c5-6b0d-458d-98c5-7849febf7787-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 12 13:34:16 crc kubenswrapper[4580]: I0112 13:34:16.949551 4580 patch_prober.go:28] interesting pod/machine-config-daemon-hdz6l container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 12 13:34:16 crc kubenswrapper[4580]: I0112 13:34:16.949594 4580 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 12 13:34:17 crc kubenswrapper[4580]: I0112 13:34:17.271792 4580 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4" event={"ID":"bf2989c5-6b0d-458d-98c5-7849febf7787","Type":"ContainerDied","Data":"cc367145b635b59a6c217804a66a28c3a6a8ed72aaf40e3aff5e55171f7924e2"} Jan 12 13:34:17 crc kubenswrapper[4580]: I0112 13:34:17.271851 4580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc367145b635b59a6c217804a66a28c3a6a8ed72aaf40e3aff5e55171f7924e2" Jan 12 13:34:17 crc kubenswrapper[4580]: I0112 13:34:17.271873 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4" Jan 12 13:34:17 crc kubenswrapper[4580]: I0112 13:34:17.368875 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-74485"] Jan 12 13:34:17 crc kubenswrapper[4580]: E0112 13:34:17.369448 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf2989c5-6b0d-458d-98c5-7849febf7787" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 12 13:34:17 crc kubenswrapper[4580]: I0112 13:34:17.369471 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf2989c5-6b0d-458d-98c5-7849febf7787" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 12 13:34:17 crc kubenswrapper[4580]: I0112 13:34:17.369733 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf2989c5-6b0d-458d-98c5-7849febf7787" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 12 13:34:17 crc kubenswrapper[4580]: I0112 13:34:17.370592 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-74485" Jan 12 13:34:17 crc kubenswrapper[4580]: I0112 13:34:17.373436 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 12 13:34:17 crc kubenswrapper[4580]: I0112 13:34:17.373629 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 12 13:34:17 crc kubenswrapper[4580]: I0112 13:34:17.373666 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 12 13:34:17 crc kubenswrapper[4580]: I0112 13:34:17.373780 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 12 13:34:17 crc kubenswrapper[4580]: I0112 13:34:17.373877 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hm8xh" Jan 12 13:34:17 crc kubenswrapper[4580]: I0112 13:34:17.378223 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-74485"] Jan 12 13:34:17 crc kubenswrapper[4580]: E0112 13:34:17.454499 4580 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf2989c5_6b0d_458d_98c5_7849febf7787.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf2989c5_6b0d_458d_98c5_7849febf7787.slice/crio-cc367145b635b59a6c217804a66a28c3a6a8ed72aaf40e3aff5e55171f7924e2\": RecentStats: unable to find data in memory cache]" Jan 12 13:34:17 crc kubenswrapper[4580]: I0112 13:34:17.495904 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/06c2f69b-a49e-42fb-9532-837b04bdff07-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-74485\" (UID: \"06c2f69b-a49e-42fb-9532-837b04bdff07\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-74485" Jan 12 13:34:17 crc kubenswrapper[4580]: I0112 13:34:17.496003 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/06c2f69b-a49e-42fb-9532-837b04bdff07-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-74485\" (UID: \"06c2f69b-a49e-42fb-9532-837b04bdff07\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-74485" Jan 12 13:34:17 crc kubenswrapper[4580]: I0112 13:34:17.496040 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/06c2f69b-a49e-42fb-9532-837b04bdff07-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-74485\" (UID: \"06c2f69b-a49e-42fb-9532-837b04bdff07\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-74485" Jan 12 13:34:17 crc kubenswrapper[4580]: I0112 13:34:17.496076 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06c2f69b-a49e-42fb-9532-837b04bdff07-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-74485\" (UID: \"06c2f69b-a49e-42fb-9532-837b04bdff07\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-74485" Jan 12 13:34:17 crc kubenswrapper[4580]: I0112 13:34:17.496095 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xm45b\" (UniqueName: \"kubernetes.io/projected/06c2f69b-a49e-42fb-9532-837b04bdff07-kube-api-access-xm45b\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-74485\" (UID: \"06c2f69b-a49e-42fb-9532-837b04bdff07\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-74485" Jan 12 13:34:17 crc kubenswrapper[4580]: I0112 13:34:17.597979 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/06c2f69b-a49e-42fb-9532-837b04bdff07-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-74485\" (UID: \"06c2f69b-a49e-42fb-9532-837b04bdff07\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-74485" Jan 12 13:34:17 crc kubenswrapper[4580]: I0112 13:34:17.598278 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/06c2f69b-a49e-42fb-9532-837b04bdff07-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-74485\" (UID: \"06c2f69b-a49e-42fb-9532-837b04bdff07\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-74485" Jan 12 13:34:17 crc kubenswrapper[4580]: I0112 13:34:17.598315 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/06c2f69b-a49e-42fb-9532-837b04bdff07-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-74485\" (UID: \"06c2f69b-a49e-42fb-9532-837b04bdff07\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-74485" Jan 12 13:34:17 crc kubenswrapper[4580]: I0112 13:34:17.598344 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06c2f69b-a49e-42fb-9532-837b04bdff07-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-74485\" (UID: \"06c2f69b-a49e-42fb-9532-837b04bdff07\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-74485" Jan 12 13:34:17 crc kubenswrapper[4580]: I0112 13:34:17.598363 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xm45b\" (UniqueName: 
\"kubernetes.io/projected/06c2f69b-a49e-42fb-9532-837b04bdff07-kube-api-access-xm45b\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-74485\" (UID: \"06c2f69b-a49e-42fb-9532-837b04bdff07\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-74485" Jan 12 13:34:17 crc kubenswrapper[4580]: I0112 13:34:17.599525 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/06c2f69b-a49e-42fb-9532-837b04bdff07-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-74485\" (UID: \"06c2f69b-a49e-42fb-9532-837b04bdff07\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-74485" Jan 12 13:34:17 crc kubenswrapper[4580]: I0112 13:34:17.602455 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/06c2f69b-a49e-42fb-9532-837b04bdff07-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-74485\" (UID: \"06c2f69b-a49e-42fb-9532-837b04bdff07\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-74485" Jan 12 13:34:17 crc kubenswrapper[4580]: I0112 13:34:17.603365 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/06c2f69b-a49e-42fb-9532-837b04bdff07-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-74485\" (UID: \"06c2f69b-a49e-42fb-9532-837b04bdff07\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-74485" Jan 12 13:34:17 crc kubenswrapper[4580]: I0112 13:34:17.604005 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06c2f69b-a49e-42fb-9532-837b04bdff07-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-74485\" (UID: \"06c2f69b-a49e-42fb-9532-837b04bdff07\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-74485" Jan 12 13:34:17 crc kubenswrapper[4580]: I0112 
13:34:17.613091 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xm45b\" (UniqueName: \"kubernetes.io/projected/06c2f69b-a49e-42fb-9532-837b04bdff07-kube-api-access-xm45b\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-74485\" (UID: \"06c2f69b-a49e-42fb-9532-837b04bdff07\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-74485" Jan 12 13:34:17 crc kubenswrapper[4580]: I0112 13:34:17.688153 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-74485" Jan 12 13:34:18 crc kubenswrapper[4580]: I0112 13:34:18.177773 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-74485"] Jan 12 13:34:18 crc kubenswrapper[4580]: I0112 13:34:18.282425 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-74485" event={"ID":"06c2f69b-a49e-42fb-9532-837b04bdff07","Type":"ContainerStarted","Data":"05dd443558ff952f0ebb5a4c26644497d7ac1983a5e51f6d03c557e1878b6c26"} Jan 12 13:34:19 crc kubenswrapper[4580]: I0112 13:34:19.293870 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-74485" event={"ID":"06c2f69b-a49e-42fb-9532-837b04bdff07","Type":"ContainerStarted","Data":"2ef623e684652488b8a2a258c53e6a3ce0f7ba97049c54dc0a2ca8b19fe4a4aa"} Jan 12 13:34:19 crc kubenswrapper[4580]: I0112 13:34:19.316858 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-74485" podStartSLOduration=1.691412307 podStartE2EDuration="2.316843774s" podCreationTimestamp="2026-01-12 13:34:17 +0000 UTC" firstStartedPulling="2026-01-12 13:34:18.181926464 +0000 UTC m=+1657.226145153" lastFinishedPulling="2026-01-12 13:34:18.807357931 +0000 UTC m=+1657.851576620" observedRunningTime="2026-01-12 13:34:19.309716623 +0000 UTC m=+1658.353935313" 
watchObservedRunningTime="2026-01-12 13:34:19.316843774 +0000 UTC m=+1658.361062464" Jan 12 13:34:28 crc kubenswrapper[4580]: I0112 13:34:28.042009 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-l68ww"] Jan 12 13:34:28 crc kubenswrapper[4580]: I0112 13:34:28.048806 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-l68ww"] Jan 12 13:34:29 crc kubenswrapper[4580]: I0112 13:34:29.290136 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f59d6742-fed6-4732-9cde-29dc74e47db0" path="/var/lib/kubelet/pods/f59d6742-fed6-4732-9cde-29dc74e47db0/volumes" Jan 12 13:34:46 crc kubenswrapper[4580]: I0112 13:34:46.949874 4580 patch_prober.go:28] interesting pod/machine-config-daemon-hdz6l container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 12 13:34:46 crc kubenswrapper[4580]: I0112 13:34:46.950602 4580 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 12 13:34:46 crc kubenswrapper[4580]: I0112 13:34:46.950657 4580 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" Jan 12 13:34:46 crc kubenswrapper[4580]: I0112 13:34:46.951654 4580 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e6f7529ae288176e91bf12545260cc5495f693307c412bb9a076090d438a7eb1"} pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" containerMessage="Container 
machine-config-daemon failed liveness probe, will be restarted" Jan 12 13:34:46 crc kubenswrapper[4580]: I0112 13:34:46.951718 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" containerName="machine-config-daemon" containerID="cri-o://e6f7529ae288176e91bf12545260cc5495f693307c412bb9a076090d438a7eb1" gracePeriod=600 Jan 12 13:34:47 crc kubenswrapper[4580]: E0112 13:34:47.069983 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdz6l_openshift-machine-config-operator(aaecc77f-21ca-4f15-86e0-0dff03d2ab7b)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" Jan 12 13:34:47 crc kubenswrapper[4580]: I0112 13:34:47.519865 4580 generic.go:334] "Generic (PLEG): container finished" podID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" containerID="e6f7529ae288176e91bf12545260cc5495f693307c412bb9a076090d438a7eb1" exitCode=0 Jan 12 13:34:47 crc kubenswrapper[4580]: I0112 13:34:47.519928 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" event={"ID":"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b","Type":"ContainerDied","Data":"e6f7529ae288176e91bf12545260cc5495f693307c412bb9a076090d438a7eb1"} Jan 12 13:34:47 crc kubenswrapper[4580]: I0112 13:34:47.520325 4580 scope.go:117] "RemoveContainer" containerID="e7c14f11ee163df37acee7c0a47ee3b8e21b57ddc7953c9ff079f4afda394d2b" Jan 12 13:34:47 crc kubenswrapper[4580]: I0112 13:34:47.520840 4580 scope.go:117] "RemoveContainer" containerID="e6f7529ae288176e91bf12545260cc5495f693307c412bb9a076090d438a7eb1" Jan 12 13:34:47 crc kubenswrapper[4580]: E0112 13:34:47.521240 4580 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdz6l_openshift-machine-config-operator(aaecc77f-21ca-4f15-86e0-0dff03d2ab7b)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" Jan 12 13:34:56 crc kubenswrapper[4580]: I0112 13:34:56.283673 4580 scope.go:117] "RemoveContainer" containerID="8f7295b81ba9cf2ace0c7d22ce6150d4bf072b15a2eeaff67c2fa09828517fcc" Jan 12 13:35:02 crc kubenswrapper[4580]: I0112 13:35:02.282439 4580 scope.go:117] "RemoveContainer" containerID="e6f7529ae288176e91bf12545260cc5495f693307c412bb9a076090d438a7eb1" Jan 12 13:35:02 crc kubenswrapper[4580]: E0112 13:35:02.283437 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdz6l_openshift-machine-config-operator(aaecc77f-21ca-4f15-86e0-0dff03d2ab7b)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" Jan 12 13:35:03 crc kubenswrapper[4580]: I0112 13:35:03.661637 4580 generic.go:334] "Generic (PLEG): container finished" podID="06c2f69b-a49e-42fb-9532-837b04bdff07" containerID="2ef623e684652488b8a2a258c53e6a3ce0f7ba97049c54dc0a2ca8b19fe4a4aa" exitCode=0 Jan 12 13:35:03 crc kubenswrapper[4580]: I0112 13:35:03.661715 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-74485" event={"ID":"06c2f69b-a49e-42fb-9532-837b04bdff07","Type":"ContainerDied","Data":"2ef623e684652488b8a2a258c53e6a3ce0f7ba97049c54dc0a2ca8b19fe4a4aa"} Jan 12 13:35:04 crc kubenswrapper[4580]: I0112 13:35:04.959777 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-74485" Jan 12 13:35:05 crc kubenswrapper[4580]: I0112 13:35:05.009092 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xm45b\" (UniqueName: \"kubernetes.io/projected/06c2f69b-a49e-42fb-9532-837b04bdff07-kube-api-access-xm45b\") pod \"06c2f69b-a49e-42fb-9532-837b04bdff07\" (UID: \"06c2f69b-a49e-42fb-9532-837b04bdff07\") " Jan 12 13:35:05 crc kubenswrapper[4580]: I0112 13:35:05.009169 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/06c2f69b-a49e-42fb-9532-837b04bdff07-ovncontroller-config-0\") pod \"06c2f69b-a49e-42fb-9532-837b04bdff07\" (UID: \"06c2f69b-a49e-42fb-9532-837b04bdff07\") " Jan 12 13:35:05 crc kubenswrapper[4580]: I0112 13:35:05.009256 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/06c2f69b-a49e-42fb-9532-837b04bdff07-ssh-key-openstack-edpm-ipam\") pod \"06c2f69b-a49e-42fb-9532-837b04bdff07\" (UID: \"06c2f69b-a49e-42fb-9532-837b04bdff07\") " Jan 12 13:35:05 crc kubenswrapper[4580]: I0112 13:35:05.009292 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06c2f69b-a49e-42fb-9532-837b04bdff07-ovn-combined-ca-bundle\") pod \"06c2f69b-a49e-42fb-9532-837b04bdff07\" (UID: \"06c2f69b-a49e-42fb-9532-837b04bdff07\") " Jan 12 13:35:05 crc kubenswrapper[4580]: I0112 13:35:05.009408 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/06c2f69b-a49e-42fb-9532-837b04bdff07-inventory\") pod \"06c2f69b-a49e-42fb-9532-837b04bdff07\" (UID: \"06c2f69b-a49e-42fb-9532-837b04bdff07\") " Jan 12 13:35:05 crc kubenswrapper[4580]: I0112 13:35:05.015302 4580 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06c2f69b-a49e-42fb-9532-837b04bdff07-kube-api-access-xm45b" (OuterVolumeSpecName: "kube-api-access-xm45b") pod "06c2f69b-a49e-42fb-9532-837b04bdff07" (UID: "06c2f69b-a49e-42fb-9532-837b04bdff07"). InnerVolumeSpecName "kube-api-access-xm45b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:35:05 crc kubenswrapper[4580]: I0112 13:35:05.015414 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06c2f69b-a49e-42fb-9532-837b04bdff07-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "06c2f69b-a49e-42fb-9532-837b04bdff07" (UID: "06c2f69b-a49e-42fb-9532-837b04bdff07"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:35:05 crc kubenswrapper[4580]: I0112 13:35:05.033045 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06c2f69b-a49e-42fb-9532-837b04bdff07-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "06c2f69b-a49e-42fb-9532-837b04bdff07" (UID: "06c2f69b-a49e-42fb-9532-837b04bdff07"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:35:05 crc kubenswrapper[4580]: I0112 13:35:05.033423 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06c2f69b-a49e-42fb-9532-837b04bdff07-inventory" (OuterVolumeSpecName: "inventory") pod "06c2f69b-a49e-42fb-9532-837b04bdff07" (UID: "06c2f69b-a49e-42fb-9532-837b04bdff07"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:35:05 crc kubenswrapper[4580]: I0112 13:35:05.036342 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06c2f69b-a49e-42fb-9532-837b04bdff07-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "06c2f69b-a49e-42fb-9532-837b04bdff07" (UID: "06c2f69b-a49e-42fb-9532-837b04bdff07"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:35:05 crc kubenswrapper[4580]: I0112 13:35:05.111350 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xm45b\" (UniqueName: \"kubernetes.io/projected/06c2f69b-a49e-42fb-9532-837b04bdff07-kube-api-access-xm45b\") on node \"crc\" DevicePath \"\"" Jan 12 13:35:05 crc kubenswrapper[4580]: I0112 13:35:05.111380 4580 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/06c2f69b-a49e-42fb-9532-837b04bdff07-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 12 13:35:05 crc kubenswrapper[4580]: I0112 13:35:05.111393 4580 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/06c2f69b-a49e-42fb-9532-837b04bdff07-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 12 13:35:05 crc kubenswrapper[4580]: I0112 13:35:05.111405 4580 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06c2f69b-a49e-42fb-9532-837b04bdff07-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 12 13:35:05 crc kubenswrapper[4580]: I0112 13:35:05.111416 4580 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/06c2f69b-a49e-42fb-9532-837b04bdff07-inventory\") on node \"crc\" DevicePath \"\"" Jan 12 13:35:05 crc kubenswrapper[4580]: I0112 13:35:05.677275 4580 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-74485" event={"ID":"06c2f69b-a49e-42fb-9532-837b04bdff07","Type":"ContainerDied","Data":"05dd443558ff952f0ebb5a4c26644497d7ac1983a5e51f6d03c557e1878b6c26"} Jan 12 13:35:05 crc kubenswrapper[4580]: I0112 13:35:05.677532 4580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05dd443558ff952f0ebb5a4c26644497d7ac1983a5e51f6d03c557e1878b6c26" Jan 12 13:35:05 crc kubenswrapper[4580]: I0112 13:35:05.677308 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-74485" Jan 12 13:35:05 crc kubenswrapper[4580]: I0112 13:35:05.747458 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-sds6t"] Jan 12 13:35:05 crc kubenswrapper[4580]: E0112 13:35:05.747860 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06c2f69b-a49e-42fb-9532-837b04bdff07" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 12 13:35:05 crc kubenswrapper[4580]: I0112 13:35:05.747881 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="06c2f69b-a49e-42fb-9532-837b04bdff07" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 12 13:35:05 crc kubenswrapper[4580]: I0112 13:35:05.748079 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="06c2f69b-a49e-42fb-9532-837b04bdff07" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 12 13:35:05 crc kubenswrapper[4580]: I0112 13:35:05.748682 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-sds6t" Jan 12 13:35:05 crc kubenswrapper[4580]: I0112 13:35:05.750018 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 12 13:35:05 crc kubenswrapper[4580]: I0112 13:35:05.750028 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 12 13:35:05 crc kubenswrapper[4580]: I0112 13:35:05.750293 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 12 13:35:05 crc kubenswrapper[4580]: I0112 13:35:05.750314 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 12 13:35:05 crc kubenswrapper[4580]: I0112 13:35:05.751085 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 12 13:35:05 crc kubenswrapper[4580]: I0112 13:35:05.751844 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hm8xh" Jan 12 13:35:05 crc kubenswrapper[4580]: I0112 13:35:05.755942 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-sds6t"] Jan 12 13:35:05 crc kubenswrapper[4580]: I0112 13:35:05.826139 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6824de1f-1f07-45a9-b65d-6d1aadc863db-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-sds6t\" (UID: \"6824de1f-1f07-45a9-b65d-6d1aadc863db\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-sds6t" Jan 12 13:35:05 crc kubenswrapper[4580]: I0112 13:35:05.826189 4580 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6824de1f-1f07-45a9-b65d-6d1aadc863db-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-sds6t\" (UID: \"6824de1f-1f07-45a9-b65d-6d1aadc863db\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-sds6t" Jan 12 13:35:05 crc kubenswrapper[4580]: I0112 13:35:05.826237 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgwss\" (UniqueName: \"kubernetes.io/projected/6824de1f-1f07-45a9-b65d-6d1aadc863db-kube-api-access-bgwss\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-sds6t\" (UID: \"6824de1f-1f07-45a9-b65d-6d1aadc863db\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-sds6t" Jan 12 13:35:05 crc kubenswrapper[4580]: I0112 13:35:05.826954 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6824de1f-1f07-45a9-b65d-6d1aadc863db-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-sds6t\" (UID: \"6824de1f-1f07-45a9-b65d-6d1aadc863db\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-sds6t" Jan 12 13:35:05 crc kubenswrapper[4580]: I0112 13:35:05.827030 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6824de1f-1f07-45a9-b65d-6d1aadc863db-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-sds6t\" (UID: \"6824de1f-1f07-45a9-b65d-6d1aadc863db\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-sds6t" Jan 12 13:35:05 crc kubenswrapper[4580]: I0112 13:35:05.827138 4580 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6824de1f-1f07-45a9-b65d-6d1aadc863db-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-sds6t\" (UID: \"6824de1f-1f07-45a9-b65d-6d1aadc863db\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-sds6t" Jan 12 13:35:05 crc kubenswrapper[4580]: I0112 13:35:05.928539 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6824de1f-1f07-45a9-b65d-6d1aadc863db-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-sds6t\" (UID: \"6824de1f-1f07-45a9-b65d-6d1aadc863db\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-sds6t" Jan 12 13:35:05 crc kubenswrapper[4580]: I0112 13:35:05.928622 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6824de1f-1f07-45a9-b65d-6d1aadc863db-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-sds6t\" (UID: \"6824de1f-1f07-45a9-b65d-6d1aadc863db\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-sds6t" Jan 12 13:35:05 crc kubenswrapper[4580]: I0112 13:35:05.928653 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6824de1f-1f07-45a9-b65d-6d1aadc863db-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-sds6t\" (UID: \"6824de1f-1f07-45a9-b65d-6d1aadc863db\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-sds6t" Jan 12 13:35:05 crc kubenswrapper[4580]: I0112 13:35:05.928701 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgwss\" (UniqueName: 
\"kubernetes.io/projected/6824de1f-1f07-45a9-b65d-6d1aadc863db-kube-api-access-bgwss\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-sds6t\" (UID: \"6824de1f-1f07-45a9-b65d-6d1aadc863db\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-sds6t"
Jan 12 13:35:05 crc kubenswrapper[4580]: I0112 13:35:05.928766 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6824de1f-1f07-45a9-b65d-6d1aadc863db-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-sds6t\" (UID: \"6824de1f-1f07-45a9-b65d-6d1aadc863db\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-sds6t"
Jan 12 13:35:05 crc kubenswrapper[4580]: I0112 13:35:05.928804 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6824de1f-1f07-45a9-b65d-6d1aadc863db-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-sds6t\" (UID: \"6824de1f-1f07-45a9-b65d-6d1aadc863db\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-sds6t"
Jan 12 13:35:05 crc kubenswrapper[4580]: I0112 13:35:05.933009 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6824de1f-1f07-45a9-b65d-6d1aadc863db-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-sds6t\" (UID: \"6824de1f-1f07-45a9-b65d-6d1aadc863db\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-sds6t"
Jan 12 13:35:05 crc kubenswrapper[4580]: I0112 13:35:05.933444 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6824de1f-1f07-45a9-b65d-6d1aadc863db-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-sds6t\" (UID: \"6824de1f-1f07-45a9-b65d-6d1aadc863db\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-sds6t"
Jan 12 13:35:05 crc kubenswrapper[4580]: I0112 13:35:05.933717 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6824de1f-1f07-45a9-b65d-6d1aadc863db-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-sds6t\" (UID: \"6824de1f-1f07-45a9-b65d-6d1aadc863db\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-sds6t"
Jan 12 13:35:05 crc kubenswrapper[4580]: I0112 13:35:05.933739 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6824de1f-1f07-45a9-b65d-6d1aadc863db-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-sds6t\" (UID: \"6824de1f-1f07-45a9-b65d-6d1aadc863db\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-sds6t"
Jan 12 13:35:05 crc kubenswrapper[4580]: I0112 13:35:05.935123 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6824de1f-1f07-45a9-b65d-6d1aadc863db-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-sds6t\" (UID: \"6824de1f-1f07-45a9-b65d-6d1aadc863db\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-sds6t"
Jan 12 13:35:05 crc kubenswrapper[4580]: I0112 13:35:05.946697 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgwss\" (UniqueName: \"kubernetes.io/projected/6824de1f-1f07-45a9-b65d-6d1aadc863db-kube-api-access-bgwss\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-sds6t\" (UID: \"6824de1f-1f07-45a9-b65d-6d1aadc863db\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-sds6t"
Jan 12 13:35:06 crc kubenswrapper[4580]: I0112 13:35:06.068914 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-sds6t"
Jan 12 13:35:06 crc kubenswrapper[4580]: I0112 13:35:06.524388 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-sds6t"]
Jan 12 13:35:06 crc kubenswrapper[4580]: I0112 13:35:06.687086 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-sds6t" event={"ID":"6824de1f-1f07-45a9-b65d-6d1aadc863db","Type":"ContainerStarted","Data":"e4734dcd234eec2b973a847facfa56dd3a5e72dcf008f8c4cb608ed45bb61e5e"}
Jan 12 13:35:07 crc kubenswrapper[4580]: I0112 13:35:07.700136 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-sds6t" event={"ID":"6824de1f-1f07-45a9-b65d-6d1aadc863db","Type":"ContainerStarted","Data":"290d9c62e8e068279fc18a7bea5251464f980963d8bbbf62d24570dc12404aa5"}
Jan 12 13:35:07 crc kubenswrapper[4580]: I0112 13:35:07.724446 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-sds6t" podStartSLOduration=2.078936534 podStartE2EDuration="2.724423829s" podCreationTimestamp="2026-01-12 13:35:05 +0000 UTC" firstStartedPulling="2026-01-12 13:35:06.530852076 +0000 UTC m=+1705.575070766" lastFinishedPulling="2026-01-12 13:35:07.176339371 +0000 UTC m=+1706.220558061" observedRunningTime="2026-01-12 13:35:07.718273525 +0000 UTC m=+1706.762492204" watchObservedRunningTime="2026-01-12 13:35:07.724423829 +0000 UTC m=+1706.768642519"
Jan 12 13:35:16 crc kubenswrapper[4580]: I0112 13:35:16.281865 4580 scope.go:117] "RemoveContainer" containerID="e6f7529ae288176e91bf12545260cc5495f693307c412bb9a076090d438a7eb1"
Jan 12 13:35:16 crc kubenswrapper[4580]: E0112 13:35:16.282587 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdz6l_openshift-machine-config-operator(aaecc77f-21ca-4f15-86e0-0dff03d2ab7b)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b"
Jan 12 13:35:30 crc kubenswrapper[4580]: I0112 13:35:30.282611 4580 scope.go:117] "RemoveContainer" containerID="e6f7529ae288176e91bf12545260cc5495f693307c412bb9a076090d438a7eb1"
Jan 12 13:35:30 crc kubenswrapper[4580]: E0112 13:35:30.283832 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdz6l_openshift-machine-config-operator(aaecc77f-21ca-4f15-86e0-0dff03d2ab7b)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b"
Jan 12 13:35:40 crc kubenswrapper[4580]: I0112 13:35:40.967708 4580 generic.go:334] "Generic (PLEG): container finished" podID="6824de1f-1f07-45a9-b65d-6d1aadc863db" containerID="290d9c62e8e068279fc18a7bea5251464f980963d8bbbf62d24570dc12404aa5" exitCode=0
Jan 12 13:35:40 crc kubenswrapper[4580]: I0112 13:35:40.967801 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-sds6t" event={"ID":"6824de1f-1f07-45a9-b65d-6d1aadc863db","Type":"ContainerDied","Data":"290d9c62e8e068279fc18a7bea5251464f980963d8bbbf62d24570dc12404aa5"}
Jan 12 13:35:42 crc kubenswrapper[4580]: I0112 13:35:42.282547 4580 scope.go:117] "RemoveContainer" containerID="e6f7529ae288176e91bf12545260cc5495f693307c412bb9a076090d438a7eb1"
Jan 12 13:35:42 crc kubenswrapper[4580]: E0112 13:35:42.283114 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdz6l_openshift-machine-config-operator(aaecc77f-21ca-4f15-86e0-0dff03d2ab7b)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b"
Jan 12 13:35:42 crc kubenswrapper[4580]: I0112 13:35:42.321039 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-sds6t"
Jan 12 13:35:42 crc kubenswrapper[4580]: I0112 13:35:42.486165 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6824de1f-1f07-45a9-b65d-6d1aadc863db-inventory\") pod \"6824de1f-1f07-45a9-b65d-6d1aadc863db\" (UID: \"6824de1f-1f07-45a9-b65d-6d1aadc863db\") "
Jan 12 13:35:42 crc kubenswrapper[4580]: I0112 13:35:42.486224 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6824de1f-1f07-45a9-b65d-6d1aadc863db-nova-metadata-neutron-config-0\") pod \"6824de1f-1f07-45a9-b65d-6d1aadc863db\" (UID: \"6824de1f-1f07-45a9-b65d-6d1aadc863db\") "
Jan 12 13:35:42 crc kubenswrapper[4580]: I0112 13:35:42.486252 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6824de1f-1f07-45a9-b65d-6d1aadc863db-ssh-key-openstack-edpm-ipam\") pod \"6824de1f-1f07-45a9-b65d-6d1aadc863db\" (UID: \"6824de1f-1f07-45a9-b65d-6d1aadc863db\") "
Jan 12 13:35:42 crc kubenswrapper[4580]: I0112 13:35:42.486275 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgwss\" (UniqueName: \"kubernetes.io/projected/6824de1f-1f07-45a9-b65d-6d1aadc863db-kube-api-access-bgwss\") pod \"6824de1f-1f07-45a9-b65d-6d1aadc863db\" (UID: \"6824de1f-1f07-45a9-b65d-6d1aadc863db\") "
Jan 12 13:35:42 crc kubenswrapper[4580]: I0112 13:35:42.486300 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6824de1f-1f07-45a9-b65d-6d1aadc863db-neutron-metadata-combined-ca-bundle\") pod \"6824de1f-1f07-45a9-b65d-6d1aadc863db\" (UID: \"6824de1f-1f07-45a9-b65d-6d1aadc863db\") "
Jan 12 13:35:42 crc kubenswrapper[4580]: I0112 13:35:42.486332 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6824de1f-1f07-45a9-b65d-6d1aadc863db-neutron-ovn-metadata-agent-neutron-config-0\") pod \"6824de1f-1f07-45a9-b65d-6d1aadc863db\" (UID: \"6824de1f-1f07-45a9-b65d-6d1aadc863db\") "
Jan 12 13:35:42 crc kubenswrapper[4580]: I0112 13:35:42.491871 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6824de1f-1f07-45a9-b65d-6d1aadc863db-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "6824de1f-1f07-45a9-b65d-6d1aadc863db" (UID: "6824de1f-1f07-45a9-b65d-6d1aadc863db"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:35:42 crc kubenswrapper[4580]: I0112 13:35:42.492280 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6824de1f-1f07-45a9-b65d-6d1aadc863db-kube-api-access-bgwss" (OuterVolumeSpecName: "kube-api-access-bgwss") pod "6824de1f-1f07-45a9-b65d-6d1aadc863db" (UID: "6824de1f-1f07-45a9-b65d-6d1aadc863db"). InnerVolumeSpecName "kube-api-access-bgwss". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 13:35:42 crc kubenswrapper[4580]: I0112 13:35:42.508826 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6824de1f-1f07-45a9-b65d-6d1aadc863db-inventory" (OuterVolumeSpecName: "inventory") pod "6824de1f-1f07-45a9-b65d-6d1aadc863db" (UID: "6824de1f-1f07-45a9-b65d-6d1aadc863db"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:35:42 crc kubenswrapper[4580]: I0112 13:35:42.509256 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6824de1f-1f07-45a9-b65d-6d1aadc863db-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "6824de1f-1f07-45a9-b65d-6d1aadc863db" (UID: "6824de1f-1f07-45a9-b65d-6d1aadc863db"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:35:42 crc kubenswrapper[4580]: I0112 13:35:42.509552 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6824de1f-1f07-45a9-b65d-6d1aadc863db-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6824de1f-1f07-45a9-b65d-6d1aadc863db" (UID: "6824de1f-1f07-45a9-b65d-6d1aadc863db"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:35:42 crc kubenswrapper[4580]: I0112 13:35:42.510963 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6824de1f-1f07-45a9-b65d-6d1aadc863db-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "6824de1f-1f07-45a9-b65d-6d1aadc863db" (UID: "6824de1f-1f07-45a9-b65d-6d1aadc863db"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:35:42 crc kubenswrapper[4580]: I0112 13:35:42.589486 4580 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6824de1f-1f07-45a9-b65d-6d1aadc863db-inventory\") on node \"crc\" DevicePath \"\""
Jan 12 13:35:42 crc kubenswrapper[4580]: I0112 13:35:42.589713 4580 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6824de1f-1f07-45a9-b65d-6d1aadc863db-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\""
Jan 12 13:35:42 crc kubenswrapper[4580]: I0112 13:35:42.589777 4580 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6824de1f-1f07-45a9-b65d-6d1aadc863db-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 12 13:35:42 crc kubenswrapper[4580]: I0112 13:35:42.589835 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgwss\" (UniqueName: \"kubernetes.io/projected/6824de1f-1f07-45a9-b65d-6d1aadc863db-kube-api-access-bgwss\") on node \"crc\" DevicePath \"\""
Jan 12 13:35:42 crc kubenswrapper[4580]: I0112 13:35:42.589911 4580 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6824de1f-1f07-45a9-b65d-6d1aadc863db-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 12 13:35:42 crc kubenswrapper[4580]: I0112 13:35:42.589971 4580 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6824de1f-1f07-45a9-b65d-6d1aadc863db-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\""
Jan 12 13:35:42 crc kubenswrapper[4580]: I0112 13:35:42.987253 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-sds6t" event={"ID":"6824de1f-1f07-45a9-b65d-6d1aadc863db","Type":"ContainerDied","Data":"e4734dcd234eec2b973a847facfa56dd3a5e72dcf008f8c4cb608ed45bb61e5e"}
Jan 12 13:35:42 crc kubenswrapper[4580]: I0112 13:35:42.987313 4580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4734dcd234eec2b973a847facfa56dd3a5e72dcf008f8c4cb608ed45bb61e5e"
Jan 12 13:35:42 crc kubenswrapper[4580]: I0112 13:35:42.987575 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-sds6t"
Jan 12 13:35:43 crc kubenswrapper[4580]: I0112 13:35:43.152227 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-krb6r"]
Jan 12 13:35:43 crc kubenswrapper[4580]: E0112 13:35:43.152867 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6824de1f-1f07-45a9-b65d-6d1aadc863db" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam"
Jan 12 13:35:43 crc kubenswrapper[4580]: I0112 13:35:43.152887 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="6824de1f-1f07-45a9-b65d-6d1aadc863db" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam"
Jan 12 13:35:43 crc kubenswrapper[4580]: I0112 13:35:43.153078 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="6824de1f-1f07-45a9-b65d-6d1aadc863db" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam"
Jan 12 13:35:43 crc kubenswrapper[4580]: I0112 13:35:43.153706 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-krb6r"
Jan 12 13:35:43 crc kubenswrapper[4580]: I0112 13:35:43.156350 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 12 13:35:43 crc kubenswrapper[4580]: I0112 13:35:43.158042 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret"
Jan 12 13:35:43 crc kubenswrapper[4580]: I0112 13:35:43.158166 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 12 13:35:43 crc kubenswrapper[4580]: I0112 13:35:43.158168 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hm8xh"
Jan 12 13:35:43 crc kubenswrapper[4580]: I0112 13:35:43.174551 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 12 13:35:43 crc kubenswrapper[4580]: I0112 13:35:43.190265 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-krb6r"]
Jan 12 13:35:43 crc kubenswrapper[4580]: I0112 13:35:43.307803 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvwrb\" (UniqueName: \"kubernetes.io/projected/7537a508-8a6d-43df-8d76-a845464edfa9-kube-api-access-hvwrb\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-krb6r\" (UID: \"7537a508-8a6d-43df-8d76-a845464edfa9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-krb6r"
Jan 12 13:35:43 crc kubenswrapper[4580]: I0112 13:35:43.307861 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7537a508-8a6d-43df-8d76-a845464edfa9-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-krb6r\" (UID: \"7537a508-8a6d-43df-8d76-a845464edfa9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-krb6r"
Jan 12 13:35:43 crc kubenswrapper[4580]: I0112 13:35:43.307936 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/7537a508-8a6d-43df-8d76-a845464edfa9-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-krb6r\" (UID: \"7537a508-8a6d-43df-8d76-a845464edfa9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-krb6r"
Jan 12 13:35:43 crc kubenswrapper[4580]: I0112 13:35:43.308009 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7537a508-8a6d-43df-8d76-a845464edfa9-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-krb6r\" (UID: \"7537a508-8a6d-43df-8d76-a845464edfa9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-krb6r"
Jan 12 13:35:43 crc kubenswrapper[4580]: I0112 13:35:43.308325 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7537a508-8a6d-43df-8d76-a845464edfa9-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-krb6r\" (UID: \"7537a508-8a6d-43df-8d76-a845464edfa9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-krb6r"
Jan 12 13:35:43 crc kubenswrapper[4580]: I0112 13:35:43.410582 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvwrb\" (UniqueName: \"kubernetes.io/projected/7537a508-8a6d-43df-8d76-a845464edfa9-kube-api-access-hvwrb\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-krb6r\" (UID: \"7537a508-8a6d-43df-8d76-a845464edfa9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-krb6r"
Jan 12 13:35:43 crc kubenswrapper[4580]: I0112 13:35:43.410646 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7537a508-8a6d-43df-8d76-a845464edfa9-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-krb6r\" (UID: \"7537a508-8a6d-43df-8d76-a845464edfa9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-krb6r"
Jan 12 13:35:43 crc kubenswrapper[4580]: I0112 13:35:43.410730 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/7537a508-8a6d-43df-8d76-a845464edfa9-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-krb6r\" (UID: \"7537a508-8a6d-43df-8d76-a845464edfa9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-krb6r"
Jan 12 13:35:43 crc kubenswrapper[4580]: I0112 13:35:43.410781 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7537a508-8a6d-43df-8d76-a845464edfa9-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-krb6r\" (UID: \"7537a508-8a6d-43df-8d76-a845464edfa9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-krb6r"
Jan 12 13:35:43 crc kubenswrapper[4580]: I0112 13:35:43.410899 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7537a508-8a6d-43df-8d76-a845464edfa9-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-krb6r\" (UID: \"7537a508-8a6d-43df-8d76-a845464edfa9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-krb6r"
Jan 12 13:35:43 crc kubenswrapper[4580]: I0112 13:35:43.416743 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7537a508-8a6d-43df-8d76-a845464edfa9-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-krb6r\" (UID: \"7537a508-8a6d-43df-8d76-a845464edfa9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-krb6r"
Jan 12 13:35:43 crc kubenswrapper[4580]: I0112 13:35:43.416744 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/7537a508-8a6d-43df-8d76-a845464edfa9-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-krb6r\" (UID: \"7537a508-8a6d-43df-8d76-a845464edfa9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-krb6r"
Jan 12 13:35:43 crc kubenswrapper[4580]: I0112 13:35:43.417093 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7537a508-8a6d-43df-8d76-a845464edfa9-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-krb6r\" (UID: \"7537a508-8a6d-43df-8d76-a845464edfa9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-krb6r"
Jan 12 13:35:43 crc kubenswrapper[4580]: I0112 13:35:43.417617 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7537a508-8a6d-43df-8d76-a845464edfa9-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-krb6r\" (UID: \"7537a508-8a6d-43df-8d76-a845464edfa9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-krb6r"
Jan 12 13:35:43 crc kubenswrapper[4580]: I0112 13:35:43.427308 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvwrb\" (UniqueName: \"kubernetes.io/projected/7537a508-8a6d-43df-8d76-a845464edfa9-kube-api-access-hvwrb\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-krb6r\" (UID: \"7537a508-8a6d-43df-8d76-a845464edfa9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-krb6r"
Jan 12 13:35:43 crc kubenswrapper[4580]: I0112 13:35:43.469509 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-krb6r"
Jan 12 13:35:43 crc kubenswrapper[4580]: I0112 13:35:43.949553 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-krb6r"]
Jan 12 13:35:43 crc kubenswrapper[4580]: I0112 13:35:43.994850 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-krb6r" event={"ID":"7537a508-8a6d-43df-8d76-a845464edfa9","Type":"ContainerStarted","Data":"a757ad39e3bda840d0b4ec53fcaccc22b62dd6e288ea31626755af23a87a5c69"}
Jan 12 13:35:45 crc kubenswrapper[4580]: I0112 13:35:45.004460 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-krb6r" event={"ID":"7537a508-8a6d-43df-8d76-a845464edfa9","Type":"ContainerStarted","Data":"d6a2ccf589f03ad0b40e8cd345c49bdd09eea06bfb1729b558bf97cf56ba7e3c"}
Jan 12 13:35:45 crc kubenswrapper[4580]: I0112 13:35:45.022882 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-krb6r" podStartSLOduration=1.530720701 podStartE2EDuration="2.022853027s" podCreationTimestamp="2026-01-12 13:35:43 +0000 UTC" firstStartedPulling="2026-01-12 13:35:43.940894411 +0000 UTC m=+1742.985113101" lastFinishedPulling="2026-01-12 13:35:44.433026727 +0000 UTC m=+1743.477245427" observedRunningTime="2026-01-12 13:35:45.019610501 +0000 UTC m=+1744.063829191" watchObservedRunningTime="2026-01-12 13:35:45.022853027 +0000 UTC m=+1744.067071717"
Jan 12 13:35:57 crc kubenswrapper[4580]: I0112 13:35:57.281843 4580 scope.go:117] "RemoveContainer" containerID="e6f7529ae288176e91bf12545260cc5495f693307c412bb9a076090d438a7eb1"
Jan 12 13:35:57 crc kubenswrapper[4580]: E0112 13:35:57.282589 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdz6l_openshift-machine-config-operator(aaecc77f-21ca-4f15-86e0-0dff03d2ab7b)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b"
Jan 12 13:36:12 crc kubenswrapper[4580]: I0112 13:36:12.282322 4580 scope.go:117] "RemoveContainer" containerID="e6f7529ae288176e91bf12545260cc5495f693307c412bb9a076090d438a7eb1"
Jan 12 13:36:12 crc kubenswrapper[4580]: E0112 13:36:12.283580 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdz6l_openshift-machine-config-operator(aaecc77f-21ca-4f15-86e0-0dff03d2ab7b)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b"
Jan 12 13:36:23 crc kubenswrapper[4580]: I0112 13:36:23.282161 4580 scope.go:117] "RemoveContainer" containerID="e6f7529ae288176e91bf12545260cc5495f693307c412bb9a076090d438a7eb1"
Jan 12 13:36:23 crc kubenswrapper[4580]: E0112 13:36:23.283406 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdz6l_openshift-machine-config-operator(aaecc77f-21ca-4f15-86e0-0dff03d2ab7b)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b"
Jan 12 13:36:35 crc kubenswrapper[4580]: I0112 13:36:35.281393 4580 scope.go:117] "RemoveContainer" containerID="e6f7529ae288176e91bf12545260cc5495f693307c412bb9a076090d438a7eb1"
Jan 12 13:36:35 crc kubenswrapper[4580]: E0112 13:36:35.282399 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdz6l_openshift-machine-config-operator(aaecc77f-21ca-4f15-86e0-0dff03d2ab7b)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b"
Jan 12 13:36:50 crc kubenswrapper[4580]: I0112 13:36:50.282236 4580 scope.go:117] "RemoveContainer" containerID="e6f7529ae288176e91bf12545260cc5495f693307c412bb9a076090d438a7eb1"
Jan 12 13:36:50 crc kubenswrapper[4580]: E0112 13:36:50.283271 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdz6l_openshift-machine-config-operator(aaecc77f-21ca-4f15-86e0-0dff03d2ab7b)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b"
Jan 12 13:37:02 crc kubenswrapper[4580]: I0112 13:37:02.281466 4580 scope.go:117] "RemoveContainer" containerID="e6f7529ae288176e91bf12545260cc5495f693307c412bb9a076090d438a7eb1"
Jan 12 13:37:02 crc kubenswrapper[4580]: E0112 13:37:02.282425 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdz6l_openshift-machine-config-operator(aaecc77f-21ca-4f15-86e0-0dff03d2ab7b)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b"
Jan 12 13:37:16 crc kubenswrapper[4580]: I0112 13:37:16.281942 4580 scope.go:117] "RemoveContainer" containerID="e6f7529ae288176e91bf12545260cc5495f693307c412bb9a076090d438a7eb1"
Jan 12 13:37:16 crc kubenswrapper[4580]: E0112 13:37:16.282914 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdz6l_openshift-machine-config-operator(aaecc77f-21ca-4f15-86e0-0dff03d2ab7b)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b"
Jan 12 13:37:29 crc kubenswrapper[4580]: I0112 13:37:29.283420 4580 scope.go:117] "RemoveContainer" containerID="e6f7529ae288176e91bf12545260cc5495f693307c412bb9a076090d438a7eb1"
Jan 12 13:37:29 crc kubenswrapper[4580]: E0112 13:37:29.284408 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdz6l_openshift-machine-config-operator(aaecc77f-21ca-4f15-86e0-0dff03d2ab7b)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b"
Jan 12 13:37:43 crc kubenswrapper[4580]: I0112 13:37:43.282058 4580 scope.go:117] "RemoveContainer" containerID="e6f7529ae288176e91bf12545260cc5495f693307c412bb9a076090d438a7eb1"
Jan 12 13:37:43 crc kubenswrapper[4580]: E0112 13:37:43.282710 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdz6l_openshift-machine-config-operator(aaecc77f-21ca-4f15-86e0-0dff03d2ab7b)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b"
Jan 12 13:37:58 crc kubenswrapper[4580]: I0112 13:37:58.282284 4580 scope.go:117] "RemoveContainer" containerID="e6f7529ae288176e91bf12545260cc5495f693307c412bb9a076090d438a7eb1"
Jan 12 13:37:58 crc kubenswrapper[4580]: E0112 13:37:58.283232 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdz6l_openshift-machine-config-operator(aaecc77f-21ca-4f15-86e0-0dff03d2ab7b)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b"
Jan 12 13:38:13 crc kubenswrapper[4580]: I0112 13:38:13.282671 4580 scope.go:117] "RemoveContainer" containerID="e6f7529ae288176e91bf12545260cc5495f693307c412bb9a076090d438a7eb1"
Jan 12 13:38:13 crc kubenswrapper[4580]: E0112 13:38:13.283658 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdz6l_openshift-machine-config-operator(aaecc77f-21ca-4f15-86e0-0dff03d2ab7b)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b"
Jan 12 13:38:26 crc kubenswrapper[4580]: I0112 13:38:26.282074 4580 scope.go:117] "RemoveContainer" containerID="e6f7529ae288176e91bf12545260cc5495f693307c412bb9a076090d438a7eb1"
Jan 12 13:38:26 crc kubenswrapper[4580]: E0112 13:38:26.283007 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdz6l_openshift-machine-config-operator(aaecc77f-21ca-4f15-86e0-0dff03d2ab7b)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b"
Jan 12 13:38:33 crc kubenswrapper[4580]: I0112 13:38:33.334498 4580 generic.go:334] "Generic (PLEG): container finished" podID="7537a508-8a6d-43df-8d76-a845464edfa9" containerID="d6a2ccf589f03ad0b40e8cd345c49bdd09eea06bfb1729b558bf97cf56ba7e3c" exitCode=0
Jan 12 13:38:33 crc kubenswrapper[4580]: I0112 13:38:33.334566 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-krb6r" event={"ID":"7537a508-8a6d-43df-8d76-a845464edfa9","Type":"ContainerDied","Data":"d6a2ccf589f03ad0b40e8cd345c49bdd09eea06bfb1729b558bf97cf56ba7e3c"}
Jan 12 13:38:34 crc kubenswrapper[4580]: I0112 13:38:34.653217 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-krb6r"
Jan 12 13:38:34 crc kubenswrapper[4580]: I0112 13:38:34.698525 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvwrb\" (UniqueName: \"kubernetes.io/projected/7537a508-8a6d-43df-8d76-a845464edfa9-kube-api-access-hvwrb\") pod \"7537a508-8a6d-43df-8d76-a845464edfa9\" (UID: \"7537a508-8a6d-43df-8d76-a845464edfa9\") "
Jan 12 13:38:34 crc kubenswrapper[4580]: I0112 13:38:34.698578 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/7537a508-8a6d-43df-8d76-a845464edfa9-libvirt-secret-0\") pod \"7537a508-8a6d-43df-8d76-a845464edfa9\" (UID: \"7537a508-8a6d-43df-8d76-a845464edfa9\") "
Jan 12 13:38:34 crc kubenswrapper[4580]: I0112 13:38:34.698654 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7537a508-8a6d-43df-8d76-a845464edfa9-libvirt-combined-ca-bundle\") pod \"7537a508-8a6d-43df-8d76-a845464edfa9\" (UID: \"7537a508-8a6d-43df-8d76-a845464edfa9\") "
Jan 12 13:38:34 crc kubenswrapper[4580]: I0112 13:38:34.698727 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7537a508-8a6d-43df-8d76-a845464edfa9-ssh-key-openstack-edpm-ipam\") pod \"7537a508-8a6d-43df-8d76-a845464edfa9\" (UID: \"7537a508-8a6d-43df-8d76-a845464edfa9\") "
Jan 12 13:38:34 crc kubenswrapper[4580]: I0112 13:38:34.698786 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7537a508-8a6d-43df-8d76-a845464edfa9-inventory\") pod \"7537a508-8a6d-43df-8d76-a845464edfa9\" (UID: \"7537a508-8a6d-43df-8d76-a845464edfa9\") "
Jan 12 13:38:34 crc kubenswrapper[4580]: I0112 13:38:34.703726 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7537a508-8a6d-43df-8d76-a845464edfa9-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "7537a508-8a6d-43df-8d76-a845464edfa9" (UID: "7537a508-8a6d-43df-8d76-a845464edfa9"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:38:34 crc kubenswrapper[4580]: I0112 13:38:34.703759 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7537a508-8a6d-43df-8d76-a845464edfa9-kube-api-access-hvwrb" (OuterVolumeSpecName: "kube-api-access-hvwrb") pod "7537a508-8a6d-43df-8d76-a845464edfa9" (UID: "7537a508-8a6d-43df-8d76-a845464edfa9"). InnerVolumeSpecName "kube-api-access-hvwrb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 13:38:34 crc kubenswrapper[4580]: I0112 13:38:34.721163 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7537a508-8a6d-43df-8d76-a845464edfa9-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7537a508-8a6d-43df-8d76-a845464edfa9" (UID: "7537a508-8a6d-43df-8d76-a845464edfa9"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:38:34 crc kubenswrapper[4580]: I0112 13:38:34.721488 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7537a508-8a6d-43df-8d76-a845464edfa9-inventory" (OuterVolumeSpecName: "inventory") pod "7537a508-8a6d-43df-8d76-a845464edfa9" (UID: "7537a508-8a6d-43df-8d76-a845464edfa9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:38:34 crc kubenswrapper[4580]: I0112 13:38:34.722252 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7537a508-8a6d-43df-8d76-a845464edfa9-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "7537a508-8a6d-43df-8d76-a845464edfa9" (UID: "7537a508-8a6d-43df-8d76-a845464edfa9"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:38:34 crc kubenswrapper[4580]: I0112 13:38:34.800830 4580 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7537a508-8a6d-43df-8d76-a845464edfa9-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 12 13:38:34 crc kubenswrapper[4580]: I0112 13:38:34.800932 4580 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7537a508-8a6d-43df-8d76-a845464edfa9-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 12 13:38:34 crc kubenswrapper[4580]: I0112 13:38:34.800997 4580 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7537a508-8a6d-43df-8d76-a845464edfa9-inventory\") on node \"crc\" DevicePath \"\""
Jan 12 13:38:34 crc kubenswrapper[4580]: I0112 13:38:34.801055 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvwrb\" (UniqueName: \"kubernetes.io/projected/7537a508-8a6d-43df-8d76-a845464edfa9-kube-api-access-hvwrb\")
on node \"crc\" DevicePath \"\"" Jan 12 13:38:34 crc kubenswrapper[4580]: I0112 13:38:34.801122 4580 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/7537a508-8a6d-43df-8d76-a845464edfa9-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 12 13:38:35 crc kubenswrapper[4580]: I0112 13:38:35.352741 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-krb6r" event={"ID":"7537a508-8a6d-43df-8d76-a845464edfa9","Type":"ContainerDied","Data":"a757ad39e3bda840d0b4ec53fcaccc22b62dd6e288ea31626755af23a87a5c69"} Jan 12 13:38:35 crc kubenswrapper[4580]: I0112 13:38:35.353091 4580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a757ad39e3bda840d0b4ec53fcaccc22b62dd6e288ea31626755af23a87a5c69" Jan 12 13:38:35 crc kubenswrapper[4580]: I0112 13:38:35.352791 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-krb6r" Jan 12 13:38:35 crc kubenswrapper[4580]: I0112 13:38:35.421161 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-pdcpd"] Jan 12 13:38:35 crc kubenswrapper[4580]: E0112 13:38:35.421565 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7537a508-8a6d-43df-8d76-a845464edfa9" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 12 13:38:35 crc kubenswrapper[4580]: I0112 13:38:35.421586 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="7537a508-8a6d-43df-8d76-a845464edfa9" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 12 13:38:35 crc kubenswrapper[4580]: I0112 13:38:35.421805 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="7537a508-8a6d-43df-8d76-a845464edfa9" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 12 13:38:35 crc kubenswrapper[4580]: I0112 13:38:35.422495 4580 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pdcpd" Jan 12 13:38:35 crc kubenswrapper[4580]: I0112 13:38:35.432646 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 12 13:38:35 crc kubenswrapper[4580]: I0112 13:38:35.433068 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 12 13:38:35 crc kubenswrapper[4580]: I0112 13:38:35.433211 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hm8xh" Jan 12 13:38:35 crc kubenswrapper[4580]: I0112 13:38:35.433260 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 12 13:38:35 crc kubenswrapper[4580]: I0112 13:38:35.433224 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 12 13:38:35 crc kubenswrapper[4580]: I0112 13:38:35.433432 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 12 13:38:35 crc kubenswrapper[4580]: I0112 13:38:35.433787 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-pdcpd"] Jan 12 13:38:35 crc kubenswrapper[4580]: I0112 13:38:35.433937 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 12 13:38:35 crc kubenswrapper[4580]: I0112 13:38:35.512715 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b14f1aa-0c20-4db8-9a42-8abf7baf0140-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pdcpd\" (UID: \"2b14f1aa-0c20-4db8-9a42-8abf7baf0140\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pdcpd" Jan 
12 13:38:35 crc kubenswrapper[4580]: I0112 13:38:35.512819 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/2b14f1aa-0c20-4db8-9a42-8abf7baf0140-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pdcpd\" (UID: \"2b14f1aa-0c20-4db8-9a42-8abf7baf0140\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pdcpd" Jan 12 13:38:35 crc kubenswrapper[4580]: I0112 13:38:35.512912 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b14f1aa-0c20-4db8-9a42-8abf7baf0140-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pdcpd\" (UID: \"2b14f1aa-0c20-4db8-9a42-8abf7baf0140\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pdcpd" Jan 12 13:38:35 crc kubenswrapper[4580]: I0112 13:38:35.512956 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2b14f1aa-0c20-4db8-9a42-8abf7baf0140-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pdcpd\" (UID: \"2b14f1aa-0c20-4db8-9a42-8abf7baf0140\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pdcpd" Jan 12 13:38:35 crc kubenswrapper[4580]: I0112 13:38:35.512977 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/2b14f1aa-0c20-4db8-9a42-8abf7baf0140-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pdcpd\" (UID: \"2b14f1aa-0c20-4db8-9a42-8abf7baf0140\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pdcpd" Jan 12 13:38:35 crc kubenswrapper[4580]: I0112 13:38:35.512992 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/2b14f1aa-0c20-4db8-9a42-8abf7baf0140-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pdcpd\" (UID: \"2b14f1aa-0c20-4db8-9a42-8abf7baf0140\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pdcpd" Jan 12 13:38:35 crc kubenswrapper[4580]: I0112 13:38:35.513032 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/2b14f1aa-0c20-4db8-9a42-8abf7baf0140-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pdcpd\" (UID: \"2b14f1aa-0c20-4db8-9a42-8abf7baf0140\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pdcpd" Jan 12 13:38:35 crc kubenswrapper[4580]: I0112 13:38:35.513068 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/2b14f1aa-0c20-4db8-9a42-8abf7baf0140-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pdcpd\" (UID: \"2b14f1aa-0c20-4db8-9a42-8abf7baf0140\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pdcpd" Jan 12 13:38:35 crc kubenswrapper[4580]: I0112 13:38:35.513088 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njmhf\" (UniqueName: \"kubernetes.io/projected/2b14f1aa-0c20-4db8-9a42-8abf7baf0140-kube-api-access-njmhf\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pdcpd\" (UID: \"2b14f1aa-0c20-4db8-9a42-8abf7baf0140\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pdcpd" Jan 12 13:38:35 crc kubenswrapper[4580]: I0112 13:38:35.615422 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/2b14f1aa-0c20-4db8-9a42-8abf7baf0140-nova-cell1-compute-config-1\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-pdcpd\" (UID: \"2b14f1aa-0c20-4db8-9a42-8abf7baf0140\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pdcpd" Jan 12 13:38:35 crc kubenswrapper[4580]: I0112 13:38:35.615568 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b14f1aa-0c20-4db8-9a42-8abf7baf0140-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pdcpd\" (UID: \"2b14f1aa-0c20-4db8-9a42-8abf7baf0140\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pdcpd" Jan 12 13:38:35 crc kubenswrapper[4580]: I0112 13:38:35.615629 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2b14f1aa-0c20-4db8-9a42-8abf7baf0140-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pdcpd\" (UID: \"2b14f1aa-0c20-4db8-9a42-8abf7baf0140\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pdcpd" Jan 12 13:38:35 crc kubenswrapper[4580]: I0112 13:38:35.615659 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/2b14f1aa-0c20-4db8-9a42-8abf7baf0140-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pdcpd\" (UID: \"2b14f1aa-0c20-4db8-9a42-8abf7baf0140\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pdcpd" Jan 12 13:38:35 crc kubenswrapper[4580]: I0112 13:38:35.615678 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/2b14f1aa-0c20-4db8-9a42-8abf7baf0140-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pdcpd\" (UID: \"2b14f1aa-0c20-4db8-9a42-8abf7baf0140\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pdcpd" Jan 12 13:38:35 crc kubenswrapper[4580]: I0112 13:38:35.615708 4580 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/2b14f1aa-0c20-4db8-9a42-8abf7baf0140-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pdcpd\" (UID: \"2b14f1aa-0c20-4db8-9a42-8abf7baf0140\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pdcpd" Jan 12 13:38:35 crc kubenswrapper[4580]: I0112 13:38:35.615755 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/2b14f1aa-0c20-4db8-9a42-8abf7baf0140-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pdcpd\" (UID: \"2b14f1aa-0c20-4db8-9a42-8abf7baf0140\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pdcpd" Jan 12 13:38:35 crc kubenswrapper[4580]: I0112 13:38:35.615784 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njmhf\" (UniqueName: \"kubernetes.io/projected/2b14f1aa-0c20-4db8-9a42-8abf7baf0140-kube-api-access-njmhf\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pdcpd\" (UID: \"2b14f1aa-0c20-4db8-9a42-8abf7baf0140\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pdcpd" Jan 12 13:38:35 crc kubenswrapper[4580]: I0112 13:38:35.615818 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b14f1aa-0c20-4db8-9a42-8abf7baf0140-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pdcpd\" (UID: \"2b14f1aa-0c20-4db8-9a42-8abf7baf0140\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pdcpd" Jan 12 13:38:35 crc kubenswrapper[4580]: I0112 13:38:35.616789 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/2b14f1aa-0c20-4db8-9a42-8abf7baf0140-nova-extra-config-0\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-pdcpd\" (UID: \"2b14f1aa-0c20-4db8-9a42-8abf7baf0140\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pdcpd" Jan 12 13:38:35 crc kubenswrapper[4580]: I0112 13:38:35.619165 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b14f1aa-0c20-4db8-9a42-8abf7baf0140-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pdcpd\" (UID: \"2b14f1aa-0c20-4db8-9a42-8abf7baf0140\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pdcpd" Jan 12 13:38:35 crc kubenswrapper[4580]: I0112 13:38:35.619342 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/2b14f1aa-0c20-4db8-9a42-8abf7baf0140-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pdcpd\" (UID: \"2b14f1aa-0c20-4db8-9a42-8abf7baf0140\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pdcpd" Jan 12 13:38:35 crc kubenswrapper[4580]: I0112 13:38:35.619479 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2b14f1aa-0c20-4db8-9a42-8abf7baf0140-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pdcpd\" (UID: \"2b14f1aa-0c20-4db8-9a42-8abf7baf0140\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pdcpd" Jan 12 13:38:35 crc kubenswrapper[4580]: I0112 13:38:35.619525 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/2b14f1aa-0c20-4db8-9a42-8abf7baf0140-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pdcpd\" (UID: \"2b14f1aa-0c20-4db8-9a42-8abf7baf0140\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pdcpd" Jan 12 13:38:35 crc kubenswrapper[4580]: I0112 13:38:35.619558 4580 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b14f1aa-0c20-4db8-9a42-8abf7baf0140-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pdcpd\" (UID: \"2b14f1aa-0c20-4db8-9a42-8abf7baf0140\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pdcpd" Jan 12 13:38:35 crc kubenswrapper[4580]: I0112 13:38:35.620020 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/2b14f1aa-0c20-4db8-9a42-8abf7baf0140-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pdcpd\" (UID: \"2b14f1aa-0c20-4db8-9a42-8abf7baf0140\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pdcpd" Jan 12 13:38:35 crc kubenswrapper[4580]: I0112 13:38:35.620617 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/2b14f1aa-0c20-4db8-9a42-8abf7baf0140-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pdcpd\" (UID: \"2b14f1aa-0c20-4db8-9a42-8abf7baf0140\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pdcpd" Jan 12 13:38:35 crc kubenswrapper[4580]: I0112 13:38:35.630989 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njmhf\" (UniqueName: \"kubernetes.io/projected/2b14f1aa-0c20-4db8-9a42-8abf7baf0140-kube-api-access-njmhf\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pdcpd\" (UID: \"2b14f1aa-0c20-4db8-9a42-8abf7baf0140\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pdcpd" Jan 12 13:38:35 crc kubenswrapper[4580]: I0112 13:38:35.735955 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pdcpd" Jan 12 13:38:36 crc kubenswrapper[4580]: I0112 13:38:36.191735 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-pdcpd"] Jan 12 13:38:36 crc kubenswrapper[4580]: I0112 13:38:36.195621 4580 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 12 13:38:36 crc kubenswrapper[4580]: I0112 13:38:36.361339 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pdcpd" event={"ID":"2b14f1aa-0c20-4db8-9a42-8abf7baf0140","Type":"ContainerStarted","Data":"627138834fe72775d9aef6f43e619a5f920539723326f0caa82a1094fc3fcc46"} Jan 12 13:38:37 crc kubenswrapper[4580]: I0112 13:38:37.371558 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pdcpd" event={"ID":"2b14f1aa-0c20-4db8-9a42-8abf7baf0140","Type":"ContainerStarted","Data":"dab72ffbb9046b3215b0dc6acee485e1ec9eb250e5e93f78ea377f1e1714df85"} Jan 12 13:38:37 crc kubenswrapper[4580]: I0112 13:38:37.401902 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pdcpd" podStartSLOduration=1.565803361 podStartE2EDuration="2.401881316s" podCreationTimestamp="2026-01-12 13:38:35 +0000 UTC" firstStartedPulling="2026-01-12 13:38:36.195411556 +0000 UTC m=+1915.239630246" lastFinishedPulling="2026-01-12 13:38:37.031489511 +0000 UTC m=+1916.075708201" observedRunningTime="2026-01-12 13:38:37.39199171 +0000 UTC m=+1916.436210400" watchObservedRunningTime="2026-01-12 13:38:37.401881316 +0000 UTC m=+1916.446100006" Jan 12 13:38:39 crc kubenswrapper[4580]: I0112 13:38:39.281985 4580 scope.go:117] "RemoveContainer" containerID="e6f7529ae288176e91bf12545260cc5495f693307c412bb9a076090d438a7eb1" Jan 12 13:38:39 crc kubenswrapper[4580]: E0112 13:38:39.282588 4580 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdz6l_openshift-machine-config-operator(aaecc77f-21ca-4f15-86e0-0dff03d2ab7b)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" Jan 12 13:38:51 crc kubenswrapper[4580]: I0112 13:38:51.286579 4580 scope.go:117] "RemoveContainer" containerID="e6f7529ae288176e91bf12545260cc5495f693307c412bb9a076090d438a7eb1" Jan 12 13:38:51 crc kubenswrapper[4580]: E0112 13:38:51.287396 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdz6l_openshift-machine-config-operator(aaecc77f-21ca-4f15-86e0-0dff03d2ab7b)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" Jan 12 13:39:06 crc kubenswrapper[4580]: I0112 13:39:06.282190 4580 scope.go:117] "RemoveContainer" containerID="e6f7529ae288176e91bf12545260cc5495f693307c412bb9a076090d438a7eb1" Jan 12 13:39:06 crc kubenswrapper[4580]: E0112 13:39:06.282873 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdz6l_openshift-machine-config-operator(aaecc77f-21ca-4f15-86e0-0dff03d2ab7b)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" Jan 12 13:39:19 crc kubenswrapper[4580]: I0112 13:39:19.282787 4580 scope.go:117] "RemoveContainer" containerID="e6f7529ae288176e91bf12545260cc5495f693307c412bb9a076090d438a7eb1" Jan 12 13:39:19 crc kubenswrapper[4580]: E0112 
13:39:19.285165 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdz6l_openshift-machine-config-operator(aaecc77f-21ca-4f15-86e0-0dff03d2ab7b)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" Jan 12 13:39:30 crc kubenswrapper[4580]: I0112 13:39:30.282433 4580 scope.go:117] "RemoveContainer" containerID="e6f7529ae288176e91bf12545260cc5495f693307c412bb9a076090d438a7eb1" Jan 12 13:39:30 crc kubenswrapper[4580]: E0112 13:39:30.283500 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdz6l_openshift-machine-config-operator(aaecc77f-21ca-4f15-86e0-0dff03d2ab7b)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" Jan 12 13:39:45 crc kubenswrapper[4580]: I0112 13:39:45.282144 4580 scope.go:117] "RemoveContainer" containerID="e6f7529ae288176e91bf12545260cc5495f693307c412bb9a076090d438a7eb1" Jan 12 13:39:45 crc kubenswrapper[4580]: E0112 13:39:45.283076 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdz6l_openshift-machine-config-operator(aaecc77f-21ca-4f15-86e0-0dff03d2ab7b)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" Jan 12 13:39:58 crc kubenswrapper[4580]: I0112 13:39:58.282941 4580 scope.go:117] "RemoveContainer" containerID="e6f7529ae288176e91bf12545260cc5495f693307c412bb9a076090d438a7eb1" Jan 12 13:39:59 crc 
kubenswrapper[4580]: I0112 13:39:59.038083 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" event={"ID":"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b","Type":"ContainerStarted","Data":"fcfabc6cf0a38065a0248083d6b03cb83a27c1814f0aa02c26308ff07404d5b7"} Jan 12 13:40:13 crc kubenswrapper[4580]: I0112 13:40:13.173791 4580 generic.go:334] "Generic (PLEG): container finished" podID="2b14f1aa-0c20-4db8-9a42-8abf7baf0140" containerID="dab72ffbb9046b3215b0dc6acee485e1ec9eb250e5e93f78ea377f1e1714df85" exitCode=0 Jan 12 13:40:13 crc kubenswrapper[4580]: I0112 13:40:13.173896 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pdcpd" event={"ID":"2b14f1aa-0c20-4db8-9a42-8abf7baf0140","Type":"ContainerDied","Data":"dab72ffbb9046b3215b0dc6acee485e1ec9eb250e5e93f78ea377f1e1714df85"} Jan 12 13:40:14 crc kubenswrapper[4580]: I0112 13:40:14.604523 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pdcpd" Jan 12 13:40:14 crc kubenswrapper[4580]: I0112 13:40:14.687147 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/2b14f1aa-0c20-4db8-9a42-8abf7baf0140-nova-cell1-compute-config-0\") pod \"2b14f1aa-0c20-4db8-9a42-8abf7baf0140\" (UID: \"2b14f1aa-0c20-4db8-9a42-8abf7baf0140\") " Jan 12 13:40:14 crc kubenswrapper[4580]: I0112 13:40:14.687228 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/2b14f1aa-0c20-4db8-9a42-8abf7baf0140-nova-migration-ssh-key-1\") pod \"2b14f1aa-0c20-4db8-9a42-8abf7baf0140\" (UID: \"2b14f1aa-0c20-4db8-9a42-8abf7baf0140\") " Jan 12 13:40:14 crc kubenswrapper[4580]: I0112 13:40:14.687259 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b14f1aa-0c20-4db8-9a42-8abf7baf0140-nova-combined-ca-bundle\") pod \"2b14f1aa-0c20-4db8-9a42-8abf7baf0140\" (UID: \"2b14f1aa-0c20-4db8-9a42-8abf7baf0140\") " Jan 12 13:40:14 crc kubenswrapper[4580]: I0112 13:40:14.687933 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/2b14f1aa-0c20-4db8-9a42-8abf7baf0140-nova-migration-ssh-key-0\") pod \"2b14f1aa-0c20-4db8-9a42-8abf7baf0140\" (UID: \"2b14f1aa-0c20-4db8-9a42-8abf7baf0140\") " Jan 12 13:40:14 crc kubenswrapper[4580]: I0112 13:40:14.687987 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/2b14f1aa-0c20-4db8-9a42-8abf7baf0140-nova-cell1-compute-config-1\") pod \"2b14f1aa-0c20-4db8-9a42-8abf7baf0140\" (UID: \"2b14f1aa-0c20-4db8-9a42-8abf7baf0140\") " Jan 12 13:40:14 crc kubenswrapper[4580]: I0112 
13:40:14.688011 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2b14f1aa-0c20-4db8-9a42-8abf7baf0140-ssh-key-openstack-edpm-ipam\") pod \"2b14f1aa-0c20-4db8-9a42-8abf7baf0140\" (UID: \"2b14f1aa-0c20-4db8-9a42-8abf7baf0140\") " Jan 12 13:40:14 crc kubenswrapper[4580]: I0112 13:40:14.688029 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/2b14f1aa-0c20-4db8-9a42-8abf7baf0140-nova-extra-config-0\") pod \"2b14f1aa-0c20-4db8-9a42-8abf7baf0140\" (UID: \"2b14f1aa-0c20-4db8-9a42-8abf7baf0140\") " Jan 12 13:40:14 crc kubenswrapper[4580]: I0112 13:40:14.688056 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njmhf\" (UniqueName: \"kubernetes.io/projected/2b14f1aa-0c20-4db8-9a42-8abf7baf0140-kube-api-access-njmhf\") pod \"2b14f1aa-0c20-4db8-9a42-8abf7baf0140\" (UID: \"2b14f1aa-0c20-4db8-9a42-8abf7baf0140\") " Jan 12 13:40:14 crc kubenswrapper[4580]: I0112 13:40:14.688080 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b14f1aa-0c20-4db8-9a42-8abf7baf0140-inventory\") pod \"2b14f1aa-0c20-4db8-9a42-8abf7baf0140\" (UID: \"2b14f1aa-0c20-4db8-9a42-8abf7baf0140\") " Jan 12 13:40:14 crc kubenswrapper[4580]: I0112 13:40:14.692732 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b14f1aa-0c20-4db8-9a42-8abf7baf0140-kube-api-access-njmhf" (OuterVolumeSpecName: "kube-api-access-njmhf") pod "2b14f1aa-0c20-4db8-9a42-8abf7baf0140" (UID: "2b14f1aa-0c20-4db8-9a42-8abf7baf0140"). InnerVolumeSpecName "kube-api-access-njmhf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:40:14 crc kubenswrapper[4580]: I0112 13:40:14.706187 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b14f1aa-0c20-4db8-9a42-8abf7baf0140-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "2b14f1aa-0c20-4db8-9a42-8abf7baf0140" (UID: "2b14f1aa-0c20-4db8-9a42-8abf7baf0140"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:40:14 crc kubenswrapper[4580]: I0112 13:40:14.709749 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b14f1aa-0c20-4db8-9a42-8abf7baf0140-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "2b14f1aa-0c20-4db8-9a42-8abf7baf0140" (UID: "2b14f1aa-0c20-4db8-9a42-8abf7baf0140"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:40:14 crc kubenswrapper[4580]: I0112 13:40:14.710084 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b14f1aa-0c20-4db8-9a42-8abf7baf0140-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "2b14f1aa-0c20-4db8-9a42-8abf7baf0140" (UID: "2b14f1aa-0c20-4db8-9a42-8abf7baf0140"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:40:14 crc kubenswrapper[4580]: I0112 13:40:14.710726 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b14f1aa-0c20-4db8-9a42-8abf7baf0140-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "2b14f1aa-0c20-4db8-9a42-8abf7baf0140" (UID: "2b14f1aa-0c20-4db8-9a42-8abf7baf0140"). InnerVolumeSpecName "nova-migration-ssh-key-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:40:14 crc kubenswrapper[4580]: I0112 13:40:14.711836 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b14f1aa-0c20-4db8-9a42-8abf7baf0140-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2b14f1aa-0c20-4db8-9a42-8abf7baf0140" (UID: "2b14f1aa-0c20-4db8-9a42-8abf7baf0140"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:40:14 crc kubenswrapper[4580]: I0112 13:40:14.713945 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b14f1aa-0c20-4db8-9a42-8abf7baf0140-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "2b14f1aa-0c20-4db8-9a42-8abf7baf0140" (UID: "2b14f1aa-0c20-4db8-9a42-8abf7baf0140"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:40:14 crc kubenswrapper[4580]: I0112 13:40:14.714970 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b14f1aa-0c20-4db8-9a42-8abf7baf0140-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "2b14f1aa-0c20-4db8-9a42-8abf7baf0140" (UID: "2b14f1aa-0c20-4db8-9a42-8abf7baf0140"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:40:14 crc kubenswrapper[4580]: I0112 13:40:14.719470 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b14f1aa-0c20-4db8-9a42-8abf7baf0140-inventory" (OuterVolumeSpecName: "inventory") pod "2b14f1aa-0c20-4db8-9a42-8abf7baf0140" (UID: "2b14f1aa-0c20-4db8-9a42-8abf7baf0140"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:40:14 crc kubenswrapper[4580]: I0112 13:40:14.790432 4580 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/2b14f1aa-0c20-4db8-9a42-8abf7baf0140-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 12 13:40:14 crc kubenswrapper[4580]: I0112 13:40:14.790520 4580 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/2b14f1aa-0c20-4db8-9a42-8abf7baf0140-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 12 13:40:14 crc kubenswrapper[4580]: I0112 13:40:14.790596 4580 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b14f1aa-0c20-4db8-9a42-8abf7baf0140-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 12 13:40:14 crc kubenswrapper[4580]: I0112 13:40:14.790680 4580 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/2b14f1aa-0c20-4db8-9a42-8abf7baf0140-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 12 13:40:14 crc kubenswrapper[4580]: I0112 13:40:14.790735 4580 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/2b14f1aa-0c20-4db8-9a42-8abf7baf0140-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 12 13:40:14 crc kubenswrapper[4580]: I0112 13:40:14.790791 4580 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2b14f1aa-0c20-4db8-9a42-8abf7baf0140-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 12 13:40:14 crc kubenswrapper[4580]: I0112 13:40:14.790841 4580 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: 
\"kubernetes.io/configmap/2b14f1aa-0c20-4db8-9a42-8abf7baf0140-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 12 13:40:14 crc kubenswrapper[4580]: I0112 13:40:14.790899 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njmhf\" (UniqueName: \"kubernetes.io/projected/2b14f1aa-0c20-4db8-9a42-8abf7baf0140-kube-api-access-njmhf\") on node \"crc\" DevicePath \"\"" Jan 12 13:40:14 crc kubenswrapper[4580]: I0112 13:40:14.790971 4580 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b14f1aa-0c20-4db8-9a42-8abf7baf0140-inventory\") on node \"crc\" DevicePath \"\"" Jan 12 13:40:15 crc kubenswrapper[4580]: I0112 13:40:15.192239 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pdcpd" event={"ID":"2b14f1aa-0c20-4db8-9a42-8abf7baf0140","Type":"ContainerDied","Data":"627138834fe72775d9aef6f43e619a5f920539723326f0caa82a1094fc3fcc46"} Jan 12 13:40:15 crc kubenswrapper[4580]: I0112 13:40:15.192297 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pdcpd" Jan 12 13:40:15 crc kubenswrapper[4580]: I0112 13:40:15.192306 4580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="627138834fe72775d9aef6f43e619a5f920539723326f0caa82a1094fc3fcc46" Jan 12 13:40:15 crc kubenswrapper[4580]: I0112 13:40:15.371187 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-kzrnv"] Jan 12 13:40:15 crc kubenswrapper[4580]: E0112 13:40:15.371947 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b14f1aa-0c20-4db8-9a42-8abf7baf0140" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 12 13:40:15 crc kubenswrapper[4580]: I0112 13:40:15.371969 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b14f1aa-0c20-4db8-9a42-8abf7baf0140" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 12 13:40:15 crc kubenswrapper[4580]: I0112 13:40:15.372205 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b14f1aa-0c20-4db8-9a42-8abf7baf0140" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 12 13:40:15 crc kubenswrapper[4580]: I0112 13:40:15.372882 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-kzrnv" Jan 12 13:40:15 crc kubenswrapper[4580]: I0112 13:40:15.374868 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 12 13:40:15 crc kubenswrapper[4580]: I0112 13:40:15.375027 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hm8xh" Jan 12 13:40:15 crc kubenswrapper[4580]: I0112 13:40:15.375952 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 12 13:40:15 crc kubenswrapper[4580]: I0112 13:40:15.376173 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 12 13:40:15 crc kubenswrapper[4580]: I0112 13:40:15.376346 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 12 13:40:15 crc kubenswrapper[4580]: I0112 13:40:15.384017 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-kzrnv"] Jan 12 13:40:15 crc kubenswrapper[4580]: I0112 13:40:15.506273 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/1e5c1e6d-1fc0-4199-ae0d-67c093f94192-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-kzrnv\" (UID: \"1e5c1e6d-1fc0-4199-ae0d-67c093f94192\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-kzrnv" Jan 12 13:40:15 crc kubenswrapper[4580]: I0112 13:40:15.506375 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/1e5c1e6d-1fc0-4199-ae0d-67c093f94192-ceilometer-compute-config-data-0\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-kzrnv\" (UID: \"1e5c1e6d-1fc0-4199-ae0d-67c093f94192\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-kzrnv" Jan 12 13:40:15 crc kubenswrapper[4580]: I0112 13:40:15.506407 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e5c1e6d-1fc0-4199-ae0d-67c093f94192-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-kzrnv\" (UID: \"1e5c1e6d-1fc0-4199-ae0d-67c093f94192\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-kzrnv" Jan 12 13:40:15 crc kubenswrapper[4580]: I0112 13:40:15.506462 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1e5c1e6d-1fc0-4199-ae0d-67c093f94192-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-kzrnv\" (UID: \"1e5c1e6d-1fc0-4199-ae0d-67c093f94192\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-kzrnv" Jan 12 13:40:15 crc kubenswrapper[4580]: I0112 13:40:15.506569 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e5c1e6d-1fc0-4199-ae0d-67c093f94192-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-kzrnv\" (UID: \"1e5c1e6d-1fc0-4199-ae0d-67c093f94192\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-kzrnv" Jan 12 13:40:15 crc kubenswrapper[4580]: I0112 13:40:15.506599 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/1e5c1e6d-1fc0-4199-ae0d-67c093f94192-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-kzrnv\" (UID: \"1e5c1e6d-1fc0-4199-ae0d-67c093f94192\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-kzrnv" Jan 12 13:40:15 crc kubenswrapper[4580]: I0112 13:40:15.506636 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxrg6\" (UniqueName: \"kubernetes.io/projected/1e5c1e6d-1fc0-4199-ae0d-67c093f94192-kube-api-access-jxrg6\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-kzrnv\" (UID: \"1e5c1e6d-1fc0-4199-ae0d-67c093f94192\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-kzrnv" Jan 12 13:40:15 crc kubenswrapper[4580]: I0112 13:40:15.608902 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxrg6\" (UniqueName: \"kubernetes.io/projected/1e5c1e6d-1fc0-4199-ae0d-67c093f94192-kube-api-access-jxrg6\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-kzrnv\" (UID: \"1e5c1e6d-1fc0-4199-ae0d-67c093f94192\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-kzrnv" Jan 12 13:40:15 crc kubenswrapper[4580]: I0112 13:40:15.609002 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/1e5c1e6d-1fc0-4199-ae0d-67c093f94192-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-kzrnv\" (UID: \"1e5c1e6d-1fc0-4199-ae0d-67c093f94192\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-kzrnv" Jan 12 13:40:15 crc kubenswrapper[4580]: I0112 13:40:15.609062 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/1e5c1e6d-1fc0-4199-ae0d-67c093f94192-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-kzrnv\" (UID: \"1e5c1e6d-1fc0-4199-ae0d-67c093f94192\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-kzrnv" Jan 12 13:40:15 crc kubenswrapper[4580]: I0112 13:40:15.609093 4580 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e5c1e6d-1fc0-4199-ae0d-67c093f94192-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-kzrnv\" (UID: \"1e5c1e6d-1fc0-4199-ae0d-67c093f94192\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-kzrnv" Jan 12 13:40:15 crc kubenswrapper[4580]: I0112 13:40:15.609169 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1e5c1e6d-1fc0-4199-ae0d-67c093f94192-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-kzrnv\" (UID: \"1e5c1e6d-1fc0-4199-ae0d-67c093f94192\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-kzrnv" Jan 12 13:40:15 crc kubenswrapper[4580]: I0112 13:40:15.609442 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e5c1e6d-1fc0-4199-ae0d-67c093f94192-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-kzrnv\" (UID: \"1e5c1e6d-1fc0-4199-ae0d-67c093f94192\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-kzrnv" Jan 12 13:40:15 crc kubenswrapper[4580]: I0112 13:40:15.609472 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/1e5c1e6d-1fc0-4199-ae0d-67c093f94192-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-kzrnv\" (UID: \"1e5c1e6d-1fc0-4199-ae0d-67c093f94192\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-kzrnv" Jan 12 13:40:15 crc kubenswrapper[4580]: I0112 13:40:15.613673 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/1e5c1e6d-1fc0-4199-ae0d-67c093f94192-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-kzrnv\" (UID: \"1e5c1e6d-1fc0-4199-ae0d-67c093f94192\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-kzrnv" Jan 12 13:40:15 crc kubenswrapper[4580]: I0112 13:40:15.614195 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/1e5c1e6d-1fc0-4199-ae0d-67c093f94192-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-kzrnv\" (UID: \"1e5c1e6d-1fc0-4199-ae0d-67c093f94192\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-kzrnv" Jan 12 13:40:15 crc kubenswrapper[4580]: I0112 13:40:15.614825 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e5c1e6d-1fc0-4199-ae0d-67c093f94192-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-kzrnv\" (UID: \"1e5c1e6d-1fc0-4199-ae0d-67c093f94192\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-kzrnv" Jan 12 13:40:15 crc kubenswrapper[4580]: I0112 13:40:15.615396 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/1e5c1e6d-1fc0-4199-ae0d-67c093f94192-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-kzrnv\" (UID: \"1e5c1e6d-1fc0-4199-ae0d-67c093f94192\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-kzrnv" Jan 12 13:40:15 crc kubenswrapper[4580]: I0112 13:40:15.615763 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e5c1e6d-1fc0-4199-ae0d-67c093f94192-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-kzrnv\" (UID: \"1e5c1e6d-1fc0-4199-ae0d-67c093f94192\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-kzrnv" Jan 12 13:40:15 crc kubenswrapper[4580]: I0112 13:40:15.616810 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/1e5c1e6d-1fc0-4199-ae0d-67c093f94192-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-kzrnv\" (UID: \"1e5c1e6d-1fc0-4199-ae0d-67c093f94192\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-kzrnv" Jan 12 13:40:15 crc kubenswrapper[4580]: I0112 13:40:15.624773 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxrg6\" (UniqueName: \"kubernetes.io/projected/1e5c1e6d-1fc0-4199-ae0d-67c093f94192-kube-api-access-jxrg6\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-kzrnv\" (UID: \"1e5c1e6d-1fc0-4199-ae0d-67c093f94192\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-kzrnv" Jan 12 13:40:15 crc kubenswrapper[4580]: I0112 13:40:15.688269 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-kzrnv" Jan 12 13:40:16 crc kubenswrapper[4580]: I0112 13:40:16.158589 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-kzrnv"] Jan 12 13:40:16 crc kubenswrapper[4580]: W0112 13:40:16.162636 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e5c1e6d_1fc0_4199_ae0d_67c093f94192.slice/crio-8d4a896b1daad52fe9c7dcac358fe05b1e11a82b3c757326b3cd6f202057859f WatchSource:0}: Error finding container 8d4a896b1daad52fe9c7dcac358fe05b1e11a82b3c757326b3cd6f202057859f: Status 404 returned error can't find the container with id 8d4a896b1daad52fe9c7dcac358fe05b1e11a82b3c757326b3cd6f202057859f Jan 12 13:40:16 crc kubenswrapper[4580]: I0112 13:40:16.200578 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-kzrnv" event={"ID":"1e5c1e6d-1fc0-4199-ae0d-67c093f94192","Type":"ContainerStarted","Data":"8d4a896b1daad52fe9c7dcac358fe05b1e11a82b3c757326b3cd6f202057859f"} Jan 12 13:40:17 crc kubenswrapper[4580]: I0112 13:40:17.210390 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-kzrnv" event={"ID":"1e5c1e6d-1fc0-4199-ae0d-67c093f94192","Type":"ContainerStarted","Data":"31336af2c6b985796e0baa4429c3170c816b11ca5403a571466728e5480d15b4"} Jan 12 13:40:17 crc kubenswrapper[4580]: I0112 13:40:17.228774 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-kzrnv" podStartSLOduration=1.6735503280000001 podStartE2EDuration="2.228750015s" podCreationTimestamp="2026-01-12 13:40:15 +0000 UTC" firstStartedPulling="2026-01-12 13:40:16.165584077 +0000 UTC m=+2015.209802767" lastFinishedPulling="2026-01-12 13:40:16.720783774 +0000 UTC m=+2015.765002454" 
observedRunningTime="2026-01-12 13:40:17.225040451 +0000 UTC m=+2016.269259141" watchObservedRunningTime="2026-01-12 13:40:17.228750015 +0000 UTC m=+2016.272968705" Jan 12 13:42:04 crc kubenswrapper[4580]: I0112 13:42:04.140285 4580 generic.go:334] "Generic (PLEG): container finished" podID="1e5c1e6d-1fc0-4199-ae0d-67c093f94192" containerID="31336af2c6b985796e0baa4429c3170c816b11ca5403a571466728e5480d15b4" exitCode=0 Jan 12 13:42:04 crc kubenswrapper[4580]: I0112 13:42:04.140383 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-kzrnv" event={"ID":"1e5c1e6d-1fc0-4199-ae0d-67c093f94192","Type":"ContainerDied","Data":"31336af2c6b985796e0baa4429c3170c816b11ca5403a571466728e5480d15b4"} Jan 12 13:42:05 crc kubenswrapper[4580]: I0112 13:42:05.612553 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-kzrnv" Jan 12 13:42:05 crc kubenswrapper[4580]: I0112 13:42:05.711450 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/1e5c1e6d-1fc0-4199-ae0d-67c093f94192-ceilometer-compute-config-data-2\") pod \"1e5c1e6d-1fc0-4199-ae0d-67c093f94192\" (UID: \"1e5c1e6d-1fc0-4199-ae0d-67c093f94192\") " Jan 12 13:42:05 crc kubenswrapper[4580]: I0112 13:42:05.711523 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1e5c1e6d-1fc0-4199-ae0d-67c093f94192-ssh-key-openstack-edpm-ipam\") pod \"1e5c1e6d-1fc0-4199-ae0d-67c093f94192\" (UID: \"1e5c1e6d-1fc0-4199-ae0d-67c093f94192\") " Jan 12 13:42:05 crc kubenswrapper[4580]: I0112 13:42:05.711591 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: 
\"kubernetes.io/secret/1e5c1e6d-1fc0-4199-ae0d-67c093f94192-ceilometer-compute-config-data-1\") pod \"1e5c1e6d-1fc0-4199-ae0d-67c093f94192\" (UID: \"1e5c1e6d-1fc0-4199-ae0d-67c093f94192\") " Jan 12 13:42:05 crc kubenswrapper[4580]: I0112 13:42:05.711632 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxrg6\" (UniqueName: \"kubernetes.io/projected/1e5c1e6d-1fc0-4199-ae0d-67c093f94192-kube-api-access-jxrg6\") pod \"1e5c1e6d-1fc0-4199-ae0d-67c093f94192\" (UID: \"1e5c1e6d-1fc0-4199-ae0d-67c093f94192\") " Jan 12 13:42:05 crc kubenswrapper[4580]: I0112 13:42:05.711664 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/1e5c1e6d-1fc0-4199-ae0d-67c093f94192-ceilometer-compute-config-data-0\") pod \"1e5c1e6d-1fc0-4199-ae0d-67c093f94192\" (UID: \"1e5c1e6d-1fc0-4199-ae0d-67c093f94192\") " Jan 12 13:42:05 crc kubenswrapper[4580]: I0112 13:42:05.711722 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e5c1e6d-1fc0-4199-ae0d-67c093f94192-telemetry-combined-ca-bundle\") pod \"1e5c1e6d-1fc0-4199-ae0d-67c093f94192\" (UID: \"1e5c1e6d-1fc0-4199-ae0d-67c093f94192\") " Jan 12 13:42:05 crc kubenswrapper[4580]: I0112 13:42:05.711753 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e5c1e6d-1fc0-4199-ae0d-67c093f94192-inventory\") pod \"1e5c1e6d-1fc0-4199-ae0d-67c093f94192\" (UID: \"1e5c1e6d-1fc0-4199-ae0d-67c093f94192\") " Jan 12 13:42:05 crc kubenswrapper[4580]: I0112 13:42:05.718604 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e5c1e6d-1fc0-4199-ae0d-67c093f94192-kube-api-access-jxrg6" (OuterVolumeSpecName: "kube-api-access-jxrg6") pod "1e5c1e6d-1fc0-4199-ae0d-67c093f94192" (UID: 
"1e5c1e6d-1fc0-4199-ae0d-67c093f94192"). InnerVolumeSpecName "kube-api-access-jxrg6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:42:05 crc kubenswrapper[4580]: I0112 13:42:05.718804 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e5c1e6d-1fc0-4199-ae0d-67c093f94192-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "1e5c1e6d-1fc0-4199-ae0d-67c093f94192" (UID: "1e5c1e6d-1fc0-4199-ae0d-67c093f94192"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:42:05 crc kubenswrapper[4580]: I0112 13:42:05.738538 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e5c1e6d-1fc0-4199-ae0d-67c093f94192-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "1e5c1e6d-1fc0-4199-ae0d-67c093f94192" (UID: "1e5c1e6d-1fc0-4199-ae0d-67c093f94192"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:42:05 crc kubenswrapper[4580]: I0112 13:42:05.739207 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e5c1e6d-1fc0-4199-ae0d-67c093f94192-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1e5c1e6d-1fc0-4199-ae0d-67c093f94192" (UID: "1e5c1e6d-1fc0-4199-ae0d-67c093f94192"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:42:05 crc kubenswrapper[4580]: I0112 13:42:05.740509 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e5c1e6d-1fc0-4199-ae0d-67c093f94192-inventory" (OuterVolumeSpecName: "inventory") pod "1e5c1e6d-1fc0-4199-ae0d-67c093f94192" (UID: "1e5c1e6d-1fc0-4199-ae0d-67c093f94192"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:42:05 crc kubenswrapper[4580]: I0112 13:42:05.742553 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e5c1e6d-1fc0-4199-ae0d-67c093f94192-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "1e5c1e6d-1fc0-4199-ae0d-67c093f94192" (UID: "1e5c1e6d-1fc0-4199-ae0d-67c093f94192"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:42:05 crc kubenswrapper[4580]: I0112 13:42:05.744968 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e5c1e6d-1fc0-4199-ae0d-67c093f94192-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "1e5c1e6d-1fc0-4199-ae0d-67c093f94192" (UID: "1e5c1e6d-1fc0-4199-ae0d-67c093f94192"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:42:05 crc kubenswrapper[4580]: I0112 13:42:05.813706 4580 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e5c1e6d-1fc0-4199-ae0d-67c093f94192-inventory\") on node \"crc\" DevicePath \"\"" Jan 12 13:42:05 crc kubenswrapper[4580]: I0112 13:42:05.813736 4580 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/1e5c1e6d-1fc0-4199-ae0d-67c093f94192-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 12 13:42:05 crc kubenswrapper[4580]: I0112 13:42:05.813749 4580 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1e5c1e6d-1fc0-4199-ae0d-67c093f94192-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 12 13:42:05 crc kubenswrapper[4580]: I0112 13:42:05.813762 4580 reconciler_common.go:293] "Volume detached for 
volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/1e5c1e6d-1fc0-4199-ae0d-67c093f94192-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 12 13:42:05 crc kubenswrapper[4580]: I0112 13:42:05.813773 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxrg6\" (UniqueName: \"kubernetes.io/projected/1e5c1e6d-1fc0-4199-ae0d-67c093f94192-kube-api-access-jxrg6\") on node \"crc\" DevicePath \"\"" Jan 12 13:42:05 crc kubenswrapper[4580]: I0112 13:42:05.813784 4580 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/1e5c1e6d-1fc0-4199-ae0d-67c093f94192-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 12 13:42:05 crc kubenswrapper[4580]: I0112 13:42:05.813794 4580 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e5c1e6d-1fc0-4199-ae0d-67c093f94192-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 12 13:42:06 crc kubenswrapper[4580]: I0112 13:42:06.125178 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4s7gj"] Jan 12 13:42:06 crc kubenswrapper[4580]: E0112 13:42:06.125550 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e5c1e6d-1fc0-4199-ae0d-67c093f94192" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 12 13:42:06 crc kubenswrapper[4580]: I0112 13:42:06.125568 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e5c1e6d-1fc0-4199-ae0d-67c093f94192" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 12 13:42:06 crc kubenswrapper[4580]: I0112 13:42:06.125759 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e5c1e6d-1fc0-4199-ae0d-67c093f94192" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 12 13:42:06 crc kubenswrapper[4580]: I0112 13:42:06.127004 4580 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4s7gj" Jan 12 13:42:06 crc kubenswrapper[4580]: I0112 13:42:06.138461 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4s7gj"] Jan 12 13:42:06 crc kubenswrapper[4580]: I0112 13:42:06.158770 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-kzrnv" event={"ID":"1e5c1e6d-1fc0-4199-ae0d-67c093f94192","Type":"ContainerDied","Data":"8d4a896b1daad52fe9c7dcac358fe05b1e11a82b3c757326b3cd6f202057859f"} Jan 12 13:42:06 crc kubenswrapper[4580]: I0112 13:42:06.158803 4580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d4a896b1daad52fe9c7dcac358fe05b1e11a82b3c757326b3cd6f202057859f" Jan 12 13:42:06 crc kubenswrapper[4580]: I0112 13:42:06.158841 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-kzrnv" Jan 12 13:42:06 crc kubenswrapper[4580]: I0112 13:42:06.220529 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab4fb44a-b105-45c6-a15d-9016387050d0-utilities\") pod \"certified-operators-4s7gj\" (UID: \"ab4fb44a-b105-45c6-a15d-9016387050d0\") " pod="openshift-marketplace/certified-operators-4s7gj" Jan 12 13:42:06 crc kubenswrapper[4580]: I0112 13:42:06.220768 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8fzm\" (UniqueName: \"kubernetes.io/projected/ab4fb44a-b105-45c6-a15d-9016387050d0-kube-api-access-x8fzm\") pod \"certified-operators-4s7gj\" (UID: \"ab4fb44a-b105-45c6-a15d-9016387050d0\") " pod="openshift-marketplace/certified-operators-4s7gj" Jan 12 13:42:06 crc kubenswrapper[4580]: I0112 13:42:06.221030 4580 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab4fb44a-b105-45c6-a15d-9016387050d0-catalog-content\") pod \"certified-operators-4s7gj\" (UID: \"ab4fb44a-b105-45c6-a15d-9016387050d0\") " pod="openshift-marketplace/certified-operators-4s7gj" Jan 12 13:42:06 crc kubenswrapper[4580]: I0112 13:42:06.322495 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab4fb44a-b105-45c6-a15d-9016387050d0-utilities\") pod \"certified-operators-4s7gj\" (UID: \"ab4fb44a-b105-45c6-a15d-9016387050d0\") " pod="openshift-marketplace/certified-operators-4s7gj" Jan 12 13:42:06 crc kubenswrapper[4580]: I0112 13:42:06.322555 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8fzm\" (UniqueName: \"kubernetes.io/projected/ab4fb44a-b105-45c6-a15d-9016387050d0-kube-api-access-x8fzm\") pod \"certified-operators-4s7gj\" (UID: \"ab4fb44a-b105-45c6-a15d-9016387050d0\") " pod="openshift-marketplace/certified-operators-4s7gj" Jan 12 13:42:06 crc kubenswrapper[4580]: I0112 13:42:06.322677 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab4fb44a-b105-45c6-a15d-9016387050d0-catalog-content\") pod \"certified-operators-4s7gj\" (UID: \"ab4fb44a-b105-45c6-a15d-9016387050d0\") " pod="openshift-marketplace/certified-operators-4s7gj" Jan 12 13:42:06 crc kubenswrapper[4580]: I0112 13:42:06.324340 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab4fb44a-b105-45c6-a15d-9016387050d0-utilities\") pod \"certified-operators-4s7gj\" (UID: \"ab4fb44a-b105-45c6-a15d-9016387050d0\") " pod="openshift-marketplace/certified-operators-4s7gj" Jan 12 13:42:06 crc kubenswrapper[4580]: I0112 13:42:06.324408 4580 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab4fb44a-b105-45c6-a15d-9016387050d0-catalog-content\") pod \"certified-operators-4s7gj\" (UID: \"ab4fb44a-b105-45c6-a15d-9016387050d0\") " pod="openshift-marketplace/certified-operators-4s7gj" Jan 12 13:42:06 crc kubenswrapper[4580]: I0112 13:42:06.340043 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8fzm\" (UniqueName: \"kubernetes.io/projected/ab4fb44a-b105-45c6-a15d-9016387050d0-kube-api-access-x8fzm\") pod \"certified-operators-4s7gj\" (UID: \"ab4fb44a-b105-45c6-a15d-9016387050d0\") " pod="openshift-marketplace/certified-operators-4s7gj" Jan 12 13:42:06 crc kubenswrapper[4580]: I0112 13:42:06.441531 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4s7gj" Jan 12 13:42:06 crc kubenswrapper[4580]: I0112 13:42:06.934218 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4s7gj"] Jan 12 13:42:07 crc kubenswrapper[4580]: I0112 13:42:07.169642 4580 generic.go:334] "Generic (PLEG): container finished" podID="ab4fb44a-b105-45c6-a15d-9016387050d0" containerID="ed271961f4d9160bfa5bee3a8e3870fa43901a1665874e8e634186e7c56fa953" exitCode=0 Jan 12 13:42:07 crc kubenswrapper[4580]: I0112 13:42:07.169922 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4s7gj" event={"ID":"ab4fb44a-b105-45c6-a15d-9016387050d0","Type":"ContainerDied","Data":"ed271961f4d9160bfa5bee3a8e3870fa43901a1665874e8e634186e7c56fa953"} Jan 12 13:42:07 crc kubenswrapper[4580]: I0112 13:42:07.169954 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4s7gj" event={"ID":"ab4fb44a-b105-45c6-a15d-9016387050d0","Type":"ContainerStarted","Data":"b9be37f23a11553decb54a7dbaba6f499fc5950274f4e7f92c0be9fcd7205d61"} Jan 12 13:42:08 crc 
kubenswrapper[4580]: I0112 13:42:08.184809 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4s7gj" event={"ID":"ab4fb44a-b105-45c6-a15d-9016387050d0","Type":"ContainerStarted","Data":"fc21d6047ddff288c687dbd35dd3647116c4f960e99a664c185c0b06b6396500"} Jan 12 13:42:09 crc kubenswrapper[4580]: I0112 13:42:09.196912 4580 generic.go:334] "Generic (PLEG): container finished" podID="ab4fb44a-b105-45c6-a15d-9016387050d0" containerID="fc21d6047ddff288c687dbd35dd3647116c4f960e99a664c185c0b06b6396500" exitCode=0 Jan 12 13:42:09 crc kubenswrapper[4580]: I0112 13:42:09.197033 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4s7gj" event={"ID":"ab4fb44a-b105-45c6-a15d-9016387050d0","Type":"ContainerDied","Data":"fc21d6047ddff288c687dbd35dd3647116c4f960e99a664c185c0b06b6396500"} Jan 12 13:42:10 crc kubenswrapper[4580]: I0112 13:42:10.213321 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4s7gj" event={"ID":"ab4fb44a-b105-45c6-a15d-9016387050d0","Type":"ContainerStarted","Data":"bf25b2ed07f953f6d9d350a1ff6234fcb712cf1ac4981996e9b4db62f7b76df2"} Jan 12 13:42:10 crc kubenswrapper[4580]: I0112 13:42:10.238423 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4s7gj" podStartSLOduration=1.677081005 podStartE2EDuration="4.238377217s" podCreationTimestamp="2026-01-12 13:42:06 +0000 UTC" firstStartedPulling="2026-01-12 13:42:07.172094836 +0000 UTC m=+2126.216313526" lastFinishedPulling="2026-01-12 13:42:09.733391048 +0000 UTC m=+2128.777609738" observedRunningTime="2026-01-12 13:42:10.228787615 +0000 UTC m=+2129.273006304" watchObservedRunningTime="2026-01-12 13:42:10.238377217 +0000 UTC m=+2129.282595907" Jan 12 13:42:16 crc kubenswrapper[4580]: I0112 13:42:16.442481 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-4s7gj" Jan 12 13:42:16 crc kubenswrapper[4580]: I0112 13:42:16.443070 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4s7gj" Jan 12 13:42:16 crc kubenswrapper[4580]: I0112 13:42:16.476590 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4s7gj" Jan 12 13:42:16 crc kubenswrapper[4580]: I0112 13:42:16.949257 4580 patch_prober.go:28] interesting pod/machine-config-daemon-hdz6l container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 12 13:42:16 crc kubenswrapper[4580]: I0112 13:42:16.949336 4580 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 12 13:42:17 crc kubenswrapper[4580]: I0112 13:42:17.302093 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4s7gj" Jan 12 13:42:17 crc kubenswrapper[4580]: I0112 13:42:17.358488 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4s7gj"] Jan 12 13:42:19 crc kubenswrapper[4580]: I0112 13:42:19.280581 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4s7gj" podUID="ab4fb44a-b105-45c6-a15d-9016387050d0" containerName="registry-server" containerID="cri-o://bf25b2ed07f953f6d9d350a1ff6234fcb712cf1ac4981996e9b4db62f7b76df2" gracePeriod=2 Jan 12 13:42:19 crc kubenswrapper[4580]: I0112 13:42:19.677636 4580 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4s7gj" Jan 12 13:42:19 crc kubenswrapper[4580]: I0112 13:42:19.813806 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab4fb44a-b105-45c6-a15d-9016387050d0-utilities\") pod \"ab4fb44a-b105-45c6-a15d-9016387050d0\" (UID: \"ab4fb44a-b105-45c6-a15d-9016387050d0\") " Jan 12 13:42:19 crc kubenswrapper[4580]: I0112 13:42:19.814121 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab4fb44a-b105-45c6-a15d-9016387050d0-catalog-content\") pod \"ab4fb44a-b105-45c6-a15d-9016387050d0\" (UID: \"ab4fb44a-b105-45c6-a15d-9016387050d0\") " Jan 12 13:42:19 crc kubenswrapper[4580]: I0112 13:42:19.814571 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8fzm\" (UniqueName: \"kubernetes.io/projected/ab4fb44a-b105-45c6-a15d-9016387050d0-kube-api-access-x8fzm\") pod \"ab4fb44a-b105-45c6-a15d-9016387050d0\" (UID: \"ab4fb44a-b105-45c6-a15d-9016387050d0\") " Jan 12 13:42:19 crc kubenswrapper[4580]: I0112 13:42:19.814737 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab4fb44a-b105-45c6-a15d-9016387050d0-utilities" (OuterVolumeSpecName: "utilities") pod "ab4fb44a-b105-45c6-a15d-9016387050d0" (UID: "ab4fb44a-b105-45c6-a15d-9016387050d0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 12 13:42:19 crc kubenswrapper[4580]: I0112 13:42:19.815458 4580 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab4fb44a-b105-45c6-a15d-9016387050d0-utilities\") on node \"crc\" DevicePath \"\"" Jan 12 13:42:19 crc kubenswrapper[4580]: I0112 13:42:19.820199 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab4fb44a-b105-45c6-a15d-9016387050d0-kube-api-access-x8fzm" (OuterVolumeSpecName: "kube-api-access-x8fzm") pod "ab4fb44a-b105-45c6-a15d-9016387050d0" (UID: "ab4fb44a-b105-45c6-a15d-9016387050d0"). InnerVolumeSpecName "kube-api-access-x8fzm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:42:19 crc kubenswrapper[4580]: I0112 13:42:19.851506 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab4fb44a-b105-45c6-a15d-9016387050d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ab4fb44a-b105-45c6-a15d-9016387050d0" (UID: "ab4fb44a-b105-45c6-a15d-9016387050d0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 12 13:42:19 crc kubenswrapper[4580]: I0112 13:42:19.917868 4580 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab4fb44a-b105-45c6-a15d-9016387050d0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 12 13:42:19 crc kubenswrapper[4580]: I0112 13:42:19.917898 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8fzm\" (UniqueName: \"kubernetes.io/projected/ab4fb44a-b105-45c6-a15d-9016387050d0-kube-api-access-x8fzm\") on node \"crc\" DevicePath \"\"" Jan 12 13:42:20 crc kubenswrapper[4580]: I0112 13:42:20.290119 4580 generic.go:334] "Generic (PLEG): container finished" podID="ab4fb44a-b105-45c6-a15d-9016387050d0" containerID="bf25b2ed07f953f6d9d350a1ff6234fcb712cf1ac4981996e9b4db62f7b76df2" exitCode=0 Jan 12 13:42:20 crc kubenswrapper[4580]: I0112 13:42:20.290170 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4s7gj" event={"ID":"ab4fb44a-b105-45c6-a15d-9016387050d0","Type":"ContainerDied","Data":"bf25b2ed07f953f6d9d350a1ff6234fcb712cf1ac4981996e9b4db62f7b76df2"} Jan 12 13:42:20 crc kubenswrapper[4580]: I0112 13:42:20.290197 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4s7gj" event={"ID":"ab4fb44a-b105-45c6-a15d-9016387050d0","Type":"ContainerDied","Data":"b9be37f23a11553decb54a7dbaba6f499fc5950274f4e7f92c0be9fcd7205d61"} Jan 12 13:42:20 crc kubenswrapper[4580]: I0112 13:42:20.290219 4580 scope.go:117] "RemoveContainer" containerID="bf25b2ed07f953f6d9d350a1ff6234fcb712cf1ac4981996e9b4db62f7b76df2" Jan 12 13:42:20 crc kubenswrapper[4580]: I0112 13:42:20.290345 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4s7gj" Jan 12 13:42:20 crc kubenswrapper[4580]: I0112 13:42:20.333368 4580 scope.go:117] "RemoveContainer" containerID="fc21d6047ddff288c687dbd35dd3647116c4f960e99a664c185c0b06b6396500" Jan 12 13:42:20 crc kubenswrapper[4580]: I0112 13:42:20.336727 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4s7gj"] Jan 12 13:42:20 crc kubenswrapper[4580]: I0112 13:42:20.343937 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4s7gj"] Jan 12 13:42:20 crc kubenswrapper[4580]: I0112 13:42:20.367507 4580 scope.go:117] "RemoveContainer" containerID="ed271961f4d9160bfa5bee3a8e3870fa43901a1665874e8e634186e7c56fa953" Jan 12 13:42:20 crc kubenswrapper[4580]: I0112 13:42:20.391562 4580 scope.go:117] "RemoveContainer" containerID="bf25b2ed07f953f6d9d350a1ff6234fcb712cf1ac4981996e9b4db62f7b76df2" Jan 12 13:42:20 crc kubenswrapper[4580]: E0112 13:42:20.391902 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf25b2ed07f953f6d9d350a1ff6234fcb712cf1ac4981996e9b4db62f7b76df2\": container with ID starting with bf25b2ed07f953f6d9d350a1ff6234fcb712cf1ac4981996e9b4db62f7b76df2 not found: ID does not exist" containerID="bf25b2ed07f953f6d9d350a1ff6234fcb712cf1ac4981996e9b4db62f7b76df2" Jan 12 13:42:20 crc kubenswrapper[4580]: I0112 13:42:20.391960 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf25b2ed07f953f6d9d350a1ff6234fcb712cf1ac4981996e9b4db62f7b76df2"} err="failed to get container status \"bf25b2ed07f953f6d9d350a1ff6234fcb712cf1ac4981996e9b4db62f7b76df2\": rpc error: code = NotFound desc = could not find container \"bf25b2ed07f953f6d9d350a1ff6234fcb712cf1ac4981996e9b4db62f7b76df2\": container with ID starting with bf25b2ed07f953f6d9d350a1ff6234fcb712cf1ac4981996e9b4db62f7b76df2 not 
found: ID does not exist" Jan 12 13:42:20 crc kubenswrapper[4580]: I0112 13:42:20.391995 4580 scope.go:117] "RemoveContainer" containerID="fc21d6047ddff288c687dbd35dd3647116c4f960e99a664c185c0b06b6396500" Jan 12 13:42:20 crc kubenswrapper[4580]: E0112 13:42:20.392400 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc21d6047ddff288c687dbd35dd3647116c4f960e99a664c185c0b06b6396500\": container with ID starting with fc21d6047ddff288c687dbd35dd3647116c4f960e99a664c185c0b06b6396500 not found: ID does not exist" containerID="fc21d6047ddff288c687dbd35dd3647116c4f960e99a664c185c0b06b6396500" Jan 12 13:42:20 crc kubenswrapper[4580]: I0112 13:42:20.392457 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc21d6047ddff288c687dbd35dd3647116c4f960e99a664c185c0b06b6396500"} err="failed to get container status \"fc21d6047ddff288c687dbd35dd3647116c4f960e99a664c185c0b06b6396500\": rpc error: code = NotFound desc = could not find container \"fc21d6047ddff288c687dbd35dd3647116c4f960e99a664c185c0b06b6396500\": container with ID starting with fc21d6047ddff288c687dbd35dd3647116c4f960e99a664c185c0b06b6396500 not found: ID does not exist" Jan 12 13:42:20 crc kubenswrapper[4580]: I0112 13:42:20.392474 4580 scope.go:117] "RemoveContainer" containerID="ed271961f4d9160bfa5bee3a8e3870fa43901a1665874e8e634186e7c56fa953" Jan 12 13:42:20 crc kubenswrapper[4580]: E0112 13:42:20.392879 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed271961f4d9160bfa5bee3a8e3870fa43901a1665874e8e634186e7c56fa953\": container with ID starting with ed271961f4d9160bfa5bee3a8e3870fa43901a1665874e8e634186e7c56fa953 not found: ID does not exist" containerID="ed271961f4d9160bfa5bee3a8e3870fa43901a1665874e8e634186e7c56fa953" Jan 12 13:42:20 crc kubenswrapper[4580]: I0112 13:42:20.392926 4580 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed271961f4d9160bfa5bee3a8e3870fa43901a1665874e8e634186e7c56fa953"} err="failed to get container status \"ed271961f4d9160bfa5bee3a8e3870fa43901a1665874e8e634186e7c56fa953\": rpc error: code = NotFound desc = could not find container \"ed271961f4d9160bfa5bee3a8e3870fa43901a1665874e8e634186e7c56fa953\": container with ID starting with ed271961f4d9160bfa5bee3a8e3870fa43901a1665874e8e634186e7c56fa953 not found: ID does not exist" Jan 12 13:42:21 crc kubenswrapper[4580]: I0112 13:42:21.291055 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab4fb44a-b105-45c6-a15d-9016387050d0" path="/var/lib/kubelet/pods/ab4fb44a-b105-45c6-a15d-9016387050d0/volumes" Jan 12 13:42:26 crc kubenswrapper[4580]: I0112 13:42:26.303481 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-np47c"] Jan 12 13:42:26 crc kubenswrapper[4580]: E0112 13:42:26.305497 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab4fb44a-b105-45c6-a15d-9016387050d0" containerName="extract-utilities" Jan 12 13:42:26 crc kubenswrapper[4580]: I0112 13:42:26.305588 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab4fb44a-b105-45c6-a15d-9016387050d0" containerName="extract-utilities" Jan 12 13:42:26 crc kubenswrapper[4580]: E0112 13:42:26.305664 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab4fb44a-b105-45c6-a15d-9016387050d0" containerName="registry-server" Jan 12 13:42:26 crc kubenswrapper[4580]: I0112 13:42:26.305717 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab4fb44a-b105-45c6-a15d-9016387050d0" containerName="registry-server" Jan 12 13:42:26 crc kubenswrapper[4580]: E0112 13:42:26.305791 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab4fb44a-b105-45c6-a15d-9016387050d0" containerName="extract-content" Jan 12 13:42:26 crc kubenswrapper[4580]: I0112 
13:42:26.305843 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab4fb44a-b105-45c6-a15d-9016387050d0" containerName="extract-content" Jan 12 13:42:26 crc kubenswrapper[4580]: I0112 13:42:26.306084 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab4fb44a-b105-45c6-a15d-9016387050d0" containerName="registry-server" Jan 12 13:42:26 crc kubenswrapper[4580]: I0112 13:42:26.307463 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-np47c" Jan 12 13:42:26 crc kubenswrapper[4580]: I0112 13:42:26.312398 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-np47c"] Jan 12 13:42:26 crc kubenswrapper[4580]: I0112 13:42:26.455641 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6h5h\" (UniqueName: \"kubernetes.io/projected/05ab54a2-36f9-47b9-9be9-d215ce85e906-kube-api-access-r6h5h\") pod \"redhat-marketplace-np47c\" (UID: \"05ab54a2-36f9-47b9-9be9-d215ce85e906\") " pod="openshift-marketplace/redhat-marketplace-np47c" Jan 12 13:42:26 crc kubenswrapper[4580]: I0112 13:42:26.456707 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05ab54a2-36f9-47b9-9be9-d215ce85e906-utilities\") pod \"redhat-marketplace-np47c\" (UID: \"05ab54a2-36f9-47b9-9be9-d215ce85e906\") " pod="openshift-marketplace/redhat-marketplace-np47c" Jan 12 13:42:26 crc kubenswrapper[4580]: I0112 13:42:26.457528 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05ab54a2-36f9-47b9-9be9-d215ce85e906-catalog-content\") pod \"redhat-marketplace-np47c\" (UID: \"05ab54a2-36f9-47b9-9be9-d215ce85e906\") " pod="openshift-marketplace/redhat-marketplace-np47c" Jan 12 13:42:26 crc kubenswrapper[4580]: 
I0112 13:42:26.559372 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05ab54a2-36f9-47b9-9be9-d215ce85e906-catalog-content\") pod \"redhat-marketplace-np47c\" (UID: \"05ab54a2-36f9-47b9-9be9-d215ce85e906\") " pod="openshift-marketplace/redhat-marketplace-np47c" Jan 12 13:42:26 crc kubenswrapper[4580]: I0112 13:42:26.559534 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6h5h\" (UniqueName: \"kubernetes.io/projected/05ab54a2-36f9-47b9-9be9-d215ce85e906-kube-api-access-r6h5h\") pod \"redhat-marketplace-np47c\" (UID: \"05ab54a2-36f9-47b9-9be9-d215ce85e906\") " pod="openshift-marketplace/redhat-marketplace-np47c" Jan 12 13:42:26 crc kubenswrapper[4580]: I0112 13:42:26.559732 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05ab54a2-36f9-47b9-9be9-d215ce85e906-utilities\") pod \"redhat-marketplace-np47c\" (UID: \"05ab54a2-36f9-47b9-9be9-d215ce85e906\") " pod="openshift-marketplace/redhat-marketplace-np47c" Jan 12 13:42:26 crc kubenswrapper[4580]: I0112 13:42:26.559837 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05ab54a2-36f9-47b9-9be9-d215ce85e906-catalog-content\") pod \"redhat-marketplace-np47c\" (UID: \"05ab54a2-36f9-47b9-9be9-d215ce85e906\") " pod="openshift-marketplace/redhat-marketplace-np47c" Jan 12 13:42:26 crc kubenswrapper[4580]: I0112 13:42:26.560028 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05ab54a2-36f9-47b9-9be9-d215ce85e906-utilities\") pod \"redhat-marketplace-np47c\" (UID: \"05ab54a2-36f9-47b9-9be9-d215ce85e906\") " pod="openshift-marketplace/redhat-marketplace-np47c" Jan 12 13:42:26 crc kubenswrapper[4580]: I0112 13:42:26.577931 4580 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6h5h\" (UniqueName: \"kubernetes.io/projected/05ab54a2-36f9-47b9-9be9-d215ce85e906-kube-api-access-r6h5h\") pod \"redhat-marketplace-np47c\" (UID: \"05ab54a2-36f9-47b9-9be9-d215ce85e906\") " pod="openshift-marketplace/redhat-marketplace-np47c" Jan 12 13:42:26 crc kubenswrapper[4580]: I0112 13:42:26.629966 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-np47c" Jan 12 13:42:27 crc kubenswrapper[4580]: I0112 13:42:27.024798 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-np47c"] Jan 12 13:42:27 crc kubenswrapper[4580]: I0112 13:42:27.364956 4580 generic.go:334] "Generic (PLEG): container finished" podID="05ab54a2-36f9-47b9-9be9-d215ce85e906" containerID="0920f6805f7325021517473e04e085b8c5ace06b44630655e0c011acc127593f" exitCode=0 Jan 12 13:42:27 crc kubenswrapper[4580]: I0112 13:42:27.365009 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-np47c" event={"ID":"05ab54a2-36f9-47b9-9be9-d215ce85e906","Type":"ContainerDied","Data":"0920f6805f7325021517473e04e085b8c5ace06b44630655e0c011acc127593f"} Jan 12 13:42:27 crc kubenswrapper[4580]: I0112 13:42:27.365040 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-np47c" event={"ID":"05ab54a2-36f9-47b9-9be9-d215ce85e906","Type":"ContainerStarted","Data":"b8434981de8a7ccb60f0da24cda2e765f481c22a9075933cc86950e9604dd486"} Jan 12 13:42:29 crc kubenswrapper[4580]: I0112 13:42:29.383209 4580 generic.go:334] "Generic (PLEG): container finished" podID="05ab54a2-36f9-47b9-9be9-d215ce85e906" containerID="3bad219878ac439fd8ba794e775a26a6ca7b6d3e561548d11b46bdb4b52d3a4e" exitCode=0 Jan 12 13:42:29 crc kubenswrapper[4580]: I0112 13:42:29.383287 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-np47c" event={"ID":"05ab54a2-36f9-47b9-9be9-d215ce85e906","Type":"ContainerDied","Data":"3bad219878ac439fd8ba794e775a26a6ca7b6d3e561548d11b46bdb4b52d3a4e"} Jan 12 13:42:30 crc kubenswrapper[4580]: I0112 13:42:30.395763 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-np47c" event={"ID":"05ab54a2-36f9-47b9-9be9-d215ce85e906","Type":"ContainerStarted","Data":"3d1337bd8dd072ae4d0fcdc61deaf86015cf0884582a3ec737c6985a8d3adf20"} Jan 12 13:42:30 crc kubenswrapper[4580]: I0112 13:42:30.417283 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-np47c" podStartSLOduration=1.6501995250000001 podStartE2EDuration="4.417271005s" podCreationTimestamp="2026-01-12 13:42:26 +0000 UTC" firstStartedPulling="2026-01-12 13:42:27.367067043 +0000 UTC m=+2146.411285733" lastFinishedPulling="2026-01-12 13:42:30.134138523 +0000 UTC m=+2149.178357213" observedRunningTime="2026-01-12 13:42:30.409084081 +0000 UTC m=+2149.453302770" watchObservedRunningTime="2026-01-12 13:42:30.417271005 +0000 UTC m=+2149.461489695" Jan 12 13:42:36 crc kubenswrapper[4580]: I0112 13:42:36.630496 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-np47c" Jan 12 13:42:36 crc kubenswrapper[4580]: I0112 13:42:36.631045 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-np47c" Jan 12 13:42:36 crc kubenswrapper[4580]: I0112 13:42:36.670389 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-np47c" Jan 12 13:42:37 crc kubenswrapper[4580]: I0112 13:42:37.492765 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-np47c" Jan 12 13:42:37 crc kubenswrapper[4580]: I0112 13:42:37.539664 4580 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-np47c"] Jan 12 13:42:39 crc kubenswrapper[4580]: I0112 13:42:39.471522 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-np47c" podUID="05ab54a2-36f9-47b9-9be9-d215ce85e906" containerName="registry-server" containerID="cri-o://3d1337bd8dd072ae4d0fcdc61deaf86015cf0884582a3ec737c6985a8d3adf20" gracePeriod=2 Jan 12 13:42:39 crc kubenswrapper[4580]: I0112 13:42:39.884644 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-np47c" Jan 12 13:42:39 crc kubenswrapper[4580]: I0112 13:42:39.928764 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05ab54a2-36f9-47b9-9be9-d215ce85e906-catalog-content\") pod \"05ab54a2-36f9-47b9-9be9-d215ce85e906\" (UID: \"05ab54a2-36f9-47b9-9be9-d215ce85e906\") " Jan 12 13:42:39 crc kubenswrapper[4580]: I0112 13:42:39.928925 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6h5h\" (UniqueName: \"kubernetes.io/projected/05ab54a2-36f9-47b9-9be9-d215ce85e906-kube-api-access-r6h5h\") pod \"05ab54a2-36f9-47b9-9be9-d215ce85e906\" (UID: \"05ab54a2-36f9-47b9-9be9-d215ce85e906\") " Jan 12 13:42:39 crc kubenswrapper[4580]: I0112 13:42:39.928956 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05ab54a2-36f9-47b9-9be9-d215ce85e906-utilities\") pod \"05ab54a2-36f9-47b9-9be9-d215ce85e906\" (UID: \"05ab54a2-36f9-47b9-9be9-d215ce85e906\") " Jan 12 13:42:39 crc kubenswrapper[4580]: I0112 13:42:39.929558 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05ab54a2-36f9-47b9-9be9-d215ce85e906-utilities" (OuterVolumeSpecName: "utilities") pod 
"05ab54a2-36f9-47b9-9be9-d215ce85e906" (UID: "05ab54a2-36f9-47b9-9be9-d215ce85e906"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 12 13:42:39 crc kubenswrapper[4580]: I0112 13:42:39.933944 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05ab54a2-36f9-47b9-9be9-d215ce85e906-kube-api-access-r6h5h" (OuterVolumeSpecName: "kube-api-access-r6h5h") pod "05ab54a2-36f9-47b9-9be9-d215ce85e906" (UID: "05ab54a2-36f9-47b9-9be9-d215ce85e906"). InnerVolumeSpecName "kube-api-access-r6h5h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:42:39 crc kubenswrapper[4580]: I0112 13:42:39.944820 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05ab54a2-36f9-47b9-9be9-d215ce85e906-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "05ab54a2-36f9-47b9-9be9-d215ce85e906" (UID: "05ab54a2-36f9-47b9-9be9-d215ce85e906"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 12 13:42:40 crc kubenswrapper[4580]: I0112 13:42:40.029873 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6h5h\" (UniqueName: \"kubernetes.io/projected/05ab54a2-36f9-47b9-9be9-d215ce85e906-kube-api-access-r6h5h\") on node \"crc\" DevicePath \"\"" Jan 12 13:42:40 crc kubenswrapper[4580]: I0112 13:42:40.029901 4580 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05ab54a2-36f9-47b9-9be9-d215ce85e906-utilities\") on node \"crc\" DevicePath \"\"" Jan 12 13:42:40 crc kubenswrapper[4580]: I0112 13:42:40.029914 4580 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05ab54a2-36f9-47b9-9be9-d215ce85e906-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 12 13:42:40 crc kubenswrapper[4580]: I0112 13:42:40.483024 4580 generic.go:334] "Generic (PLEG): container finished" podID="05ab54a2-36f9-47b9-9be9-d215ce85e906" containerID="3d1337bd8dd072ae4d0fcdc61deaf86015cf0884582a3ec737c6985a8d3adf20" exitCode=0 Jan 12 13:42:40 crc kubenswrapper[4580]: I0112 13:42:40.483081 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-np47c" event={"ID":"05ab54a2-36f9-47b9-9be9-d215ce85e906","Type":"ContainerDied","Data":"3d1337bd8dd072ae4d0fcdc61deaf86015cf0884582a3ec737c6985a8d3adf20"} Jan 12 13:42:40 crc kubenswrapper[4580]: I0112 13:42:40.483138 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-np47c" event={"ID":"05ab54a2-36f9-47b9-9be9-d215ce85e906","Type":"ContainerDied","Data":"b8434981de8a7ccb60f0da24cda2e765f481c22a9075933cc86950e9604dd486"} Jan 12 13:42:40 crc kubenswrapper[4580]: I0112 13:42:40.483162 4580 scope.go:117] "RemoveContainer" containerID="3d1337bd8dd072ae4d0fcdc61deaf86015cf0884582a3ec737c6985a8d3adf20" Jan 12 13:42:40 crc kubenswrapper[4580]: I0112 
13:42:40.483334 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-np47c" Jan 12 13:42:40 crc kubenswrapper[4580]: I0112 13:42:40.521029 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-np47c"] Jan 12 13:42:40 crc kubenswrapper[4580]: I0112 13:42:40.523078 4580 scope.go:117] "RemoveContainer" containerID="3bad219878ac439fd8ba794e775a26a6ca7b6d3e561548d11b46bdb4b52d3a4e" Jan 12 13:42:40 crc kubenswrapper[4580]: I0112 13:42:40.527435 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-np47c"] Jan 12 13:42:40 crc kubenswrapper[4580]: I0112 13:42:40.546365 4580 scope.go:117] "RemoveContainer" containerID="0920f6805f7325021517473e04e085b8c5ace06b44630655e0c011acc127593f" Jan 12 13:42:40 crc kubenswrapper[4580]: I0112 13:42:40.581594 4580 scope.go:117] "RemoveContainer" containerID="3d1337bd8dd072ae4d0fcdc61deaf86015cf0884582a3ec737c6985a8d3adf20" Jan 12 13:42:40 crc kubenswrapper[4580]: E0112 13:42:40.582071 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d1337bd8dd072ae4d0fcdc61deaf86015cf0884582a3ec737c6985a8d3adf20\": container with ID starting with 3d1337bd8dd072ae4d0fcdc61deaf86015cf0884582a3ec737c6985a8d3adf20 not found: ID does not exist" containerID="3d1337bd8dd072ae4d0fcdc61deaf86015cf0884582a3ec737c6985a8d3adf20" Jan 12 13:42:40 crc kubenswrapper[4580]: I0112 13:42:40.582127 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d1337bd8dd072ae4d0fcdc61deaf86015cf0884582a3ec737c6985a8d3adf20"} err="failed to get container status \"3d1337bd8dd072ae4d0fcdc61deaf86015cf0884582a3ec737c6985a8d3adf20\": rpc error: code = NotFound desc = could not find container \"3d1337bd8dd072ae4d0fcdc61deaf86015cf0884582a3ec737c6985a8d3adf20\": container with ID starting with 
3d1337bd8dd072ae4d0fcdc61deaf86015cf0884582a3ec737c6985a8d3adf20 not found: ID does not exist" Jan 12 13:42:40 crc kubenswrapper[4580]: I0112 13:42:40.582150 4580 scope.go:117] "RemoveContainer" containerID="3bad219878ac439fd8ba794e775a26a6ca7b6d3e561548d11b46bdb4b52d3a4e" Jan 12 13:42:40 crc kubenswrapper[4580]: E0112 13:42:40.582395 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bad219878ac439fd8ba794e775a26a6ca7b6d3e561548d11b46bdb4b52d3a4e\": container with ID starting with 3bad219878ac439fd8ba794e775a26a6ca7b6d3e561548d11b46bdb4b52d3a4e not found: ID does not exist" containerID="3bad219878ac439fd8ba794e775a26a6ca7b6d3e561548d11b46bdb4b52d3a4e" Jan 12 13:42:40 crc kubenswrapper[4580]: I0112 13:42:40.582435 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bad219878ac439fd8ba794e775a26a6ca7b6d3e561548d11b46bdb4b52d3a4e"} err="failed to get container status \"3bad219878ac439fd8ba794e775a26a6ca7b6d3e561548d11b46bdb4b52d3a4e\": rpc error: code = NotFound desc = could not find container \"3bad219878ac439fd8ba794e775a26a6ca7b6d3e561548d11b46bdb4b52d3a4e\": container with ID starting with 3bad219878ac439fd8ba794e775a26a6ca7b6d3e561548d11b46bdb4b52d3a4e not found: ID does not exist" Jan 12 13:42:40 crc kubenswrapper[4580]: I0112 13:42:40.582461 4580 scope.go:117] "RemoveContainer" containerID="0920f6805f7325021517473e04e085b8c5ace06b44630655e0c011acc127593f" Jan 12 13:42:40 crc kubenswrapper[4580]: E0112 13:42:40.582688 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0920f6805f7325021517473e04e085b8c5ace06b44630655e0c011acc127593f\": container with ID starting with 0920f6805f7325021517473e04e085b8c5ace06b44630655e0c011acc127593f not found: ID does not exist" containerID="0920f6805f7325021517473e04e085b8c5ace06b44630655e0c011acc127593f" Jan 12 13:42:40 crc 
kubenswrapper[4580]: I0112 13:42:40.582711 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0920f6805f7325021517473e04e085b8c5ace06b44630655e0c011acc127593f"} err="failed to get container status \"0920f6805f7325021517473e04e085b8c5ace06b44630655e0c011acc127593f\": rpc error: code = NotFound desc = could not find container \"0920f6805f7325021517473e04e085b8c5ace06b44630655e0c011acc127593f\": container with ID starting with 0920f6805f7325021517473e04e085b8c5ace06b44630655e0c011acc127593f not found: ID does not exist"
Jan 12 13:42:41 crc kubenswrapper[4580]: I0112 13:42:41.297312 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05ab54a2-36f9-47b9-9be9-d215ce85e906" path="/var/lib/kubelet/pods/05ab54a2-36f9-47b9-9be9-d215ce85e906/volumes"
Jan 12 13:42:43 crc kubenswrapper[4580]: I0112 13:42:43.025717 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"]
Jan 12 13:42:43 crc kubenswrapper[4580]: E0112 13:42:43.026472 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05ab54a2-36f9-47b9-9be9-d215ce85e906" containerName="extract-utilities"
Jan 12 13:42:43 crc kubenswrapper[4580]: I0112 13:42:43.026492 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="05ab54a2-36f9-47b9-9be9-d215ce85e906" containerName="extract-utilities"
Jan 12 13:42:43 crc kubenswrapper[4580]: E0112 13:42:43.026531 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05ab54a2-36f9-47b9-9be9-d215ce85e906" containerName="registry-server"
Jan 12 13:42:43 crc kubenswrapper[4580]: I0112 13:42:43.026537 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="05ab54a2-36f9-47b9-9be9-d215ce85e906" containerName="registry-server"
Jan 12 13:42:43 crc kubenswrapper[4580]: E0112 13:42:43.026576 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05ab54a2-36f9-47b9-9be9-d215ce85e906" containerName="extract-content"
Jan 12 13:42:43 crc kubenswrapper[4580]: I0112 13:42:43.026582 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="05ab54a2-36f9-47b9-9be9-d215ce85e906" containerName="extract-content"
Jan 12 13:42:43 crc kubenswrapper[4580]: I0112 13:42:43.026793 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="05ab54a2-36f9-47b9-9be9-d215ce85e906" containerName="registry-server"
Jan 12 13:42:43 crc kubenswrapper[4580]: I0112 13:42:43.027625 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Jan 12 13:42:43 crc kubenswrapper[4580]: I0112 13:42:43.030325 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0"
Jan 12 13:42:43 crc kubenswrapper[4580]: I0112 13:42:43.031040 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0"
Jan 12 13:42:43 crc kubenswrapper[4580]: I0112 13:42:43.031357 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-2tdfr"
Jan 12 13:42:43 crc kubenswrapper[4580]: I0112 13:42:43.031502 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key"
Jan 12 13:42:43 crc kubenswrapper[4580]: I0112 13:42:43.045756 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"]
Jan 12 13:42:43 crc kubenswrapper[4580]: I0112 13:42:43.090936 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8e031ef3-1afa-438b-8f95-cd63e4d5eb5a-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"8e031ef3-1afa-438b-8f95-cd63e4d5eb5a\") " pod="openstack/tempest-tests-tempest"
Jan 12 13:42:43 crc kubenswrapper[4580]: I0112 13:42:43.090972 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8e031ef3-1afa-438b-8f95-cd63e4d5eb5a-config-data\") pod \"tempest-tests-tempest\" (UID: \"8e031ef3-1afa-438b-8f95-cd63e4d5eb5a\") " pod="openstack/tempest-tests-tempest"
Jan 12 13:42:43 crc kubenswrapper[4580]: I0112 13:42:43.091006 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8e031ef3-1afa-438b-8f95-cd63e4d5eb5a-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"8e031ef3-1afa-438b-8f95-cd63e4d5eb5a\") " pod="openstack/tempest-tests-tempest"
Jan 12 13:42:43 crc kubenswrapper[4580]: I0112 13:42:43.192829 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/8e031ef3-1afa-438b-8f95-cd63e4d5eb5a-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"8e031ef3-1afa-438b-8f95-cd63e4d5eb5a\") " pod="openstack/tempest-tests-tempest"
Jan 12 13:42:43 crc kubenswrapper[4580]: I0112 13:42:43.192962 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9zhn\" (UniqueName: \"kubernetes.io/projected/8e031ef3-1afa-438b-8f95-cd63e4d5eb5a-kube-api-access-f9zhn\") pod \"tempest-tests-tempest\" (UID: \"8e031ef3-1afa-438b-8f95-cd63e4d5eb5a\") " pod="openstack/tempest-tests-tempest"
Jan 12 13:42:43 crc kubenswrapper[4580]: I0112 13:42:43.193018 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8e031ef3-1afa-438b-8f95-cd63e4d5eb5a-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"8e031ef3-1afa-438b-8f95-cd63e4d5eb5a\") " pod="openstack/tempest-tests-tempest"
Jan 12 13:42:43 crc kubenswrapper[4580]: I0112 13:42:43.193044 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8e031ef3-1afa-438b-8f95-cd63e4d5eb5a-config-data\") pod \"tempest-tests-tempest\" (UID: \"8e031ef3-1afa-438b-8f95-cd63e4d5eb5a\") " pod="openstack/tempest-tests-tempest"
Jan 12 13:42:43 crc kubenswrapper[4580]: I0112 13:42:43.193093 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8e031ef3-1afa-438b-8f95-cd63e4d5eb5a-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"8e031ef3-1afa-438b-8f95-cd63e4d5eb5a\") " pod="openstack/tempest-tests-tempest"
Jan 12 13:42:43 crc kubenswrapper[4580]: I0112 13:42:43.193151 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/8e031ef3-1afa-438b-8f95-cd63e4d5eb5a-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"8e031ef3-1afa-438b-8f95-cd63e4d5eb5a\") " pod="openstack/tempest-tests-tempest"
Jan 12 13:42:43 crc kubenswrapper[4580]: I0112 13:42:43.193215 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/8e031ef3-1afa-438b-8f95-cd63e4d5eb5a-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"8e031ef3-1afa-438b-8f95-cd63e4d5eb5a\") " pod="openstack/tempest-tests-tempest"
Jan 12 13:42:43 crc kubenswrapper[4580]: I0112 13:42:43.193269 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"8e031ef3-1afa-438b-8f95-cd63e4d5eb5a\") " pod="openstack/tempest-tests-tempest"
Jan 12 13:42:43 crc kubenswrapper[4580]: I0112 13:42:43.193322 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8e031ef3-1afa-438b-8f95-cd63e4d5eb5a-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"8e031ef3-1afa-438b-8f95-cd63e4d5eb5a\") " pod="openstack/tempest-tests-tempest"
Jan 12 13:42:43 crc kubenswrapper[4580]: I0112 13:42:43.194479 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8e031ef3-1afa-438b-8f95-cd63e4d5eb5a-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"8e031ef3-1afa-438b-8f95-cd63e4d5eb5a\") " pod="openstack/tempest-tests-tempest"
Jan 12 13:42:43 crc kubenswrapper[4580]: I0112 13:42:43.194560 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8e031ef3-1afa-438b-8f95-cd63e4d5eb5a-config-data\") pod \"tempest-tests-tempest\" (UID: \"8e031ef3-1afa-438b-8f95-cd63e4d5eb5a\") " pod="openstack/tempest-tests-tempest"
Jan 12 13:42:43 crc kubenswrapper[4580]: I0112 13:42:43.200825 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8e031ef3-1afa-438b-8f95-cd63e4d5eb5a-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"8e031ef3-1afa-438b-8f95-cd63e4d5eb5a\") " pod="openstack/tempest-tests-tempest"
Jan 12 13:42:43 crc kubenswrapper[4580]: I0112 13:42:43.294993 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9zhn\" (UniqueName: \"kubernetes.io/projected/8e031ef3-1afa-438b-8f95-cd63e4d5eb5a-kube-api-access-f9zhn\") pod \"tempest-tests-tempest\" (UID: \"8e031ef3-1afa-438b-8f95-cd63e4d5eb5a\") " pod="openstack/tempest-tests-tempest"
Jan 12 13:42:43 crc kubenswrapper[4580]: I0112 13:42:43.295057 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/8e031ef3-1afa-438b-8f95-cd63e4d5eb5a-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"8e031ef3-1afa-438b-8f95-cd63e4d5eb5a\") " pod="openstack/tempest-tests-tempest"
Jan 12 13:42:43 crc kubenswrapper[4580]: I0112 13:42:43.295093 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/8e031ef3-1afa-438b-8f95-cd63e4d5eb5a-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"8e031ef3-1afa-438b-8f95-cd63e4d5eb5a\") " pod="openstack/tempest-tests-tempest"
Jan 12 13:42:43 crc kubenswrapper[4580]: I0112 13:42:43.295136 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"8e031ef3-1afa-438b-8f95-cd63e4d5eb5a\") " pod="openstack/tempest-tests-tempest"
Jan 12 13:42:43 crc kubenswrapper[4580]: I0112 13:42:43.295163 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8e031ef3-1afa-438b-8f95-cd63e4d5eb5a-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"8e031ef3-1afa-438b-8f95-cd63e4d5eb5a\") " pod="openstack/tempest-tests-tempest"
Jan 12 13:42:43 crc kubenswrapper[4580]: I0112 13:42:43.295218 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/8e031ef3-1afa-438b-8f95-cd63e4d5eb5a-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"8e031ef3-1afa-438b-8f95-cd63e4d5eb5a\") " pod="openstack/tempest-tests-tempest"
Jan 12 13:42:43 crc kubenswrapper[4580]: I0112 13:42:43.295811 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/8e031ef3-1afa-438b-8f95-cd63e4d5eb5a-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"8e031ef3-1afa-438b-8f95-cd63e4d5eb5a\") " pod="openstack/tempest-tests-tempest"
Jan 12 13:42:43 crc kubenswrapper[4580]: I0112 13:42:43.295893 4580 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"8e031ef3-1afa-438b-8f95-cd63e4d5eb5a\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/tempest-tests-tempest"
Jan 12 13:42:43 crc kubenswrapper[4580]: I0112 13:42:43.296136 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/8e031ef3-1afa-438b-8f95-cd63e4d5eb5a-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"8e031ef3-1afa-438b-8f95-cd63e4d5eb5a\") " pod="openstack/tempest-tests-tempest"
Jan 12 13:42:43 crc kubenswrapper[4580]: I0112 13:42:43.300070 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/8e031ef3-1afa-438b-8f95-cd63e4d5eb5a-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"8e031ef3-1afa-438b-8f95-cd63e4d5eb5a\") " pod="openstack/tempest-tests-tempest"
Jan 12 13:42:43 crc kubenswrapper[4580]: I0112 13:42:43.301781 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8e031ef3-1afa-438b-8f95-cd63e4d5eb5a-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"8e031ef3-1afa-438b-8f95-cd63e4d5eb5a\") " pod="openstack/tempest-tests-tempest"
Jan 12 13:42:43 crc kubenswrapper[4580]: I0112 13:42:43.310847 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9zhn\" (UniqueName: \"kubernetes.io/projected/8e031ef3-1afa-438b-8f95-cd63e4d5eb5a-kube-api-access-f9zhn\") pod \"tempest-tests-tempest\" (UID: \"8e031ef3-1afa-438b-8f95-cd63e4d5eb5a\") " pod="openstack/tempest-tests-tempest"
Jan 12 13:42:43 crc kubenswrapper[4580]: I0112 13:42:43.319309 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"8e031ef3-1afa-438b-8f95-cd63e4d5eb5a\") " pod="openstack/tempest-tests-tempest"
Jan 12 13:42:43 crc kubenswrapper[4580]: I0112 13:42:43.351182 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Jan 12 13:42:43 crc kubenswrapper[4580]: I0112 13:42:43.752766 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"]
Jan 12 13:42:44 crc kubenswrapper[4580]: I0112 13:42:44.523807 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"8e031ef3-1afa-438b-8f95-cd63e4d5eb5a","Type":"ContainerStarted","Data":"24553f7e59b2aaab036b7405a5929b2cac4dcccabb03468f44af974d0f04c2d1"}
Jan 12 13:42:46 crc kubenswrapper[4580]: I0112 13:42:46.949617 4580 patch_prober.go:28] interesting pod/machine-config-daemon-hdz6l container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 12 13:42:46 crc kubenswrapper[4580]: I0112 13:42:46.949894 4580 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 12 13:42:56 crc kubenswrapper[4580]: I0112 13:42:56.137590 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-sgrn4"]
Jan 12 13:42:56 crc kubenswrapper[4580]: I0112 13:42:56.139739 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sgrn4"
Jan 12 13:42:56 crc kubenswrapper[4580]: I0112 13:42:56.149811 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sgrn4"]
Jan 12 13:42:56 crc kubenswrapper[4580]: I0112 13:42:56.245922 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63f848ea-8cf2-4f85-8b2c-355aa8fad467-utilities\") pod \"redhat-operators-sgrn4\" (UID: \"63f848ea-8cf2-4f85-8b2c-355aa8fad467\") " pod="openshift-marketplace/redhat-operators-sgrn4"
Jan 12 13:42:56 crc kubenswrapper[4580]: I0112 13:42:56.246381 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63f848ea-8cf2-4f85-8b2c-355aa8fad467-catalog-content\") pod \"redhat-operators-sgrn4\" (UID: \"63f848ea-8cf2-4f85-8b2c-355aa8fad467\") " pod="openshift-marketplace/redhat-operators-sgrn4"
Jan 12 13:42:56 crc kubenswrapper[4580]: I0112 13:42:56.246682 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fm2w2\" (UniqueName: \"kubernetes.io/projected/63f848ea-8cf2-4f85-8b2c-355aa8fad467-kube-api-access-fm2w2\") pod \"redhat-operators-sgrn4\" (UID: \"63f848ea-8cf2-4f85-8b2c-355aa8fad467\") " pod="openshift-marketplace/redhat-operators-sgrn4"
Jan 12 13:42:56 crc kubenswrapper[4580]: I0112 13:42:56.350404 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fm2w2\" (UniqueName: \"kubernetes.io/projected/63f848ea-8cf2-4f85-8b2c-355aa8fad467-kube-api-access-fm2w2\") pod \"redhat-operators-sgrn4\" (UID: \"63f848ea-8cf2-4f85-8b2c-355aa8fad467\") " pod="openshift-marketplace/redhat-operators-sgrn4"
Jan 12 13:42:56 crc kubenswrapper[4580]: I0112 13:42:56.350795 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63f848ea-8cf2-4f85-8b2c-355aa8fad467-utilities\") pod \"redhat-operators-sgrn4\" (UID: \"63f848ea-8cf2-4f85-8b2c-355aa8fad467\") " pod="openshift-marketplace/redhat-operators-sgrn4"
Jan 12 13:42:56 crc kubenswrapper[4580]: I0112 13:42:56.351198 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63f848ea-8cf2-4f85-8b2c-355aa8fad467-utilities\") pod \"redhat-operators-sgrn4\" (UID: \"63f848ea-8cf2-4f85-8b2c-355aa8fad467\") " pod="openshift-marketplace/redhat-operators-sgrn4"
Jan 12 13:42:56 crc kubenswrapper[4580]: I0112 13:42:56.351491 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63f848ea-8cf2-4f85-8b2c-355aa8fad467-catalog-content\") pod \"redhat-operators-sgrn4\" (UID: \"63f848ea-8cf2-4f85-8b2c-355aa8fad467\") " pod="openshift-marketplace/redhat-operators-sgrn4"
Jan 12 13:42:56 crc kubenswrapper[4580]: I0112 13:42:56.351719 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63f848ea-8cf2-4f85-8b2c-355aa8fad467-catalog-content\") pod \"redhat-operators-sgrn4\" (UID: \"63f848ea-8cf2-4f85-8b2c-355aa8fad467\") " pod="openshift-marketplace/redhat-operators-sgrn4"
Jan 12 13:42:56 crc kubenswrapper[4580]: I0112 13:42:56.369784 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fm2w2\" (UniqueName: \"kubernetes.io/projected/63f848ea-8cf2-4f85-8b2c-355aa8fad467-kube-api-access-fm2w2\") pod \"redhat-operators-sgrn4\" (UID: \"63f848ea-8cf2-4f85-8b2c-355aa8fad467\") " pod="openshift-marketplace/redhat-operators-sgrn4"
Jan 12 13:42:56 crc kubenswrapper[4580]: I0112 13:42:56.457539 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sgrn4"
Jan 12 13:43:13 crc kubenswrapper[4580]: E0112 13:43:13.461680 4580 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified"
Jan 12 13:43:13 crc kubenswrapper[4580]: E0112 13:43:13.462199 4580 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f9zhn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(8e031ef3-1afa-438b-8f95-cd63e4d5eb5a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 12 13:43:13 crc kubenswrapper[4580]: E0112 13:43:13.463635 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="8e031ef3-1afa-438b-8f95-cd63e4d5eb5a"
Jan 12 13:43:13 crc kubenswrapper[4580]: I0112 13:43:13.795342 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sgrn4"]
Jan 12 13:43:13 crc kubenswrapper[4580]: I0112 13:43:13.828243 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sgrn4" event={"ID":"63f848ea-8cf2-4f85-8b2c-355aa8fad467","Type":"ContainerStarted","Data":"8c517c5549d0c67a1986d231d271c5fda0cbbc6a19e273f3aada01f2af117d73"}
Jan 12 13:43:13 crc kubenswrapper[4580]: E0112 13:43:13.833738 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="8e031ef3-1afa-438b-8f95-cd63e4d5eb5a"
Jan 12 13:43:14 crc kubenswrapper[4580]: I0112 13:43:14.838648 4580 generic.go:334] "Generic (PLEG): container finished" podID="63f848ea-8cf2-4f85-8b2c-355aa8fad467" containerID="1e23d5cafa7641b141f93c45e9a90048c18e350bc49e04dfce67789cf1d84915" exitCode=0
Jan 12 13:43:14 crc kubenswrapper[4580]: I0112 13:43:14.838754 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sgrn4" event={"ID":"63f848ea-8cf2-4f85-8b2c-355aa8fad467","Type":"ContainerDied","Data":"1e23d5cafa7641b141f93c45e9a90048c18e350bc49e04dfce67789cf1d84915"}
Jan 12 13:43:16 crc kubenswrapper[4580]: I0112 13:43:16.865671 4580 generic.go:334] "Generic (PLEG): container finished" podID="63f848ea-8cf2-4f85-8b2c-355aa8fad467" containerID="9bcdfbe3457150ff365522260bea80cb0d68dd63d3ff1f26e7c0e1a6d313a0c0" exitCode=0
Jan 12 13:43:16 crc kubenswrapper[4580]: I0112 13:43:16.865774 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sgrn4" event={"ID":"63f848ea-8cf2-4f85-8b2c-355aa8fad467","Type":"ContainerDied","Data":"9bcdfbe3457150ff365522260bea80cb0d68dd63d3ff1f26e7c0e1a6d313a0c0"}
Jan 12 13:43:16 crc kubenswrapper[4580]: I0112 13:43:16.949016 4580 patch_prober.go:28] interesting pod/machine-config-daemon-hdz6l container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 12 13:43:16 crc kubenswrapper[4580]: I0112 13:43:16.949087 4580 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 12 13:43:16 crc kubenswrapper[4580]: I0112 13:43:16.949155 4580 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l"
Jan 12 13:43:16 crc kubenswrapper[4580]: I0112 13:43:16.949822 4580 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fcfabc6cf0a38065a0248083d6b03cb83a27c1814f0aa02c26308ff07404d5b7"} pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 12 13:43:16 crc kubenswrapper[4580]: I0112 13:43:16.949901 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" containerName="machine-config-daemon" containerID="cri-o://fcfabc6cf0a38065a0248083d6b03cb83a27c1814f0aa02c26308ff07404d5b7" gracePeriod=600
Jan 12 13:43:17 crc kubenswrapper[4580]: I0112 13:43:17.879364 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sgrn4" event={"ID":"63f848ea-8cf2-4f85-8b2c-355aa8fad467","Type":"ContainerStarted","Data":"5594b79ec5112d44de8f647ef1e2660be3ed06a69c2ef383837241c12dad02b2"}
Jan 12 13:43:17 crc kubenswrapper[4580]: I0112 13:43:17.882872 4580 generic.go:334] "Generic (PLEG): container finished" podID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" containerID="fcfabc6cf0a38065a0248083d6b03cb83a27c1814f0aa02c26308ff07404d5b7" exitCode=0
Jan 12 13:43:17 crc kubenswrapper[4580]: I0112 13:43:17.882940 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" event={"ID":"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b","Type":"ContainerDied","Data":"fcfabc6cf0a38065a0248083d6b03cb83a27c1814f0aa02c26308ff07404d5b7"}
Jan 12 13:43:17 crc kubenswrapper[4580]: I0112 13:43:17.883012 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" event={"ID":"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b","Type":"ContainerStarted","Data":"00a7a2d612a981879dc66a6fe1919adb6186ca0faf533e44a7208cf36337f57c"}
Jan 12 13:43:17 crc kubenswrapper[4580]: I0112 13:43:17.883046 4580 scope.go:117] "RemoveContainer" containerID="e6f7529ae288176e91bf12545260cc5495f693307c412bb9a076090d438a7eb1"
Jan 12 13:43:17 crc kubenswrapper[4580]: I0112 13:43:17.910634 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-sgrn4" podStartSLOduration=19.261116884 podStartE2EDuration="21.910616532s" podCreationTimestamp="2026-01-12 13:42:56 +0000 UTC" firstStartedPulling="2026-01-12 13:43:14.841441649 +0000 UTC m=+2193.885660339" lastFinishedPulling="2026-01-12 13:43:17.490941297 +0000 UTC m=+2196.535159987" observedRunningTime="2026-01-12 13:43:17.898788772 +0000 UTC m=+2196.943007462" watchObservedRunningTime="2026-01-12 13:43:17.910616532 +0000 UTC m=+2196.954835222"
Jan 12 13:43:26 crc kubenswrapper[4580]: I0112 13:43:26.458293 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-sgrn4"
Jan 12 13:43:26 crc kubenswrapper[4580]: I0112 13:43:26.459026 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-sgrn4"
Jan 12 13:43:26 crc kubenswrapper[4580]: I0112 13:43:26.504524 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-sgrn4"
Jan 12 13:43:27 crc kubenswrapper[4580]: I0112 13:43:27.017883 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-sgrn4"
Jan 12 13:43:27 crc kubenswrapper[4580]: I0112 13:43:27.341985 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sgrn4"]
Jan 12 13:43:28 crc kubenswrapper[4580]: I0112 13:43:28.996874 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-sgrn4" podUID="63f848ea-8cf2-4f85-8b2c-355aa8fad467" containerName="registry-server" containerID="cri-o://5594b79ec5112d44de8f647ef1e2660be3ed06a69c2ef383837241c12dad02b2" gracePeriod=2
Jan 12 13:43:29 crc kubenswrapper[4580]: E0112 13:43:29.046967 4580 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63f848ea_8cf2_4f85_8b2c_355aa8fad467.slice/crio-5594b79ec5112d44de8f647ef1e2660be3ed06a69c2ef383837241c12dad02b2.scope\": RecentStats: unable to find data in memory cache]"
Jan 12 13:43:29 crc kubenswrapper[4580]: I0112 13:43:29.445119 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sgrn4"
Jan 12 13:43:29 crc kubenswrapper[4580]: I0112 13:43:29.550267 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63f848ea-8cf2-4f85-8b2c-355aa8fad467-utilities\") pod \"63f848ea-8cf2-4f85-8b2c-355aa8fad467\" (UID: \"63f848ea-8cf2-4f85-8b2c-355aa8fad467\") "
Jan 12 13:43:29 crc kubenswrapper[4580]: I0112 13:43:29.550474 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fm2w2\" (UniqueName: \"kubernetes.io/projected/63f848ea-8cf2-4f85-8b2c-355aa8fad467-kube-api-access-fm2w2\") pod \"63f848ea-8cf2-4f85-8b2c-355aa8fad467\" (UID: \"63f848ea-8cf2-4f85-8b2c-355aa8fad467\") "
Jan 12 13:43:29 crc kubenswrapper[4580]: I0112 13:43:29.550640 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63f848ea-8cf2-4f85-8b2c-355aa8fad467-catalog-content\") pod \"63f848ea-8cf2-4f85-8b2c-355aa8fad467\" (UID: \"63f848ea-8cf2-4f85-8b2c-355aa8fad467\") "
Jan 12 13:43:29 crc kubenswrapper[4580]: I0112 13:43:29.551386 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63f848ea-8cf2-4f85-8b2c-355aa8fad467-utilities" (OuterVolumeSpecName: "utilities") pod "63f848ea-8cf2-4f85-8b2c-355aa8fad467" (UID: "63f848ea-8cf2-4f85-8b2c-355aa8fad467"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 12 13:43:29 crc kubenswrapper[4580]: I0112 13:43:29.552249 4580 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63f848ea-8cf2-4f85-8b2c-355aa8fad467-utilities\") on node \"crc\" DevicePath \"\""
Jan 12 13:43:29 crc kubenswrapper[4580]: I0112 13:43:29.556512 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63f848ea-8cf2-4f85-8b2c-355aa8fad467-kube-api-access-fm2w2" (OuterVolumeSpecName: "kube-api-access-fm2w2") pod "63f848ea-8cf2-4f85-8b2c-355aa8fad467" (UID: "63f848ea-8cf2-4f85-8b2c-355aa8fad467"). InnerVolumeSpecName "kube-api-access-fm2w2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 13:43:29 crc kubenswrapper[4580]: I0112 13:43:29.639896 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63f848ea-8cf2-4f85-8b2c-355aa8fad467-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "63f848ea-8cf2-4f85-8b2c-355aa8fad467" (UID: "63f848ea-8cf2-4f85-8b2c-355aa8fad467"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 12 13:43:29 crc kubenswrapper[4580]: I0112 13:43:29.654349 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fm2w2\" (UniqueName: \"kubernetes.io/projected/63f848ea-8cf2-4f85-8b2c-355aa8fad467-kube-api-access-fm2w2\") on node \"crc\" DevicePath \"\""
Jan 12 13:43:29 crc kubenswrapper[4580]: I0112 13:43:29.654377 4580 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63f848ea-8cf2-4f85-8b2c-355aa8fad467-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 12 13:43:30 crc kubenswrapper[4580]: I0112 13:43:30.007020 4580 generic.go:334] "Generic (PLEG): container finished" podID="63f848ea-8cf2-4f85-8b2c-355aa8fad467" containerID="5594b79ec5112d44de8f647ef1e2660be3ed06a69c2ef383837241c12dad02b2" exitCode=0
Jan 12 13:43:30 crc kubenswrapper[4580]: I0112 13:43:30.007079 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sgrn4"
Jan 12 13:43:30 crc kubenswrapper[4580]: I0112 13:43:30.007126 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sgrn4" event={"ID":"63f848ea-8cf2-4f85-8b2c-355aa8fad467","Type":"ContainerDied","Data":"5594b79ec5112d44de8f647ef1e2660be3ed06a69c2ef383837241c12dad02b2"}
Jan 12 13:43:30 crc kubenswrapper[4580]: I0112 13:43:30.008266 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sgrn4" event={"ID":"63f848ea-8cf2-4f85-8b2c-355aa8fad467","Type":"ContainerDied","Data":"8c517c5549d0c67a1986d231d271c5fda0cbbc6a19e273f3aada01f2af117d73"}
Jan 12 13:43:30 crc kubenswrapper[4580]: I0112 13:43:30.008295 4580 scope.go:117] "RemoveContainer" containerID="5594b79ec5112d44de8f647ef1e2660be3ed06a69c2ef383837241c12dad02b2"
Jan 12 13:43:30 crc kubenswrapper[4580]: I0112 13:43:30.011150 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"8e031ef3-1afa-438b-8f95-cd63e4d5eb5a","Type":"ContainerStarted","Data":"78cb17895662374c91348f1c4d5991fc02889c2c3effa9b00ba074810d1bcdb8"}
Jan 12 13:43:30 crc kubenswrapper[4580]: I0112 13:43:30.028504 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.012095323 podStartE2EDuration="48.028484781s" podCreationTimestamp="2026-01-12 13:42:42 +0000 UTC" firstStartedPulling="2026-01-12 13:42:43.758403536 +0000 UTC m=+2162.802622226" lastFinishedPulling="2026-01-12 13:43:28.774792994 +0000 UTC m=+2207.819011684" observedRunningTime="2026-01-12 13:43:30.026781458 +0000 UTC m=+2209.071000148" watchObservedRunningTime="2026-01-12 13:43:30.028484781 +0000 UTC m=+2209.072703471"
Jan 12 13:43:30 crc kubenswrapper[4580]: I0112 13:43:30.038576 4580 scope.go:117] "RemoveContainer" containerID="9bcdfbe3457150ff365522260bea80cb0d68dd63d3ff1f26e7c0e1a6d313a0c0"
Jan 12 13:43:30 crc kubenswrapper[4580]: I0112 13:43:30.047509 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sgrn4"]
Jan 12 13:43:30 crc kubenswrapper[4580]: I0112 13:43:30.054009 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-sgrn4"]
Jan 12 13:43:30 crc kubenswrapper[4580]: I0112 13:43:30.058890 4580 scope.go:117] "RemoveContainer" containerID="1e23d5cafa7641b141f93c45e9a90048c18e350bc49e04dfce67789cf1d84915"
Jan 12 13:43:30 crc kubenswrapper[4580]: I0112 13:43:30.093495 4580 scope.go:117] "RemoveContainer" containerID="5594b79ec5112d44de8f647ef1e2660be3ed06a69c2ef383837241c12dad02b2"
Jan 12 13:43:30 crc kubenswrapper[4580]: E0112 13:43:30.093998 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5594b79ec5112d44de8f647ef1e2660be3ed06a69c2ef383837241c12dad02b2\": container with ID starting with
5594b79ec5112d44de8f647ef1e2660be3ed06a69c2ef383837241c12dad02b2 not found: ID does not exist" containerID="5594b79ec5112d44de8f647ef1e2660be3ed06a69c2ef383837241c12dad02b2" Jan 12 13:43:30 crc kubenswrapper[4580]: I0112 13:43:30.094042 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5594b79ec5112d44de8f647ef1e2660be3ed06a69c2ef383837241c12dad02b2"} err="failed to get container status \"5594b79ec5112d44de8f647ef1e2660be3ed06a69c2ef383837241c12dad02b2\": rpc error: code = NotFound desc = could not find container \"5594b79ec5112d44de8f647ef1e2660be3ed06a69c2ef383837241c12dad02b2\": container with ID starting with 5594b79ec5112d44de8f647ef1e2660be3ed06a69c2ef383837241c12dad02b2 not found: ID does not exist" Jan 12 13:43:30 crc kubenswrapper[4580]: I0112 13:43:30.094069 4580 scope.go:117] "RemoveContainer" containerID="9bcdfbe3457150ff365522260bea80cb0d68dd63d3ff1f26e7c0e1a6d313a0c0" Jan 12 13:43:30 crc kubenswrapper[4580]: E0112 13:43:30.094555 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bcdfbe3457150ff365522260bea80cb0d68dd63d3ff1f26e7c0e1a6d313a0c0\": container with ID starting with 9bcdfbe3457150ff365522260bea80cb0d68dd63d3ff1f26e7c0e1a6d313a0c0 not found: ID does not exist" containerID="9bcdfbe3457150ff365522260bea80cb0d68dd63d3ff1f26e7c0e1a6d313a0c0" Jan 12 13:43:30 crc kubenswrapper[4580]: I0112 13:43:30.094599 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bcdfbe3457150ff365522260bea80cb0d68dd63d3ff1f26e7c0e1a6d313a0c0"} err="failed to get container status \"9bcdfbe3457150ff365522260bea80cb0d68dd63d3ff1f26e7c0e1a6d313a0c0\": rpc error: code = NotFound desc = could not find container \"9bcdfbe3457150ff365522260bea80cb0d68dd63d3ff1f26e7c0e1a6d313a0c0\": container with ID starting with 9bcdfbe3457150ff365522260bea80cb0d68dd63d3ff1f26e7c0e1a6d313a0c0 not found: ID does not 
exist" Jan 12 13:43:30 crc kubenswrapper[4580]: I0112 13:43:30.094633 4580 scope.go:117] "RemoveContainer" containerID="1e23d5cafa7641b141f93c45e9a90048c18e350bc49e04dfce67789cf1d84915" Jan 12 13:43:30 crc kubenswrapper[4580]: E0112 13:43:30.094893 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e23d5cafa7641b141f93c45e9a90048c18e350bc49e04dfce67789cf1d84915\": container with ID starting with 1e23d5cafa7641b141f93c45e9a90048c18e350bc49e04dfce67789cf1d84915 not found: ID does not exist" containerID="1e23d5cafa7641b141f93c45e9a90048c18e350bc49e04dfce67789cf1d84915" Jan 12 13:43:30 crc kubenswrapper[4580]: I0112 13:43:30.094917 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e23d5cafa7641b141f93c45e9a90048c18e350bc49e04dfce67789cf1d84915"} err="failed to get container status \"1e23d5cafa7641b141f93c45e9a90048c18e350bc49e04dfce67789cf1d84915\": rpc error: code = NotFound desc = could not find container \"1e23d5cafa7641b141f93c45e9a90048c18e350bc49e04dfce67789cf1d84915\": container with ID starting with 1e23d5cafa7641b141f93c45e9a90048c18e350bc49e04dfce67789cf1d84915 not found: ID does not exist" Jan 12 13:43:31 crc kubenswrapper[4580]: I0112 13:43:31.297081 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63f848ea-8cf2-4f85-8b2c-355aa8fad467" path="/var/lib/kubelet/pods/63f848ea-8cf2-4f85-8b2c-355aa8fad467/volumes" Jan 12 13:43:42 crc kubenswrapper[4580]: I0112 13:43:42.339242 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mk6t8"] Jan 12 13:43:42 crc kubenswrapper[4580]: E0112 13:43:42.340279 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63f848ea-8cf2-4f85-8b2c-355aa8fad467" containerName="extract-utilities" Jan 12 13:43:42 crc kubenswrapper[4580]: I0112 13:43:42.340295 4580 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="63f848ea-8cf2-4f85-8b2c-355aa8fad467" containerName="extract-utilities" Jan 12 13:43:42 crc kubenswrapper[4580]: E0112 13:43:42.340311 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63f848ea-8cf2-4f85-8b2c-355aa8fad467" containerName="extract-content" Jan 12 13:43:42 crc kubenswrapper[4580]: I0112 13:43:42.340316 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="63f848ea-8cf2-4f85-8b2c-355aa8fad467" containerName="extract-content" Jan 12 13:43:42 crc kubenswrapper[4580]: E0112 13:43:42.340351 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63f848ea-8cf2-4f85-8b2c-355aa8fad467" containerName="registry-server" Jan 12 13:43:42 crc kubenswrapper[4580]: I0112 13:43:42.340357 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="63f848ea-8cf2-4f85-8b2c-355aa8fad467" containerName="registry-server" Jan 12 13:43:42 crc kubenswrapper[4580]: I0112 13:43:42.340604 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="63f848ea-8cf2-4f85-8b2c-355aa8fad467" containerName="registry-server" Jan 12 13:43:42 crc kubenswrapper[4580]: I0112 13:43:42.342126 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mk6t8" Jan 12 13:43:42 crc kubenswrapper[4580]: I0112 13:43:42.346779 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mk6t8"] Jan 12 13:43:42 crc kubenswrapper[4580]: I0112 13:43:42.435981 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a55e54f-8304-469c-a262-385a0ab6fec7-utilities\") pod \"community-operators-mk6t8\" (UID: \"7a55e54f-8304-469c-a262-385a0ab6fec7\") " pod="openshift-marketplace/community-operators-mk6t8" Jan 12 13:43:42 crc kubenswrapper[4580]: I0112 13:43:42.436118 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-564hm\" (UniqueName: \"kubernetes.io/projected/7a55e54f-8304-469c-a262-385a0ab6fec7-kube-api-access-564hm\") pod \"community-operators-mk6t8\" (UID: \"7a55e54f-8304-469c-a262-385a0ab6fec7\") " pod="openshift-marketplace/community-operators-mk6t8" Jan 12 13:43:42 crc kubenswrapper[4580]: I0112 13:43:42.436173 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a55e54f-8304-469c-a262-385a0ab6fec7-catalog-content\") pod \"community-operators-mk6t8\" (UID: \"7a55e54f-8304-469c-a262-385a0ab6fec7\") " pod="openshift-marketplace/community-operators-mk6t8" Jan 12 13:43:42 crc kubenswrapper[4580]: I0112 13:43:42.538567 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a55e54f-8304-469c-a262-385a0ab6fec7-utilities\") pod \"community-operators-mk6t8\" (UID: \"7a55e54f-8304-469c-a262-385a0ab6fec7\") " pod="openshift-marketplace/community-operators-mk6t8" Jan 12 13:43:42 crc kubenswrapper[4580]: I0112 13:43:42.538834 4580 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-564hm\" (UniqueName: \"kubernetes.io/projected/7a55e54f-8304-469c-a262-385a0ab6fec7-kube-api-access-564hm\") pod \"community-operators-mk6t8\" (UID: \"7a55e54f-8304-469c-a262-385a0ab6fec7\") " pod="openshift-marketplace/community-operators-mk6t8" Jan 12 13:43:42 crc kubenswrapper[4580]: I0112 13:43:42.539016 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a55e54f-8304-469c-a262-385a0ab6fec7-catalog-content\") pod \"community-operators-mk6t8\" (UID: \"7a55e54f-8304-469c-a262-385a0ab6fec7\") " pod="openshift-marketplace/community-operators-mk6t8" Jan 12 13:43:42 crc kubenswrapper[4580]: I0112 13:43:42.539157 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a55e54f-8304-469c-a262-385a0ab6fec7-utilities\") pod \"community-operators-mk6t8\" (UID: \"7a55e54f-8304-469c-a262-385a0ab6fec7\") " pod="openshift-marketplace/community-operators-mk6t8" Jan 12 13:43:42 crc kubenswrapper[4580]: I0112 13:43:42.539617 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a55e54f-8304-469c-a262-385a0ab6fec7-catalog-content\") pod \"community-operators-mk6t8\" (UID: \"7a55e54f-8304-469c-a262-385a0ab6fec7\") " pod="openshift-marketplace/community-operators-mk6t8" Jan 12 13:43:42 crc kubenswrapper[4580]: I0112 13:43:42.567851 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-564hm\" (UniqueName: \"kubernetes.io/projected/7a55e54f-8304-469c-a262-385a0ab6fec7-kube-api-access-564hm\") pod \"community-operators-mk6t8\" (UID: \"7a55e54f-8304-469c-a262-385a0ab6fec7\") " pod="openshift-marketplace/community-operators-mk6t8" Jan 12 13:43:42 crc kubenswrapper[4580]: I0112 13:43:42.666908 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mk6t8" Jan 12 13:43:43 crc kubenswrapper[4580]: I0112 13:43:43.162022 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mk6t8"] Jan 12 13:43:44 crc kubenswrapper[4580]: I0112 13:43:44.152048 4580 generic.go:334] "Generic (PLEG): container finished" podID="7a55e54f-8304-469c-a262-385a0ab6fec7" containerID="6356b2c0f76f926899083b64c62fed36901de46d2a95b2610cf4bb243fa17793" exitCode=0 Jan 12 13:43:44 crc kubenswrapper[4580]: I0112 13:43:44.152271 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mk6t8" event={"ID":"7a55e54f-8304-469c-a262-385a0ab6fec7","Type":"ContainerDied","Data":"6356b2c0f76f926899083b64c62fed36901de46d2a95b2610cf4bb243fa17793"} Jan 12 13:43:44 crc kubenswrapper[4580]: I0112 13:43:44.152563 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mk6t8" event={"ID":"7a55e54f-8304-469c-a262-385a0ab6fec7","Type":"ContainerStarted","Data":"0e72b44213e7edc32b637e0aa354adeb4ddf3b1ba9837dce1c45d118ef1997a2"} Jan 12 13:43:44 crc kubenswrapper[4580]: I0112 13:43:44.155239 4580 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 12 13:43:45 crc kubenswrapper[4580]: I0112 13:43:45.165486 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mk6t8" event={"ID":"7a55e54f-8304-469c-a262-385a0ab6fec7","Type":"ContainerStarted","Data":"344e138c66d4364922792cfb53da697528abcdda0b9a9105c0e4b28f7db39aaa"} Jan 12 13:43:46 crc kubenswrapper[4580]: I0112 13:43:46.178191 4580 generic.go:334] "Generic (PLEG): container finished" podID="7a55e54f-8304-469c-a262-385a0ab6fec7" containerID="344e138c66d4364922792cfb53da697528abcdda0b9a9105c0e4b28f7db39aaa" exitCode=0 Jan 12 13:43:46 crc kubenswrapper[4580]: I0112 13:43:46.178263 4580 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-mk6t8" event={"ID":"7a55e54f-8304-469c-a262-385a0ab6fec7","Type":"ContainerDied","Data":"344e138c66d4364922792cfb53da697528abcdda0b9a9105c0e4b28f7db39aaa"} Jan 12 13:43:48 crc kubenswrapper[4580]: I0112 13:43:48.206200 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mk6t8" event={"ID":"7a55e54f-8304-469c-a262-385a0ab6fec7","Type":"ContainerStarted","Data":"47852f0031bda65d89df2ee3e019d2f672c573fe2ef92cdd5251d39fe15764f7"} Jan 12 13:43:48 crc kubenswrapper[4580]: I0112 13:43:48.231881 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mk6t8" podStartSLOduration=3.295906351 podStartE2EDuration="6.231858862s" podCreationTimestamp="2026-01-12 13:43:42 +0000 UTC" firstStartedPulling="2026-01-12 13:43:44.154898737 +0000 UTC m=+2223.199117427" lastFinishedPulling="2026-01-12 13:43:47.090851248 +0000 UTC m=+2226.135069938" observedRunningTime="2026-01-12 13:43:48.225819357 +0000 UTC m=+2227.270038047" watchObservedRunningTime="2026-01-12 13:43:48.231858862 +0000 UTC m=+2227.276077552" Jan 12 13:43:52 crc kubenswrapper[4580]: I0112 13:43:52.668471 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mk6t8" Jan 12 13:43:52 crc kubenswrapper[4580]: I0112 13:43:52.669007 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mk6t8" Jan 12 13:43:52 crc kubenswrapper[4580]: I0112 13:43:52.716803 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mk6t8" Jan 12 13:43:53 crc kubenswrapper[4580]: I0112 13:43:53.292635 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mk6t8" Jan 12 13:43:53 crc kubenswrapper[4580]: I0112 
13:43:53.350348 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mk6t8"] Jan 12 13:43:55 crc kubenswrapper[4580]: I0112 13:43:55.264844 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mk6t8" podUID="7a55e54f-8304-469c-a262-385a0ab6fec7" containerName="registry-server" containerID="cri-o://47852f0031bda65d89df2ee3e019d2f672c573fe2ef92cdd5251d39fe15764f7" gracePeriod=2 Jan 12 13:43:55 crc kubenswrapper[4580]: I0112 13:43:55.671608 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mk6t8" Jan 12 13:43:55 crc kubenswrapper[4580]: I0112 13:43:55.723611 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-564hm\" (UniqueName: \"kubernetes.io/projected/7a55e54f-8304-469c-a262-385a0ab6fec7-kube-api-access-564hm\") pod \"7a55e54f-8304-469c-a262-385a0ab6fec7\" (UID: \"7a55e54f-8304-469c-a262-385a0ab6fec7\") " Jan 12 13:43:55 crc kubenswrapper[4580]: I0112 13:43:55.723938 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a55e54f-8304-469c-a262-385a0ab6fec7-catalog-content\") pod \"7a55e54f-8304-469c-a262-385a0ab6fec7\" (UID: \"7a55e54f-8304-469c-a262-385a0ab6fec7\") " Jan 12 13:43:55 crc kubenswrapper[4580]: I0112 13:43:55.724166 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a55e54f-8304-469c-a262-385a0ab6fec7-utilities\") pod \"7a55e54f-8304-469c-a262-385a0ab6fec7\" (UID: \"7a55e54f-8304-469c-a262-385a0ab6fec7\") " Jan 12 13:43:55 crc kubenswrapper[4580]: I0112 13:43:55.726360 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a55e54f-8304-469c-a262-385a0ab6fec7-utilities" (OuterVolumeSpecName: 
"utilities") pod "7a55e54f-8304-469c-a262-385a0ab6fec7" (UID: "7a55e54f-8304-469c-a262-385a0ab6fec7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 12 13:43:55 crc kubenswrapper[4580]: I0112 13:43:55.731330 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a55e54f-8304-469c-a262-385a0ab6fec7-kube-api-access-564hm" (OuterVolumeSpecName: "kube-api-access-564hm") pod "7a55e54f-8304-469c-a262-385a0ab6fec7" (UID: "7a55e54f-8304-469c-a262-385a0ab6fec7"). InnerVolumeSpecName "kube-api-access-564hm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:43:55 crc kubenswrapper[4580]: I0112 13:43:55.768407 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a55e54f-8304-469c-a262-385a0ab6fec7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7a55e54f-8304-469c-a262-385a0ab6fec7" (UID: "7a55e54f-8304-469c-a262-385a0ab6fec7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 12 13:43:55 crc kubenswrapper[4580]: I0112 13:43:55.828928 4580 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a55e54f-8304-469c-a262-385a0ab6fec7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 12 13:43:55 crc kubenswrapper[4580]: I0112 13:43:55.828966 4580 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a55e54f-8304-469c-a262-385a0ab6fec7-utilities\") on node \"crc\" DevicePath \"\"" Jan 12 13:43:55 crc kubenswrapper[4580]: I0112 13:43:55.828977 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-564hm\" (UniqueName: \"kubernetes.io/projected/7a55e54f-8304-469c-a262-385a0ab6fec7-kube-api-access-564hm\") on node \"crc\" DevicePath \"\"" Jan 12 13:43:56 crc kubenswrapper[4580]: I0112 13:43:56.279912 4580 generic.go:334] "Generic (PLEG): container finished" podID="7a55e54f-8304-469c-a262-385a0ab6fec7" containerID="47852f0031bda65d89df2ee3e019d2f672c573fe2ef92cdd5251d39fe15764f7" exitCode=0 Jan 12 13:43:56 crc kubenswrapper[4580]: I0112 13:43:56.279973 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mk6t8" event={"ID":"7a55e54f-8304-469c-a262-385a0ab6fec7","Type":"ContainerDied","Data":"47852f0031bda65d89df2ee3e019d2f672c573fe2ef92cdd5251d39fe15764f7"} Jan 12 13:43:56 crc kubenswrapper[4580]: I0112 13:43:56.280014 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mk6t8" event={"ID":"7a55e54f-8304-469c-a262-385a0ab6fec7","Type":"ContainerDied","Data":"0e72b44213e7edc32b637e0aa354adeb4ddf3b1ba9837dce1c45d118ef1997a2"} Jan 12 13:43:56 crc kubenswrapper[4580]: I0112 13:43:56.280015 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mk6t8" Jan 12 13:43:56 crc kubenswrapper[4580]: I0112 13:43:56.280035 4580 scope.go:117] "RemoveContainer" containerID="47852f0031bda65d89df2ee3e019d2f672c573fe2ef92cdd5251d39fe15764f7" Jan 12 13:43:56 crc kubenswrapper[4580]: I0112 13:43:56.303270 4580 scope.go:117] "RemoveContainer" containerID="344e138c66d4364922792cfb53da697528abcdda0b9a9105c0e4b28f7db39aaa" Jan 12 13:43:56 crc kubenswrapper[4580]: I0112 13:43:56.314798 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mk6t8"] Jan 12 13:43:56 crc kubenswrapper[4580]: I0112 13:43:56.319055 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mk6t8"] Jan 12 13:43:56 crc kubenswrapper[4580]: I0112 13:43:56.337913 4580 scope.go:117] "RemoveContainer" containerID="6356b2c0f76f926899083b64c62fed36901de46d2a95b2610cf4bb243fa17793" Jan 12 13:43:56 crc kubenswrapper[4580]: I0112 13:43:56.385914 4580 scope.go:117] "RemoveContainer" containerID="47852f0031bda65d89df2ee3e019d2f672c573fe2ef92cdd5251d39fe15764f7" Jan 12 13:43:56 crc kubenswrapper[4580]: E0112 13:43:56.387407 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47852f0031bda65d89df2ee3e019d2f672c573fe2ef92cdd5251d39fe15764f7\": container with ID starting with 47852f0031bda65d89df2ee3e019d2f672c573fe2ef92cdd5251d39fe15764f7 not found: ID does not exist" containerID="47852f0031bda65d89df2ee3e019d2f672c573fe2ef92cdd5251d39fe15764f7" Jan 12 13:43:56 crc kubenswrapper[4580]: I0112 13:43:56.387440 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47852f0031bda65d89df2ee3e019d2f672c573fe2ef92cdd5251d39fe15764f7"} err="failed to get container status \"47852f0031bda65d89df2ee3e019d2f672c573fe2ef92cdd5251d39fe15764f7\": rpc error: code = NotFound desc = could not find 
container \"47852f0031bda65d89df2ee3e019d2f672c573fe2ef92cdd5251d39fe15764f7\": container with ID starting with 47852f0031bda65d89df2ee3e019d2f672c573fe2ef92cdd5251d39fe15764f7 not found: ID does not exist" Jan 12 13:43:56 crc kubenswrapper[4580]: I0112 13:43:56.387474 4580 scope.go:117] "RemoveContainer" containerID="344e138c66d4364922792cfb53da697528abcdda0b9a9105c0e4b28f7db39aaa" Jan 12 13:43:56 crc kubenswrapper[4580]: E0112 13:43:56.391296 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"344e138c66d4364922792cfb53da697528abcdda0b9a9105c0e4b28f7db39aaa\": container with ID starting with 344e138c66d4364922792cfb53da697528abcdda0b9a9105c0e4b28f7db39aaa not found: ID does not exist" containerID="344e138c66d4364922792cfb53da697528abcdda0b9a9105c0e4b28f7db39aaa" Jan 12 13:43:56 crc kubenswrapper[4580]: I0112 13:43:56.391320 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"344e138c66d4364922792cfb53da697528abcdda0b9a9105c0e4b28f7db39aaa"} err="failed to get container status \"344e138c66d4364922792cfb53da697528abcdda0b9a9105c0e4b28f7db39aaa\": rpc error: code = NotFound desc = could not find container \"344e138c66d4364922792cfb53da697528abcdda0b9a9105c0e4b28f7db39aaa\": container with ID starting with 344e138c66d4364922792cfb53da697528abcdda0b9a9105c0e4b28f7db39aaa not found: ID does not exist" Jan 12 13:43:56 crc kubenswrapper[4580]: I0112 13:43:56.391335 4580 scope.go:117] "RemoveContainer" containerID="6356b2c0f76f926899083b64c62fed36901de46d2a95b2610cf4bb243fa17793" Jan 12 13:43:56 crc kubenswrapper[4580]: E0112 13:43:56.394493 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6356b2c0f76f926899083b64c62fed36901de46d2a95b2610cf4bb243fa17793\": container with ID starting with 6356b2c0f76f926899083b64c62fed36901de46d2a95b2610cf4bb243fa17793 not found: ID does 
not exist" containerID="6356b2c0f76f926899083b64c62fed36901de46d2a95b2610cf4bb243fa17793" Jan 12 13:43:56 crc kubenswrapper[4580]: I0112 13:43:56.394517 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6356b2c0f76f926899083b64c62fed36901de46d2a95b2610cf4bb243fa17793"} err="failed to get container status \"6356b2c0f76f926899083b64c62fed36901de46d2a95b2610cf4bb243fa17793\": rpc error: code = NotFound desc = could not find container \"6356b2c0f76f926899083b64c62fed36901de46d2a95b2610cf4bb243fa17793\": container with ID starting with 6356b2c0f76f926899083b64c62fed36901de46d2a95b2610cf4bb243fa17793 not found: ID does not exist" Jan 12 13:43:57 crc kubenswrapper[4580]: I0112 13:43:57.297534 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a55e54f-8304-469c-a262-385a0ab6fec7" path="/var/lib/kubelet/pods/7a55e54f-8304-469c-a262-385a0ab6fec7/volumes" Jan 12 13:45:00 crc kubenswrapper[4580]: I0112 13:45:00.141162 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29470425-bslxs"] Jan 12 13:45:00 crc kubenswrapper[4580]: E0112 13:45:00.143010 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a55e54f-8304-469c-a262-385a0ab6fec7" containerName="extract-utilities" Jan 12 13:45:00 crc kubenswrapper[4580]: I0112 13:45:00.143133 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a55e54f-8304-469c-a262-385a0ab6fec7" containerName="extract-utilities" Jan 12 13:45:00 crc kubenswrapper[4580]: E0112 13:45:00.143246 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a55e54f-8304-469c-a262-385a0ab6fec7" containerName="extract-content" Jan 12 13:45:00 crc kubenswrapper[4580]: I0112 13:45:00.143304 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a55e54f-8304-469c-a262-385a0ab6fec7" containerName="extract-content" Jan 12 13:45:00 crc kubenswrapper[4580]: E0112 13:45:00.143375 4580 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a55e54f-8304-469c-a262-385a0ab6fec7" containerName="registry-server" Jan 12 13:45:00 crc kubenswrapper[4580]: I0112 13:45:00.143430 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a55e54f-8304-469c-a262-385a0ab6fec7" containerName="registry-server" Jan 12 13:45:00 crc kubenswrapper[4580]: I0112 13:45:00.143758 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a55e54f-8304-469c-a262-385a0ab6fec7" containerName="registry-server" Jan 12 13:45:00 crc kubenswrapper[4580]: I0112 13:45:00.144502 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29470425-bslxs" Jan 12 13:45:00 crc kubenswrapper[4580]: I0112 13:45:00.146089 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 12 13:45:00 crc kubenswrapper[4580]: I0112 13:45:00.146677 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 12 13:45:00 crc kubenswrapper[4580]: I0112 13:45:00.160623 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29470425-bslxs"] Jan 12 13:45:00 crc kubenswrapper[4580]: I0112 13:45:00.206709 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7cm9\" (UniqueName: \"kubernetes.io/projected/ef319ef0-89e4-493d-859f-952fa9f99a45-kube-api-access-k7cm9\") pod \"collect-profiles-29470425-bslxs\" (UID: \"ef319ef0-89e4-493d-859f-952fa9f99a45\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29470425-bslxs" Jan 12 13:45:00 crc kubenswrapper[4580]: I0112 13:45:00.206774 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/ef319ef0-89e4-493d-859f-952fa9f99a45-config-volume\") pod \"collect-profiles-29470425-bslxs\" (UID: \"ef319ef0-89e4-493d-859f-952fa9f99a45\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29470425-bslxs" Jan 12 13:45:00 crc kubenswrapper[4580]: I0112 13:45:00.206939 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ef319ef0-89e4-493d-859f-952fa9f99a45-secret-volume\") pod \"collect-profiles-29470425-bslxs\" (UID: \"ef319ef0-89e4-493d-859f-952fa9f99a45\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29470425-bslxs" Jan 12 13:45:00 crc kubenswrapper[4580]: I0112 13:45:00.308424 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7cm9\" (UniqueName: \"kubernetes.io/projected/ef319ef0-89e4-493d-859f-952fa9f99a45-kube-api-access-k7cm9\") pod \"collect-profiles-29470425-bslxs\" (UID: \"ef319ef0-89e4-493d-859f-952fa9f99a45\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29470425-bslxs" Jan 12 13:45:00 crc kubenswrapper[4580]: I0112 13:45:00.308802 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ef319ef0-89e4-493d-859f-952fa9f99a45-config-volume\") pod \"collect-profiles-29470425-bslxs\" (UID: \"ef319ef0-89e4-493d-859f-952fa9f99a45\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29470425-bslxs" Jan 12 13:45:00 crc kubenswrapper[4580]: I0112 13:45:00.309764 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ef319ef0-89e4-493d-859f-952fa9f99a45-config-volume\") pod \"collect-profiles-29470425-bslxs\" (UID: \"ef319ef0-89e4-493d-859f-952fa9f99a45\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29470425-bslxs" Jan 12 13:45:00 crc kubenswrapper[4580]: I0112 
13:45:00.309875 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ef319ef0-89e4-493d-859f-952fa9f99a45-secret-volume\") pod \"collect-profiles-29470425-bslxs\" (UID: \"ef319ef0-89e4-493d-859f-952fa9f99a45\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29470425-bslxs" Jan 12 13:45:00 crc kubenswrapper[4580]: I0112 13:45:00.324767 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ef319ef0-89e4-493d-859f-952fa9f99a45-secret-volume\") pod \"collect-profiles-29470425-bslxs\" (UID: \"ef319ef0-89e4-493d-859f-952fa9f99a45\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29470425-bslxs" Jan 12 13:45:00 crc kubenswrapper[4580]: I0112 13:45:00.326567 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7cm9\" (UniqueName: \"kubernetes.io/projected/ef319ef0-89e4-493d-859f-952fa9f99a45-kube-api-access-k7cm9\") pod \"collect-profiles-29470425-bslxs\" (UID: \"ef319ef0-89e4-493d-859f-952fa9f99a45\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29470425-bslxs" Jan 12 13:45:00 crc kubenswrapper[4580]: I0112 13:45:00.461264 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29470425-bslxs" Jan 12 13:45:00 crc kubenswrapper[4580]: I0112 13:45:00.885088 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29470425-bslxs"] Jan 12 13:45:01 crc kubenswrapper[4580]: I0112 13:45:01.856054 4580 generic.go:334] "Generic (PLEG): container finished" podID="ef319ef0-89e4-493d-859f-952fa9f99a45" containerID="db8db525c0f8cd69d7ffaa8f4fcbd26f73126f748eae24c5c8199ea9a01e2e7c" exitCode=0 Jan 12 13:45:01 crc kubenswrapper[4580]: I0112 13:45:01.856154 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29470425-bslxs" event={"ID":"ef319ef0-89e4-493d-859f-952fa9f99a45","Type":"ContainerDied","Data":"db8db525c0f8cd69d7ffaa8f4fcbd26f73126f748eae24c5c8199ea9a01e2e7c"} Jan 12 13:45:01 crc kubenswrapper[4580]: I0112 13:45:01.856499 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29470425-bslxs" event={"ID":"ef319ef0-89e4-493d-859f-952fa9f99a45","Type":"ContainerStarted","Data":"16c2e3088413115ace461bd30637ed4ff35add447c157532a8b3a0a0ba38b4d2"} Jan 12 13:45:03 crc kubenswrapper[4580]: I0112 13:45:03.161759 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29470425-bslxs" Jan 12 13:45:03 crc kubenswrapper[4580]: I0112 13:45:03.175643 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ef319ef0-89e4-493d-859f-952fa9f99a45-config-volume\") pod \"ef319ef0-89e4-493d-859f-952fa9f99a45\" (UID: \"ef319ef0-89e4-493d-859f-952fa9f99a45\") " Jan 12 13:45:03 crc kubenswrapper[4580]: I0112 13:45:03.175690 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7cm9\" (UniqueName: \"kubernetes.io/projected/ef319ef0-89e4-493d-859f-952fa9f99a45-kube-api-access-k7cm9\") pod \"ef319ef0-89e4-493d-859f-952fa9f99a45\" (UID: \"ef319ef0-89e4-493d-859f-952fa9f99a45\") " Jan 12 13:45:03 crc kubenswrapper[4580]: I0112 13:45:03.175718 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ef319ef0-89e4-493d-859f-952fa9f99a45-secret-volume\") pod \"ef319ef0-89e4-493d-859f-952fa9f99a45\" (UID: \"ef319ef0-89e4-493d-859f-952fa9f99a45\") " Jan 12 13:45:03 crc kubenswrapper[4580]: I0112 13:45:03.176352 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef319ef0-89e4-493d-859f-952fa9f99a45-config-volume" (OuterVolumeSpecName: "config-volume") pod "ef319ef0-89e4-493d-859f-952fa9f99a45" (UID: "ef319ef0-89e4-493d-859f-952fa9f99a45"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 13:45:03 crc kubenswrapper[4580]: I0112 13:45:03.180948 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef319ef0-89e4-493d-859f-952fa9f99a45-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ef319ef0-89e4-493d-859f-952fa9f99a45" (UID: "ef319ef0-89e4-493d-859f-952fa9f99a45"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 13:45:03 crc kubenswrapper[4580]: I0112 13:45:03.181170 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef319ef0-89e4-493d-859f-952fa9f99a45-kube-api-access-k7cm9" (OuterVolumeSpecName: "kube-api-access-k7cm9") pod "ef319ef0-89e4-493d-859f-952fa9f99a45" (UID: "ef319ef0-89e4-493d-859f-952fa9f99a45"). InnerVolumeSpecName "kube-api-access-k7cm9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:45:03 crc kubenswrapper[4580]: I0112 13:45:03.278151 4580 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ef319ef0-89e4-493d-859f-952fa9f99a45-config-volume\") on node \"crc\" DevicePath \"\"" Jan 12 13:45:03 crc kubenswrapper[4580]: I0112 13:45:03.278188 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7cm9\" (UniqueName: \"kubernetes.io/projected/ef319ef0-89e4-493d-859f-952fa9f99a45-kube-api-access-k7cm9\") on node \"crc\" DevicePath \"\"" Jan 12 13:45:03 crc kubenswrapper[4580]: I0112 13:45:03.278201 4580 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ef319ef0-89e4-493d-859f-952fa9f99a45-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 12 13:45:03 crc kubenswrapper[4580]: I0112 13:45:03.872003 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29470425-bslxs" event={"ID":"ef319ef0-89e4-493d-859f-952fa9f99a45","Type":"ContainerDied","Data":"16c2e3088413115ace461bd30637ed4ff35add447c157532a8b3a0a0ba38b4d2"} Jan 12 13:45:03 crc kubenswrapper[4580]: I0112 13:45:03.872240 4580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16c2e3088413115ace461bd30637ed4ff35add447c157532a8b3a0a0ba38b4d2" Jan 12 13:45:03 crc kubenswrapper[4580]: I0112 13:45:03.872064 4580 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29470425-bslxs" Jan 12 13:45:04 crc kubenswrapper[4580]: I0112 13:45:04.226943 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29470380-nk5n7"] Jan 12 13:45:04 crc kubenswrapper[4580]: I0112 13:45:04.232623 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29470380-nk5n7"] Jan 12 13:45:05 crc kubenswrapper[4580]: I0112 13:45:05.293457 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="037a95c2-1119-4fd8-8499-682fba2f03ea" path="/var/lib/kubelet/pods/037a95c2-1119-4fd8-8499-682fba2f03ea/volumes" Jan 12 13:45:13 crc kubenswrapper[4580]: I0112 13:45:13.450639 4580 scope.go:117] "RemoveContainer" containerID="429218504b15d49c71ec491ade7f77e78c38b8310607579b22cd67a199946598" Jan 12 13:45:46 crc kubenswrapper[4580]: I0112 13:45:46.949344 4580 patch_prober.go:28] interesting pod/machine-config-daemon-hdz6l container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 12 13:45:46 crc kubenswrapper[4580]: I0112 13:45:46.949932 4580 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 12 13:46:16 crc kubenswrapper[4580]: I0112 13:46:16.949332 4580 patch_prober.go:28] interesting pod/machine-config-daemon-hdz6l container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Jan 12 13:46:16 crc kubenswrapper[4580]: I0112 13:46:16.950172 4580 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 12 13:46:46 crc kubenswrapper[4580]: I0112 13:46:46.949520 4580 patch_prober.go:28] interesting pod/machine-config-daemon-hdz6l container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 12 13:46:46 crc kubenswrapper[4580]: I0112 13:46:46.950205 4580 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 12 13:46:46 crc kubenswrapper[4580]: I0112 13:46:46.950271 4580 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" Jan 12 13:46:46 crc kubenswrapper[4580]: I0112 13:46:46.950900 4580 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"00a7a2d612a981879dc66a6fe1919adb6186ca0faf533e44a7208cf36337f57c"} pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 12 13:46:46 crc kubenswrapper[4580]: I0112 13:46:46.950972 4580 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" containerName="machine-config-daemon" containerID="cri-o://00a7a2d612a981879dc66a6fe1919adb6186ca0faf533e44a7208cf36337f57c" gracePeriod=600 Jan 12 13:46:47 crc kubenswrapper[4580]: E0112 13:46:47.077566 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdz6l_openshift-machine-config-operator(aaecc77f-21ca-4f15-86e0-0dff03d2ab7b)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" Jan 12 13:46:47 crc kubenswrapper[4580]: I0112 13:46:47.784289 4580 generic.go:334] "Generic (PLEG): container finished" podID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" containerID="00a7a2d612a981879dc66a6fe1919adb6186ca0faf533e44a7208cf36337f57c" exitCode=0 Jan 12 13:46:47 crc kubenswrapper[4580]: I0112 13:46:47.784389 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" event={"ID":"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b","Type":"ContainerDied","Data":"00a7a2d612a981879dc66a6fe1919adb6186ca0faf533e44a7208cf36337f57c"} Jan 12 13:46:47 crc kubenswrapper[4580]: I0112 13:46:47.784692 4580 scope.go:117] "RemoveContainer" containerID="fcfabc6cf0a38065a0248083d6b03cb83a27c1814f0aa02c26308ff07404d5b7" Jan 12 13:46:47 crc kubenswrapper[4580]: I0112 13:46:47.785275 4580 scope.go:117] "RemoveContainer" containerID="00a7a2d612a981879dc66a6fe1919adb6186ca0faf533e44a7208cf36337f57c" Jan 12 13:46:47 crc kubenswrapper[4580]: E0112 13:46:47.785612 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-hdz6l_openshift-machine-config-operator(aaecc77f-21ca-4f15-86e0-0dff03d2ab7b)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" Jan 12 13:47:03 crc kubenswrapper[4580]: I0112 13:47:03.282297 4580 scope.go:117] "RemoveContainer" containerID="00a7a2d612a981879dc66a6fe1919adb6186ca0faf533e44a7208cf36337f57c" Jan 12 13:47:03 crc kubenswrapper[4580]: E0112 13:47:03.283248 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdz6l_openshift-machine-config-operator(aaecc77f-21ca-4f15-86e0-0dff03d2ab7b)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" Jan 12 13:47:18 crc kubenswrapper[4580]: I0112 13:47:18.282095 4580 scope.go:117] "RemoveContainer" containerID="00a7a2d612a981879dc66a6fe1919adb6186ca0faf533e44a7208cf36337f57c" Jan 12 13:47:18 crc kubenswrapper[4580]: E0112 13:47:18.283061 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdz6l_openshift-machine-config-operator(aaecc77f-21ca-4f15-86e0-0dff03d2ab7b)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" Jan 12 13:47:33 crc kubenswrapper[4580]: I0112 13:47:33.283477 4580 scope.go:117] "RemoveContainer" containerID="00a7a2d612a981879dc66a6fe1919adb6186ca0faf533e44a7208cf36337f57c" Jan 12 13:47:33 crc kubenswrapper[4580]: E0112 13:47:33.284243 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-hdz6l_openshift-machine-config-operator(aaecc77f-21ca-4f15-86e0-0dff03d2ab7b)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" Jan 12 13:47:44 crc kubenswrapper[4580]: I0112 13:47:44.282223 4580 scope.go:117] "RemoveContainer" containerID="00a7a2d612a981879dc66a6fe1919adb6186ca0faf533e44a7208cf36337f57c" Jan 12 13:47:44 crc kubenswrapper[4580]: E0112 13:47:44.283280 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdz6l_openshift-machine-config-operator(aaecc77f-21ca-4f15-86e0-0dff03d2ab7b)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" Jan 12 13:47:59 crc kubenswrapper[4580]: I0112 13:47:59.282232 4580 scope.go:117] "RemoveContainer" containerID="00a7a2d612a981879dc66a6fe1919adb6186ca0faf533e44a7208cf36337f57c" Jan 12 13:47:59 crc kubenswrapper[4580]: E0112 13:47:59.283265 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdz6l_openshift-machine-config-operator(aaecc77f-21ca-4f15-86e0-0dff03d2ab7b)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" Jan 12 13:48:11 crc kubenswrapper[4580]: I0112 13:48:11.287020 4580 scope.go:117] "RemoveContainer" containerID="00a7a2d612a981879dc66a6fe1919adb6186ca0faf533e44a7208cf36337f57c" Jan 12 13:48:11 crc kubenswrapper[4580]: E0112 13:48:11.287997 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-hdz6l_openshift-machine-config-operator(aaecc77f-21ca-4f15-86e0-0dff03d2ab7b)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" Jan 12 13:48:24 crc kubenswrapper[4580]: I0112 13:48:24.281677 4580 scope.go:117] "RemoveContainer" containerID="00a7a2d612a981879dc66a6fe1919adb6186ca0faf533e44a7208cf36337f57c" Jan 12 13:48:24 crc kubenswrapper[4580]: E0112 13:48:24.282620 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdz6l_openshift-machine-config-operator(aaecc77f-21ca-4f15-86e0-0dff03d2ab7b)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" Jan 12 13:48:39 crc kubenswrapper[4580]: I0112 13:48:39.282007 4580 scope.go:117] "RemoveContainer" containerID="00a7a2d612a981879dc66a6fe1919adb6186ca0faf533e44a7208cf36337f57c" Jan 12 13:48:39 crc kubenswrapper[4580]: E0112 13:48:39.282780 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdz6l_openshift-machine-config-operator(aaecc77f-21ca-4f15-86e0-0dff03d2ab7b)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" Jan 12 13:48:51 crc kubenswrapper[4580]: I0112 13:48:51.288254 4580 scope.go:117] "RemoveContainer" containerID="00a7a2d612a981879dc66a6fe1919adb6186ca0faf533e44a7208cf36337f57c" Jan 12 13:48:51 crc kubenswrapper[4580]: E0112 13:48:51.289284 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdz6l_openshift-machine-config-operator(aaecc77f-21ca-4f15-86e0-0dff03d2ab7b)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" Jan 12 13:49:06 crc kubenswrapper[4580]: I0112 13:49:06.281549 4580 scope.go:117] "RemoveContainer" containerID="00a7a2d612a981879dc66a6fe1919adb6186ca0faf533e44a7208cf36337f57c" Jan 12 13:49:06 crc kubenswrapper[4580]: E0112 13:49:06.282401 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdz6l_openshift-machine-config-operator(aaecc77f-21ca-4f15-86e0-0dff03d2ab7b)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" Jan 12 13:49:19 crc kubenswrapper[4580]: I0112 13:49:19.282745 4580 scope.go:117] "RemoveContainer" containerID="00a7a2d612a981879dc66a6fe1919adb6186ca0faf533e44a7208cf36337f57c" Jan 12 13:49:19 crc kubenswrapper[4580]: E0112 13:49:19.283566 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdz6l_openshift-machine-config-operator(aaecc77f-21ca-4f15-86e0-0dff03d2ab7b)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" Jan 12 13:49:32 crc kubenswrapper[4580]: I0112 13:49:32.282219 4580 scope.go:117] "RemoveContainer" containerID="00a7a2d612a981879dc66a6fe1919adb6186ca0faf533e44a7208cf36337f57c" Jan 12 13:49:32 crc kubenswrapper[4580]: E0112 13:49:32.283150 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdz6l_openshift-machine-config-operator(aaecc77f-21ca-4f15-86e0-0dff03d2ab7b)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" Jan 12 13:49:47 crc kubenswrapper[4580]: I0112 13:49:47.282031 4580 scope.go:117] "RemoveContainer" containerID="00a7a2d612a981879dc66a6fe1919adb6186ca0faf533e44a7208cf36337f57c" Jan 12 13:49:47 crc kubenswrapper[4580]: E0112 13:49:47.283086 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdz6l_openshift-machine-config-operator(aaecc77f-21ca-4f15-86e0-0dff03d2ab7b)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" Jan 12 13:50:01 crc kubenswrapper[4580]: I0112 13:50:01.287700 4580 scope.go:117] "RemoveContainer" containerID="00a7a2d612a981879dc66a6fe1919adb6186ca0faf533e44a7208cf36337f57c" Jan 12 13:50:01 crc kubenswrapper[4580]: E0112 13:50:01.290012 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdz6l_openshift-machine-config-operator(aaecc77f-21ca-4f15-86e0-0dff03d2ab7b)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" Jan 12 13:50:13 crc kubenswrapper[4580]: I0112 13:50:13.281941 4580 scope.go:117] "RemoveContainer" containerID="00a7a2d612a981879dc66a6fe1919adb6186ca0faf533e44a7208cf36337f57c" Jan 12 13:50:13 crc kubenswrapper[4580]: E0112 13:50:13.282876 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdz6l_openshift-machine-config-operator(aaecc77f-21ca-4f15-86e0-0dff03d2ab7b)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" Jan 12 13:50:25 crc kubenswrapper[4580]: I0112 13:50:25.282589 4580 scope.go:117] "RemoveContainer" containerID="00a7a2d612a981879dc66a6fe1919adb6186ca0faf533e44a7208cf36337f57c" Jan 12 13:50:25 crc kubenswrapper[4580]: E0112 13:50:25.283562 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdz6l_openshift-machine-config-operator(aaecc77f-21ca-4f15-86e0-0dff03d2ab7b)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" Jan 12 13:50:38 crc kubenswrapper[4580]: I0112 13:50:38.283227 4580 scope.go:117] "RemoveContainer" containerID="00a7a2d612a981879dc66a6fe1919adb6186ca0faf533e44a7208cf36337f57c" Jan 12 13:50:38 crc kubenswrapper[4580]: E0112 13:50:38.284068 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdz6l_openshift-machine-config-operator(aaecc77f-21ca-4f15-86e0-0dff03d2ab7b)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" Jan 12 13:50:49 crc kubenswrapper[4580]: I0112 13:50:49.282122 4580 scope.go:117] "RemoveContainer" containerID="00a7a2d612a981879dc66a6fe1919adb6186ca0faf533e44a7208cf36337f57c" Jan 12 13:50:49 crc kubenswrapper[4580]: E0112 13:50:49.283121 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdz6l_openshift-machine-config-operator(aaecc77f-21ca-4f15-86e0-0dff03d2ab7b)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" Jan 12 13:51:01 crc kubenswrapper[4580]: I0112 13:51:01.287961 4580 scope.go:117] "RemoveContainer" containerID="00a7a2d612a981879dc66a6fe1919adb6186ca0faf533e44a7208cf36337f57c" Jan 12 13:51:01 crc kubenswrapper[4580]: E0112 13:51:01.289144 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdz6l_openshift-machine-config-operator(aaecc77f-21ca-4f15-86e0-0dff03d2ab7b)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" Jan 12 13:51:16 crc kubenswrapper[4580]: I0112 13:51:16.281731 4580 scope.go:117] "RemoveContainer" containerID="00a7a2d612a981879dc66a6fe1919adb6186ca0faf533e44a7208cf36337f57c" Jan 12 13:51:16 crc kubenswrapper[4580]: E0112 13:51:16.282610 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdz6l_openshift-machine-config-operator(aaecc77f-21ca-4f15-86e0-0dff03d2ab7b)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" Jan 12 13:51:27 crc kubenswrapper[4580]: I0112 13:51:27.281741 4580 scope.go:117] "RemoveContainer" containerID="00a7a2d612a981879dc66a6fe1919adb6186ca0faf533e44a7208cf36337f57c" Jan 12 13:51:27 crc kubenswrapper[4580]: E0112 13:51:27.282818 4580 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdz6l_openshift-machine-config-operator(aaecc77f-21ca-4f15-86e0-0dff03d2ab7b)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" Jan 12 13:51:39 crc kubenswrapper[4580]: I0112 13:51:39.282548 4580 scope.go:117] "RemoveContainer" containerID="00a7a2d612a981879dc66a6fe1919adb6186ca0faf533e44a7208cf36337f57c" Jan 12 13:51:39 crc kubenswrapper[4580]: E0112 13:51:39.283551 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdz6l_openshift-machine-config-operator(aaecc77f-21ca-4f15-86e0-0dff03d2ab7b)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" Jan 12 13:51:53 crc kubenswrapper[4580]: I0112 13:51:53.282946 4580 scope.go:117] "RemoveContainer" containerID="00a7a2d612a981879dc66a6fe1919adb6186ca0faf533e44a7208cf36337f57c" Jan 12 13:51:53 crc kubenswrapper[4580]: I0112 13:51:53.531195 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" event={"ID":"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b","Type":"ContainerStarted","Data":"f056eb17695b732b04c0728c626e050e3ff330dfd43e35dfe03fa9c3f1091798"} Jan 12 13:52:06 crc kubenswrapper[4580]: I0112 13:52:06.343877 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-29wc2"] Jan 12 13:52:06 crc kubenswrapper[4580]: E0112 13:52:06.344966 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef319ef0-89e4-493d-859f-952fa9f99a45" containerName="collect-profiles" Jan 12 13:52:06 crc kubenswrapper[4580]: I0112 
13:52:06.344981 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef319ef0-89e4-493d-859f-952fa9f99a45" containerName="collect-profiles" Jan 12 13:52:06 crc kubenswrapper[4580]: I0112 13:52:06.345197 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef319ef0-89e4-493d-859f-952fa9f99a45" containerName="collect-profiles" Jan 12 13:52:06 crc kubenswrapper[4580]: I0112 13:52:06.346645 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-29wc2" Jan 12 13:52:06 crc kubenswrapper[4580]: I0112 13:52:06.356584 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-29wc2"] Jan 12 13:52:06 crc kubenswrapper[4580]: I0112 13:52:06.428596 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2qgw\" (UniqueName: \"kubernetes.io/projected/1ccaefc1-6b08-4f6f-9004-f5cf41c9274b-kube-api-access-l2qgw\") pod \"certified-operators-29wc2\" (UID: \"1ccaefc1-6b08-4f6f-9004-f5cf41c9274b\") " pod="openshift-marketplace/certified-operators-29wc2" Jan 12 13:52:06 crc kubenswrapper[4580]: I0112 13:52:06.428672 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ccaefc1-6b08-4f6f-9004-f5cf41c9274b-catalog-content\") pod \"certified-operators-29wc2\" (UID: \"1ccaefc1-6b08-4f6f-9004-f5cf41c9274b\") " pod="openshift-marketplace/certified-operators-29wc2" Jan 12 13:52:06 crc kubenswrapper[4580]: I0112 13:52:06.428852 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ccaefc1-6b08-4f6f-9004-f5cf41c9274b-utilities\") pod \"certified-operators-29wc2\" (UID: \"1ccaefc1-6b08-4f6f-9004-f5cf41c9274b\") " pod="openshift-marketplace/certified-operators-29wc2" Jan 12 13:52:06 crc 
kubenswrapper[4580]: I0112 13:52:06.531132 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ccaefc1-6b08-4f6f-9004-f5cf41c9274b-catalog-content\") pod \"certified-operators-29wc2\" (UID: \"1ccaefc1-6b08-4f6f-9004-f5cf41c9274b\") " pod="openshift-marketplace/certified-operators-29wc2" Jan 12 13:52:06 crc kubenswrapper[4580]: I0112 13:52:06.531195 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ccaefc1-6b08-4f6f-9004-f5cf41c9274b-utilities\") pod \"certified-operators-29wc2\" (UID: \"1ccaefc1-6b08-4f6f-9004-f5cf41c9274b\") " pod="openshift-marketplace/certified-operators-29wc2" Jan 12 13:52:06 crc kubenswrapper[4580]: I0112 13:52:06.531358 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2qgw\" (UniqueName: \"kubernetes.io/projected/1ccaefc1-6b08-4f6f-9004-f5cf41c9274b-kube-api-access-l2qgw\") pod \"certified-operators-29wc2\" (UID: \"1ccaefc1-6b08-4f6f-9004-f5cf41c9274b\") " pod="openshift-marketplace/certified-operators-29wc2" Jan 12 13:52:06 crc kubenswrapper[4580]: I0112 13:52:06.531690 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ccaefc1-6b08-4f6f-9004-f5cf41c9274b-catalog-content\") pod \"certified-operators-29wc2\" (UID: \"1ccaefc1-6b08-4f6f-9004-f5cf41c9274b\") " pod="openshift-marketplace/certified-operators-29wc2" Jan 12 13:52:06 crc kubenswrapper[4580]: I0112 13:52:06.531739 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ccaefc1-6b08-4f6f-9004-f5cf41c9274b-utilities\") pod \"certified-operators-29wc2\" (UID: \"1ccaefc1-6b08-4f6f-9004-f5cf41c9274b\") " pod="openshift-marketplace/certified-operators-29wc2" Jan 12 13:52:06 crc kubenswrapper[4580]: I0112 13:52:06.553070 
4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2qgw\" (UniqueName: \"kubernetes.io/projected/1ccaefc1-6b08-4f6f-9004-f5cf41c9274b-kube-api-access-l2qgw\") pod \"certified-operators-29wc2\" (UID: \"1ccaefc1-6b08-4f6f-9004-f5cf41c9274b\") " pod="openshift-marketplace/certified-operators-29wc2"
Jan 12 13:52:06 crc kubenswrapper[4580]: I0112 13:52:06.662723 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-29wc2"
Jan 12 13:52:07 crc kubenswrapper[4580]: I0112 13:52:07.140626 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-29wc2"]
Jan 12 13:52:07 crc kubenswrapper[4580]: I0112 13:52:07.660503 4580 generic.go:334] "Generic (PLEG): container finished" podID="1ccaefc1-6b08-4f6f-9004-f5cf41c9274b" containerID="e8b8bee63b38aef6f217e03a172224aed880acbb439033b45dd2c64a49e9834c" exitCode=0
Jan 12 13:52:07 crc kubenswrapper[4580]: I0112 13:52:07.660586 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-29wc2" event={"ID":"1ccaefc1-6b08-4f6f-9004-f5cf41c9274b","Type":"ContainerDied","Data":"e8b8bee63b38aef6f217e03a172224aed880acbb439033b45dd2c64a49e9834c"}
Jan 12 13:52:07 crc kubenswrapper[4580]: I0112 13:52:07.660645 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-29wc2" event={"ID":"1ccaefc1-6b08-4f6f-9004-f5cf41c9274b","Type":"ContainerStarted","Data":"bbec603b011b352218f38af40cfa57767022d2556495aba9ae0eaf0e9c737292"}
Jan 12 13:52:07 crc kubenswrapper[4580]: I0112 13:52:07.663452 4580 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 12 13:52:09 crc kubenswrapper[4580]: I0112 13:52:09.686443 4580 generic.go:334] "Generic (PLEG): container finished" podID="1ccaefc1-6b08-4f6f-9004-f5cf41c9274b"
containerID="8b94aeceb90a1ea296d090f3447e79b09b5e3b68112d079947158384de8b12d9" exitCode=0
Jan 12 13:52:09 crc kubenswrapper[4580]: I0112 13:52:09.686499 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-29wc2" event={"ID":"1ccaefc1-6b08-4f6f-9004-f5cf41c9274b","Type":"ContainerDied","Data":"8b94aeceb90a1ea296d090f3447e79b09b5e3b68112d079947158384de8b12d9"}
Jan 12 13:52:10 crc kubenswrapper[4580]: I0112 13:52:10.696625 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-29wc2" event={"ID":"1ccaefc1-6b08-4f6f-9004-f5cf41c9274b","Type":"ContainerStarted","Data":"5b51f50851fc47bb46dd591c3ff2109dd2e9042edfd0c4ceaf90e99b5777f3e7"}
Jan 12 13:52:10 crc kubenswrapper[4580]: I0112 13:52:10.713619 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-29wc2" podStartSLOduration=2.020305422 podStartE2EDuration="4.713603467s" podCreationTimestamp="2026-01-12 13:52:06 +0000 UTC" firstStartedPulling="2026-01-12 13:52:07.663055213 +0000 UTC m=+2726.707273903" lastFinishedPulling="2026-01-12 13:52:10.356353258 +0000 UTC m=+2729.400571948" observedRunningTime="2026-01-12 13:52:10.712017825 +0000 UTC m=+2729.756236515" watchObservedRunningTime="2026-01-12 13:52:10.713603467 +0000 UTC m=+2729.757822157"
Jan 12 13:52:16 crc kubenswrapper[4580]: I0112 13:52:16.663152 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-29wc2"
Jan 12 13:52:16 crc kubenswrapper[4580]: I0112 13:52:16.663902 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-29wc2"
Jan 12 13:52:16 crc kubenswrapper[4580]: I0112 13:52:16.704690 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-29wc2"
Jan 12 13:52:16 crc kubenswrapper[4580]: I0112
13:52:16.750822 4580 generic.go:334] "Generic (PLEG): container finished" podID="8e031ef3-1afa-438b-8f95-cd63e4d5eb5a" containerID="78cb17895662374c91348f1c4d5991fc02889c2c3effa9b00ba074810d1bcdb8" exitCode=0
Jan 12 13:52:16 crc kubenswrapper[4580]: I0112 13:52:16.750912 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"8e031ef3-1afa-438b-8f95-cd63e4d5eb5a","Type":"ContainerDied","Data":"78cb17895662374c91348f1c4d5991fc02889c2c3effa9b00ba074810d1bcdb8"}
Jan 12 13:52:16 crc kubenswrapper[4580]: I0112 13:52:16.815306 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-29wc2"
Jan 12 13:52:16 crc kubenswrapper[4580]: I0112 13:52:16.936558 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-29wc2"]
Jan 12 13:52:18 crc kubenswrapper[4580]: I0112 13:52:18.043926 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Jan 12 13:52:18 crc kubenswrapper[4580]: I0112 13:52:18.188960 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8e031ef3-1afa-438b-8f95-cd63e4d5eb5a-openstack-config-secret\") pod \"8e031ef3-1afa-438b-8f95-cd63e4d5eb5a\" (UID: \"8e031ef3-1afa-438b-8f95-cd63e4d5eb5a\") "
Jan 12 13:52:18 crc kubenswrapper[4580]: I0112 13:52:18.189694 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9zhn\" (UniqueName: \"kubernetes.io/projected/8e031ef3-1afa-438b-8f95-cd63e4d5eb5a-kube-api-access-f9zhn\") pod \"8e031ef3-1afa-438b-8f95-cd63e4d5eb5a\" (UID: \"8e031ef3-1afa-438b-8f95-cd63e4d5eb5a\") "
Jan 12 13:52:18 crc kubenswrapper[4580]: I0112 13:52:18.189755 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName:
\"kubernetes.io/secret/8e031ef3-1afa-438b-8f95-cd63e4d5eb5a-ca-certs\") pod \"8e031ef3-1afa-438b-8f95-cd63e4d5eb5a\" (UID: \"8e031ef3-1afa-438b-8f95-cd63e4d5eb5a\") "
Jan 12 13:52:18 crc kubenswrapper[4580]: I0112 13:52:18.189776 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"8e031ef3-1afa-438b-8f95-cd63e4d5eb5a\" (UID: \"8e031ef3-1afa-438b-8f95-cd63e4d5eb5a\") "
Jan 12 13:52:18 crc kubenswrapper[4580]: I0112 13:52:18.189823 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8e031ef3-1afa-438b-8f95-cd63e4d5eb5a-openstack-config\") pod \"8e031ef3-1afa-438b-8f95-cd63e4d5eb5a\" (UID: \"8e031ef3-1afa-438b-8f95-cd63e4d5eb5a\") "
Jan 12 13:52:18 crc kubenswrapper[4580]: I0112 13:52:18.189861 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8e031ef3-1afa-438b-8f95-cd63e4d5eb5a-config-data\") pod \"8e031ef3-1afa-438b-8f95-cd63e4d5eb5a\" (UID: \"8e031ef3-1afa-438b-8f95-cd63e4d5eb5a\") "
Jan 12 13:52:18 crc kubenswrapper[4580]: I0112 13:52:18.189884 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/8e031ef3-1afa-438b-8f95-cd63e4d5eb5a-test-operator-ephemeral-temporary\") pod \"8e031ef3-1afa-438b-8f95-cd63e4d5eb5a\" (UID: \"8e031ef3-1afa-438b-8f95-cd63e4d5eb5a\") "
Jan 12 13:52:18 crc kubenswrapper[4580]: I0112 13:52:18.190244 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8e031ef3-1afa-438b-8f95-cd63e4d5eb5a-ssh-key\") pod \"8e031ef3-1afa-438b-8f95-cd63e4d5eb5a\" (UID: \"8e031ef3-1afa-438b-8f95-cd63e4d5eb5a\") "
Jan 12 13:52:18 crc kubenswrapper[4580]: I0112 13:52:18.190300 4580
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/8e031ef3-1afa-438b-8f95-cd63e4d5eb5a-test-operator-ephemeral-workdir\") pod \"8e031ef3-1afa-438b-8f95-cd63e4d5eb5a\" (UID: \"8e031ef3-1afa-438b-8f95-cd63e4d5eb5a\") "
Jan 12 13:52:18 crc kubenswrapper[4580]: I0112 13:52:18.190868 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e031ef3-1afa-438b-8f95-cd63e4d5eb5a-config-data" (OuterVolumeSpecName: "config-data") pod "8e031ef3-1afa-438b-8f95-cd63e4d5eb5a" (UID: "8e031ef3-1afa-438b-8f95-cd63e4d5eb5a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:52:18 crc kubenswrapper[4580]: I0112 13:52:18.194513 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e031ef3-1afa-438b-8f95-cd63e4d5eb5a-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "8e031ef3-1afa-438b-8f95-cd63e4d5eb5a" (UID: "8e031ef3-1afa-438b-8f95-cd63e4d5eb5a"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 12 13:52:18 crc kubenswrapper[4580]: I0112 13:52:18.197238 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e031ef3-1afa-438b-8f95-cd63e4d5eb5a-kube-api-access-f9zhn" (OuterVolumeSpecName: "kube-api-access-f9zhn") pod "8e031ef3-1afa-438b-8f95-cd63e4d5eb5a" (UID: "8e031ef3-1afa-438b-8f95-cd63e4d5eb5a"). InnerVolumeSpecName "kube-api-access-f9zhn".
PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 13:52:18 crc kubenswrapper[4580]: I0112 13:52:18.198160 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e031ef3-1afa-438b-8f95-cd63e4d5eb5a-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "8e031ef3-1afa-438b-8f95-cd63e4d5eb5a" (UID: "8e031ef3-1afa-438b-8f95-cd63e4d5eb5a"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 12 13:52:18 crc kubenswrapper[4580]: I0112 13:52:18.198413 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "test-operator-logs") pod "8e031ef3-1afa-438b-8f95-cd63e4d5eb5a" (UID: "8e031ef3-1afa-438b-8f95-cd63e4d5eb5a"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 12 13:52:18 crc kubenswrapper[4580]: I0112 13:52:18.218988 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e031ef3-1afa-438b-8f95-cd63e4d5eb5a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "8e031ef3-1afa-438b-8f95-cd63e4d5eb5a" (UID: "8e031ef3-1afa-438b-8f95-cd63e4d5eb5a"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:52:18 crc kubenswrapper[4580]: I0112 13:52:18.219599 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e031ef3-1afa-438b-8f95-cd63e4d5eb5a-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "8e031ef3-1afa-438b-8f95-cd63e4d5eb5a" (UID: "8e031ef3-1afa-438b-8f95-cd63e4d5eb5a"). InnerVolumeSpecName "openstack-config-secret".
PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:52:18 crc kubenswrapper[4580]: I0112 13:52:18.221635 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e031ef3-1afa-438b-8f95-cd63e4d5eb5a-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "8e031ef3-1afa-438b-8f95-cd63e4d5eb5a" (UID: "8e031ef3-1afa-438b-8f95-cd63e4d5eb5a"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 12 13:52:18 crc kubenswrapper[4580]: I0112 13:52:18.238975 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e031ef3-1afa-438b-8f95-cd63e4d5eb5a-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "8e031ef3-1afa-438b-8f95-cd63e4d5eb5a" (UID: "8e031ef3-1afa-438b-8f95-cd63e4d5eb5a"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 12 13:52:18 crc kubenswrapper[4580]: I0112 13:52:18.293416 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9zhn\" (UniqueName: \"kubernetes.io/projected/8e031ef3-1afa-438b-8f95-cd63e4d5eb5a-kube-api-access-f9zhn\") on node \"crc\" DevicePath \"\""
Jan 12 13:52:18 crc kubenswrapper[4580]: I0112 13:52:18.293454 4580 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/8e031ef3-1afa-438b-8f95-cd63e4d5eb5a-ca-certs\") on node \"crc\" DevicePath \"\""
Jan 12 13:52:18 crc kubenswrapper[4580]: I0112 13:52:18.293496 4580 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" "
Jan 12 13:52:18 crc kubenswrapper[4580]: I0112 13:52:18.293510 4580 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8e031ef3-1afa-438b-8f95-cd63e4d5eb5a-openstack-config\") on node \"crc\" DevicePath \"\""
Jan 12 13:52:18 crc
kubenswrapper[4580]: I0112 13:52:18.293522 4580 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/8e031ef3-1afa-438b-8f95-cd63e4d5eb5a-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\""
Jan 12 13:52:18 crc kubenswrapper[4580]: I0112 13:52:18.293532 4580 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8e031ef3-1afa-438b-8f95-cd63e4d5eb5a-config-data\") on node \"crc\" DevicePath \"\""
Jan 12 13:52:18 crc kubenswrapper[4580]: I0112 13:52:18.293541 4580 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8e031ef3-1afa-438b-8f95-cd63e4d5eb5a-ssh-key\") on node \"crc\" DevicePath \"\""
Jan 12 13:52:18 crc kubenswrapper[4580]: I0112 13:52:18.293553 4580 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/8e031ef3-1afa-438b-8f95-cd63e4d5eb5a-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\""
Jan 12 13:52:18 crc kubenswrapper[4580]: I0112 13:52:18.293565 4580 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8e031ef3-1afa-438b-8f95-cd63e4d5eb5a-openstack-config-secret\") on node \"crc\" DevicePath \"\""
Jan 12 13:52:18 crc kubenswrapper[4580]: I0112 13:52:18.310793 4580 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc"
Jan 12 13:52:18 crc kubenswrapper[4580]: I0112 13:52:18.395885 4580 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\""
Jan 12 13:52:18 crc kubenswrapper[4580]: I0112 13:52:18.766608 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="openstack/tempest-tests-tempest" event={"ID":"8e031ef3-1afa-438b-8f95-cd63e4d5eb5a","Type":"ContainerDied","Data":"24553f7e59b2aaab036b7405a5929b2cac4dcccabb03468f44af974d0f04c2d1"}
Jan 12 13:52:18 crc kubenswrapper[4580]: I0112 13:52:18.766658 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Jan 12 13:52:18 crc kubenswrapper[4580]: I0112 13:52:18.766676 4580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24553f7e59b2aaab036b7405a5929b2cac4dcccabb03468f44af974d0f04c2d1"
Jan 12 13:52:18 crc kubenswrapper[4580]: I0112 13:52:18.766779 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-29wc2" podUID="1ccaefc1-6b08-4f6f-9004-f5cf41c9274b" containerName="registry-server" containerID="cri-o://5b51f50851fc47bb46dd591c3ff2109dd2e9042edfd0c4ceaf90e99b5777f3e7" gracePeriod=2
Jan 12 13:52:19 crc kubenswrapper[4580]: I0112 13:52:19.214327 4580 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/certified-operators-29wc2"
Jan 12 13:52:19 crc kubenswrapper[4580]: I0112 13:52:19.317618 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ccaefc1-6b08-4f6f-9004-f5cf41c9274b-catalog-content\") pod \"1ccaefc1-6b08-4f6f-9004-f5cf41c9274b\" (UID: \"1ccaefc1-6b08-4f6f-9004-f5cf41c9274b\") "
Jan 12 13:52:19 crc kubenswrapper[4580]: I0112 13:52:19.317710 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2qgw\" (UniqueName: \"kubernetes.io/projected/1ccaefc1-6b08-4f6f-9004-f5cf41c9274b-kube-api-access-l2qgw\") pod \"1ccaefc1-6b08-4f6f-9004-f5cf41c9274b\" (UID: \"1ccaefc1-6b08-4f6f-9004-f5cf41c9274b\") "
Jan 12 13:52:19 crc kubenswrapper[4580]: I0112 13:52:19.317744 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ccaefc1-6b08-4f6f-9004-f5cf41c9274b-utilities\") pod \"1ccaefc1-6b08-4f6f-9004-f5cf41c9274b\" (UID: \"1ccaefc1-6b08-4f6f-9004-f5cf41c9274b\") "
Jan 12 13:52:19 crc kubenswrapper[4580]: I0112 13:52:19.318641 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ccaefc1-6b08-4f6f-9004-f5cf41c9274b-utilities" (OuterVolumeSpecName: "utilities") pod "1ccaefc1-6b08-4f6f-9004-f5cf41c9274b" (UID: "1ccaefc1-6b08-4f6f-9004-f5cf41c9274b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 12 13:52:19 crc kubenswrapper[4580]: I0112 13:52:19.324555 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ccaefc1-6b08-4f6f-9004-f5cf41c9274b-kube-api-access-l2qgw" (OuterVolumeSpecName: "kube-api-access-l2qgw") pod "1ccaefc1-6b08-4f6f-9004-f5cf41c9274b" (UID: "1ccaefc1-6b08-4f6f-9004-f5cf41c9274b"). InnerVolumeSpecName "kube-api-access-l2qgw".
PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 13:52:19 crc kubenswrapper[4580]: I0112 13:52:19.359075 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ccaefc1-6b08-4f6f-9004-f5cf41c9274b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1ccaefc1-6b08-4f6f-9004-f5cf41c9274b" (UID: "1ccaefc1-6b08-4f6f-9004-f5cf41c9274b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 12 13:52:19 crc kubenswrapper[4580]: I0112 13:52:19.420249 4580 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ccaefc1-6b08-4f6f-9004-f5cf41c9274b-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 12 13:52:19 crc kubenswrapper[4580]: I0112 13:52:19.420286 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2qgw\" (UniqueName: \"kubernetes.io/projected/1ccaefc1-6b08-4f6f-9004-f5cf41c9274b-kube-api-access-l2qgw\") on node \"crc\" DevicePath \"\""
Jan 12 13:52:19 crc kubenswrapper[4580]: I0112 13:52:19.420301 4580 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ccaefc1-6b08-4f6f-9004-f5cf41c9274b-utilities\") on node \"crc\" DevicePath \"\""
Jan 12 13:52:19 crc kubenswrapper[4580]: I0112 13:52:19.776298 4580 generic.go:334] "Generic (PLEG): container finished" podID="1ccaefc1-6b08-4f6f-9004-f5cf41c9274b" containerID="5b51f50851fc47bb46dd591c3ff2109dd2e9042edfd0c4ceaf90e99b5777f3e7" exitCode=0
Jan 12 13:52:19 crc kubenswrapper[4580]: I0112 13:52:19.776342 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-29wc2" event={"ID":"1ccaefc1-6b08-4f6f-9004-f5cf41c9274b","Type":"ContainerDied","Data":"5b51f50851fc47bb46dd591c3ff2109dd2e9042edfd0c4ceaf90e99b5777f3e7"}
Jan 12 13:52:19 crc kubenswrapper[4580]: I0112 13:52:19.776372 4580 kubelet.go:2453] "SyncLoop (PLEG): event
for pod" pod="openshift-marketplace/certified-operators-29wc2" event={"ID":"1ccaefc1-6b08-4f6f-9004-f5cf41c9274b","Type":"ContainerDied","Data":"bbec603b011b352218f38af40cfa57767022d2556495aba9ae0eaf0e9c737292"}
Jan 12 13:52:19 crc kubenswrapper[4580]: I0112 13:52:19.776383 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-29wc2"
Jan 12 13:52:19 crc kubenswrapper[4580]: I0112 13:52:19.776389 4580 scope.go:117] "RemoveContainer" containerID="5b51f50851fc47bb46dd591c3ff2109dd2e9042edfd0c4ceaf90e99b5777f3e7"
Jan 12 13:52:19 crc kubenswrapper[4580]: I0112 13:52:19.807392 4580 scope.go:117] "RemoveContainer" containerID="8b94aeceb90a1ea296d090f3447e79b09b5e3b68112d079947158384de8b12d9"
Jan 12 13:52:19 crc kubenswrapper[4580]: I0112 13:52:19.812946 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-29wc2"]
Jan 12 13:52:19 crc kubenswrapper[4580]: I0112 13:52:19.831181 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-29wc2"]
Jan 12 13:52:19 crc kubenswrapper[4580]: I0112 13:52:19.833097 4580 scope.go:117] "RemoveContainer" containerID="e8b8bee63b38aef6f217e03a172224aed880acbb439033b45dd2c64a49e9834c"
Jan 12 13:52:19 crc kubenswrapper[4580]: I0112 13:52:19.875682 4580 scope.go:117] "RemoveContainer" containerID="5b51f50851fc47bb46dd591c3ff2109dd2e9042edfd0c4ceaf90e99b5777f3e7"
Jan 12 13:52:19 crc kubenswrapper[4580]: E0112 13:52:19.876308 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b51f50851fc47bb46dd591c3ff2109dd2e9042edfd0c4ceaf90e99b5777f3e7\": container with ID starting with 5b51f50851fc47bb46dd591c3ff2109dd2e9042edfd0c4ceaf90e99b5777f3e7 not found: ID does not exist" containerID="5b51f50851fc47bb46dd591c3ff2109dd2e9042edfd0c4ceaf90e99b5777f3e7"
Jan 12 13:52:19 crc kubenswrapper[4580]: I0112
13:52:19.876346 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b51f50851fc47bb46dd591c3ff2109dd2e9042edfd0c4ceaf90e99b5777f3e7"} err="failed to get container status \"5b51f50851fc47bb46dd591c3ff2109dd2e9042edfd0c4ceaf90e99b5777f3e7\": rpc error: code = NotFound desc = could not find container \"5b51f50851fc47bb46dd591c3ff2109dd2e9042edfd0c4ceaf90e99b5777f3e7\": container with ID starting with 5b51f50851fc47bb46dd591c3ff2109dd2e9042edfd0c4ceaf90e99b5777f3e7 not found: ID does not exist"
Jan 12 13:52:19 crc kubenswrapper[4580]: I0112 13:52:19.876377 4580 scope.go:117] "RemoveContainer" containerID="8b94aeceb90a1ea296d090f3447e79b09b5e3b68112d079947158384de8b12d9"
Jan 12 13:52:19 crc kubenswrapper[4580]: E0112 13:52:19.877425 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b94aeceb90a1ea296d090f3447e79b09b5e3b68112d079947158384de8b12d9\": container with ID starting with 8b94aeceb90a1ea296d090f3447e79b09b5e3b68112d079947158384de8b12d9 not found: ID does not exist" containerID="8b94aeceb90a1ea296d090f3447e79b09b5e3b68112d079947158384de8b12d9"
Jan 12 13:52:19 crc kubenswrapper[4580]: I0112 13:52:19.877501 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b94aeceb90a1ea296d090f3447e79b09b5e3b68112d079947158384de8b12d9"} err="failed to get container status \"8b94aeceb90a1ea296d090f3447e79b09b5e3b68112d079947158384de8b12d9\": rpc error: code = NotFound desc = could not find container \"8b94aeceb90a1ea296d090f3447e79b09b5e3b68112d079947158384de8b12d9\": container with ID starting with 8b94aeceb90a1ea296d090f3447e79b09b5e3b68112d079947158384de8b12d9 not found: ID does not exist"
Jan 12 13:52:19 crc kubenswrapper[4580]: I0112 13:52:19.877544 4580 scope.go:117] "RemoveContainer" containerID="e8b8bee63b38aef6f217e03a172224aed880acbb439033b45dd2c64a49e9834c"
Jan 12 13:52:19 crc
kubenswrapper[4580]: E0112 13:52:19.877975 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8b8bee63b38aef6f217e03a172224aed880acbb439033b45dd2c64a49e9834c\": container with ID starting with e8b8bee63b38aef6f217e03a172224aed880acbb439033b45dd2c64a49e9834c not found: ID does not exist" containerID="e8b8bee63b38aef6f217e03a172224aed880acbb439033b45dd2c64a49e9834c"
Jan 12 13:52:19 crc kubenswrapper[4580]: I0112 13:52:19.878003 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8b8bee63b38aef6f217e03a172224aed880acbb439033b45dd2c64a49e9834c"} err="failed to get container status \"e8b8bee63b38aef6f217e03a172224aed880acbb439033b45dd2c64a49e9834c\": rpc error: code = NotFound desc = could not find container \"e8b8bee63b38aef6f217e03a172224aed880acbb439033b45dd2c64a49e9834c\": container with ID starting with e8b8bee63b38aef6f217e03a172224aed880acbb439033b45dd2c64a49e9834c not found: ID does not exist"
Jan 12 13:52:21 crc kubenswrapper[4580]: I0112 13:52:21.293582 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ccaefc1-6b08-4f6f-9004-f5cf41c9274b" path="/var/lib/kubelet/pods/1ccaefc1-6b08-4f6f-9004-f5cf41c9274b/volumes"
Jan 12 13:52:25 crc kubenswrapper[4580]: I0112 13:52:25.891304 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Jan 12 13:52:25 crc kubenswrapper[4580]: E0112 13:52:25.892230 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e031ef3-1afa-438b-8f95-cd63e4d5eb5a" containerName="tempest-tests-tempest-tests-runner"
Jan 12 13:52:25 crc kubenswrapper[4580]: I0112 13:52:25.892248 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e031ef3-1afa-438b-8f95-cd63e4d5eb5a" containerName="tempest-tests-tempest-tests-runner"
Jan 12 13:52:25 crc kubenswrapper[4580]: E0112 13:52:25.892263 4580 cpu_manager.go:410]
"RemoveStaleState: removing container" podUID="1ccaefc1-6b08-4f6f-9004-f5cf41c9274b" containerName="registry-server"
Jan 12 13:52:25 crc kubenswrapper[4580]: I0112 13:52:25.892269 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ccaefc1-6b08-4f6f-9004-f5cf41c9274b" containerName="registry-server"
Jan 12 13:52:25 crc kubenswrapper[4580]: E0112 13:52:25.892302 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ccaefc1-6b08-4f6f-9004-f5cf41c9274b" containerName="extract-utilities"
Jan 12 13:52:25 crc kubenswrapper[4580]: I0112 13:52:25.892308 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ccaefc1-6b08-4f6f-9004-f5cf41c9274b" containerName="extract-utilities"
Jan 12 13:52:25 crc kubenswrapper[4580]: E0112 13:52:25.892322 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ccaefc1-6b08-4f6f-9004-f5cf41c9274b" containerName="extract-content"
Jan 12 13:52:25 crc kubenswrapper[4580]: I0112 13:52:25.892328 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ccaefc1-6b08-4f6f-9004-f5cf41c9274b" containerName="extract-content"
Jan 12 13:52:25 crc kubenswrapper[4580]: I0112 13:52:25.892513 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e031ef3-1afa-438b-8f95-cd63e4d5eb5a" containerName="tempest-tests-tempest-tests-runner"
Jan 12 13:52:25 crc kubenswrapper[4580]: I0112 13:52:25.892539 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ccaefc1-6b08-4f6f-9004-f5cf41c9274b" containerName="registry-server"
Jan 12 13:52:25 crc kubenswrapper[4580]: I0112 13:52:25.893272 4580 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 12 13:52:25 crc kubenswrapper[4580]: I0112 13:52:25.894799 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-2tdfr"
Jan 12 13:52:25 crc kubenswrapper[4580]: I0112 13:52:25.898837 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Jan 12 13:52:26 crc kubenswrapper[4580]: I0112 13:52:26.059689 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"be1a4134-b582-42ee-b8d3-145911d7bdec\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 12 13:52:26 crc kubenswrapper[4580]: I0112 13:52:26.059858 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2n79g\" (UniqueName: \"kubernetes.io/projected/be1a4134-b582-42ee-b8d3-145911d7bdec-kube-api-access-2n79g\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"be1a4134-b582-42ee-b8d3-145911d7bdec\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 12 13:52:26 crc kubenswrapper[4580]: I0112 13:52:26.161463 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2n79g\" (UniqueName: \"kubernetes.io/projected/be1a4134-b582-42ee-b8d3-145911d7bdec-kube-api-access-2n79g\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"be1a4134-b582-42ee-b8d3-145911d7bdec\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 12 13:52:26 crc kubenswrapper[4580]: I0112 13:52:26.161574 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName:
\"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"be1a4134-b582-42ee-b8d3-145911d7bdec\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 12 13:52:26 crc kubenswrapper[4580]: I0112 13:52:26.161973 4580 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"be1a4134-b582-42ee-b8d3-145911d7bdec\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 12 13:52:26 crc kubenswrapper[4580]: I0112 13:52:26.179949 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2n79g\" (UniqueName: \"kubernetes.io/projected/be1a4134-b582-42ee-b8d3-145911d7bdec-kube-api-access-2n79g\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"be1a4134-b582-42ee-b8d3-145911d7bdec\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 12 13:52:26 crc kubenswrapper[4580]: I0112 13:52:26.184962 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"be1a4134-b582-42ee-b8d3-145911d7bdec\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 12 13:52:26 crc kubenswrapper[4580]: I0112 13:52:26.219190 4580 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 12 13:52:26 crc kubenswrapper[4580]: I0112 13:52:26.641330 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Jan 12 13:52:26 crc kubenswrapper[4580]: I0112 13:52:26.849634 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"be1a4134-b582-42ee-b8d3-145911d7bdec","Type":"ContainerStarted","Data":"4cbaeb48fd7be30faf5d05705ec538deb2b3164d16a54413083029e260dd4662"}
Jan 12 13:52:28 crc kubenswrapper[4580]: I0112 13:52:28.880464 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"be1a4134-b582-42ee-b8d3-145911d7bdec","Type":"ContainerStarted","Data":"26237a13a36a280be562518fa721680967b2d5f586fb0a04528fac9a5d2d8fa1"}
Jan 12 13:52:28 crc kubenswrapper[4580]: I0112 13:52:28.895461 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.823732448 podStartE2EDuration="3.895445805s" podCreationTimestamp="2026-01-12 13:52:25 +0000 UTC" firstStartedPulling="2026-01-12 13:52:26.650377836 +0000 UTC m=+2745.694596527" lastFinishedPulling="2026-01-12 13:52:27.722091194 +0000 UTC m=+2746.766309884" observedRunningTime="2026-01-12 13:52:28.890486941 +0000 UTC m=+2747.934705632" watchObservedRunningTime="2026-01-12 13:52:28.895445805 +0000 UTC m=+2747.939664495"
Jan 12 13:53:00 crc kubenswrapper[4580]: I0112 13:53:00.868423 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7ck5k"]
Jan 12 13:53:00 crc kubenswrapper[4580]: I0112 13:53:00.870874 4580 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-operators-7ck5k" Jan 12 13:53:00 crc kubenswrapper[4580]: I0112 13:53:00.880040 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7ck5k"] Jan 12 13:53:00 crc kubenswrapper[4580]: I0112 13:53:00.938373 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/587aaece-0b29-4a79-b556-4499557894fa-catalog-content\") pod \"redhat-operators-7ck5k\" (UID: \"587aaece-0b29-4a79-b556-4499557894fa\") " pod="openshift-marketplace/redhat-operators-7ck5k" Jan 12 13:53:00 crc kubenswrapper[4580]: I0112 13:53:00.938418 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/587aaece-0b29-4a79-b556-4499557894fa-utilities\") pod \"redhat-operators-7ck5k\" (UID: \"587aaece-0b29-4a79-b556-4499557894fa\") " pod="openshift-marketplace/redhat-operators-7ck5k" Jan 12 13:53:00 crc kubenswrapper[4580]: I0112 13:53:00.938453 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rcls\" (UniqueName: \"kubernetes.io/projected/587aaece-0b29-4a79-b556-4499557894fa-kube-api-access-4rcls\") pod \"redhat-operators-7ck5k\" (UID: \"587aaece-0b29-4a79-b556-4499557894fa\") " pod="openshift-marketplace/redhat-operators-7ck5k" Jan 12 13:53:01 crc kubenswrapper[4580]: I0112 13:53:01.041134 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/587aaece-0b29-4a79-b556-4499557894fa-catalog-content\") pod \"redhat-operators-7ck5k\" (UID: \"587aaece-0b29-4a79-b556-4499557894fa\") " pod="openshift-marketplace/redhat-operators-7ck5k" Jan 12 13:53:01 crc kubenswrapper[4580]: I0112 13:53:01.041565 4580 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/587aaece-0b29-4a79-b556-4499557894fa-utilities\") pod \"redhat-operators-7ck5k\" (UID: \"587aaece-0b29-4a79-b556-4499557894fa\") " pod="openshift-marketplace/redhat-operators-7ck5k" Jan 12 13:53:01 crc kubenswrapper[4580]: I0112 13:53:01.041600 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rcls\" (UniqueName: \"kubernetes.io/projected/587aaece-0b29-4a79-b556-4499557894fa-kube-api-access-4rcls\") pod \"redhat-operators-7ck5k\" (UID: \"587aaece-0b29-4a79-b556-4499557894fa\") " pod="openshift-marketplace/redhat-operators-7ck5k" Jan 12 13:53:01 crc kubenswrapper[4580]: I0112 13:53:01.041815 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/587aaece-0b29-4a79-b556-4499557894fa-catalog-content\") pod \"redhat-operators-7ck5k\" (UID: \"587aaece-0b29-4a79-b556-4499557894fa\") " pod="openshift-marketplace/redhat-operators-7ck5k" Jan 12 13:53:01 crc kubenswrapper[4580]: I0112 13:53:01.042194 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/587aaece-0b29-4a79-b556-4499557894fa-utilities\") pod \"redhat-operators-7ck5k\" (UID: \"587aaece-0b29-4a79-b556-4499557894fa\") " pod="openshift-marketplace/redhat-operators-7ck5k" Jan 12 13:53:01 crc kubenswrapper[4580]: I0112 13:53:01.061224 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rcls\" (UniqueName: \"kubernetes.io/projected/587aaece-0b29-4a79-b556-4499557894fa-kube-api-access-4rcls\") pod \"redhat-operators-7ck5k\" (UID: \"587aaece-0b29-4a79-b556-4499557894fa\") " pod="openshift-marketplace/redhat-operators-7ck5k" Jan 12 13:53:01 crc kubenswrapper[4580]: I0112 13:53:01.189601 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7ck5k" Jan 12 13:53:01 crc kubenswrapper[4580]: I0112 13:53:01.750453 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7ck5k"] Jan 12 13:53:02 crc kubenswrapper[4580]: I0112 13:53:02.182627 4580 generic.go:334] "Generic (PLEG): container finished" podID="587aaece-0b29-4a79-b556-4499557894fa" containerID="0e55eb8de340920060e16a96105ae8ae8fa83fc1892ce882306c65d4e34670d9" exitCode=0 Jan 12 13:53:02 crc kubenswrapper[4580]: I0112 13:53:02.182728 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7ck5k" event={"ID":"587aaece-0b29-4a79-b556-4499557894fa","Type":"ContainerDied","Data":"0e55eb8de340920060e16a96105ae8ae8fa83fc1892ce882306c65d4e34670d9"} Jan 12 13:53:02 crc kubenswrapper[4580]: I0112 13:53:02.183848 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7ck5k" event={"ID":"587aaece-0b29-4a79-b556-4499557894fa","Type":"ContainerStarted","Data":"c153f0d2f8a796b2e61352dabbac2f8bf0be8b71b78576fb1d01f9f622f3fde1"} Jan 12 13:53:04 crc kubenswrapper[4580]: I0112 13:53:04.203331 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7ck5k" event={"ID":"587aaece-0b29-4a79-b556-4499557894fa","Type":"ContainerStarted","Data":"2bc39702c7b27a6643090db4988c68ec8ab8511909434eb8364e81cad29d7a40"} Jan 12 13:53:05 crc kubenswrapper[4580]: I0112 13:53:05.227026 4580 generic.go:334] "Generic (PLEG): container finished" podID="587aaece-0b29-4a79-b556-4499557894fa" containerID="2bc39702c7b27a6643090db4988c68ec8ab8511909434eb8364e81cad29d7a40" exitCode=0 Jan 12 13:53:05 crc kubenswrapper[4580]: I0112 13:53:05.227154 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7ck5k" 
event={"ID":"587aaece-0b29-4a79-b556-4499557894fa","Type":"ContainerDied","Data":"2bc39702c7b27a6643090db4988c68ec8ab8511909434eb8364e81cad29d7a40"} Jan 12 13:53:06 crc kubenswrapper[4580]: I0112 13:53:06.241494 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7ck5k" event={"ID":"587aaece-0b29-4a79-b556-4499557894fa","Type":"ContainerStarted","Data":"a57eaf1b660be7613028711a5d89e0c0b271cce3b313283d8be2ca0a3f00f061"} Jan 12 13:53:06 crc kubenswrapper[4580]: I0112 13:53:06.264047 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7ck5k" podStartSLOduration=2.712710319 podStartE2EDuration="6.264015154s" podCreationTimestamp="2026-01-12 13:53:00 +0000 UTC" firstStartedPulling="2026-01-12 13:53:02.184483868 +0000 UTC m=+2781.228702557" lastFinishedPulling="2026-01-12 13:53:05.735788702 +0000 UTC m=+2784.780007392" observedRunningTime="2026-01-12 13:53:06.257426575 +0000 UTC m=+2785.301645265" watchObservedRunningTime="2026-01-12 13:53:06.264015154 +0000 UTC m=+2785.308233843" Jan 12 13:53:11 crc kubenswrapper[4580]: I0112 13:53:11.189693 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7ck5k" Jan 12 13:53:11 crc kubenswrapper[4580]: I0112 13:53:11.190547 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7ck5k" Jan 12 13:53:11 crc kubenswrapper[4580]: I0112 13:53:11.225279 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7ck5k" Jan 12 13:53:11 crc kubenswrapper[4580]: I0112 13:53:11.336738 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7ck5k" Jan 12 13:53:11 crc kubenswrapper[4580]: I0112 13:53:11.458061 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-7ck5k"] Jan 12 13:53:13 crc kubenswrapper[4580]: I0112 13:53:13.305596 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7ck5k" podUID="587aaece-0b29-4a79-b556-4499557894fa" containerName="registry-server" containerID="cri-o://a57eaf1b660be7613028711a5d89e0c0b271cce3b313283d8be2ca0a3f00f061" gracePeriod=2 Jan 12 13:53:13 crc kubenswrapper[4580]: I0112 13:53:13.706225 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7ck5k" Jan 12 13:53:13 crc kubenswrapper[4580]: I0112 13:53:13.821798 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rcls\" (UniqueName: \"kubernetes.io/projected/587aaece-0b29-4a79-b556-4499557894fa-kube-api-access-4rcls\") pod \"587aaece-0b29-4a79-b556-4499557894fa\" (UID: \"587aaece-0b29-4a79-b556-4499557894fa\") " Jan 12 13:53:13 crc kubenswrapper[4580]: I0112 13:53:13.822224 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/587aaece-0b29-4a79-b556-4499557894fa-utilities\") pod \"587aaece-0b29-4a79-b556-4499557894fa\" (UID: \"587aaece-0b29-4a79-b556-4499557894fa\") " Jan 12 13:53:13 crc kubenswrapper[4580]: I0112 13:53:13.822304 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/587aaece-0b29-4a79-b556-4499557894fa-catalog-content\") pod \"587aaece-0b29-4a79-b556-4499557894fa\" (UID: \"587aaece-0b29-4a79-b556-4499557894fa\") " Jan 12 13:53:13 crc kubenswrapper[4580]: I0112 13:53:13.822886 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/587aaece-0b29-4a79-b556-4499557894fa-utilities" (OuterVolumeSpecName: "utilities") pod "587aaece-0b29-4a79-b556-4499557894fa" (UID: 
"587aaece-0b29-4a79-b556-4499557894fa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 12 13:53:13 crc kubenswrapper[4580]: I0112 13:53:13.828185 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/587aaece-0b29-4a79-b556-4499557894fa-kube-api-access-4rcls" (OuterVolumeSpecName: "kube-api-access-4rcls") pod "587aaece-0b29-4a79-b556-4499557894fa" (UID: "587aaece-0b29-4a79-b556-4499557894fa"). InnerVolumeSpecName "kube-api-access-4rcls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:53:13 crc kubenswrapper[4580]: I0112 13:53:13.923270 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/587aaece-0b29-4a79-b556-4499557894fa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "587aaece-0b29-4a79-b556-4499557894fa" (UID: "587aaece-0b29-4a79-b556-4499557894fa"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 12 13:53:13 crc kubenswrapper[4580]: I0112 13:53:13.924594 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/587aaece-0b29-4a79-b556-4499557894fa-catalog-content\") pod \"587aaece-0b29-4a79-b556-4499557894fa\" (UID: \"587aaece-0b29-4a79-b556-4499557894fa\") " Jan 12 13:53:13 crc kubenswrapper[4580]: W0112 13:53:13.924738 4580 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/587aaece-0b29-4a79-b556-4499557894fa/volumes/kubernetes.io~empty-dir/catalog-content Jan 12 13:53:13 crc kubenswrapper[4580]: I0112 13:53:13.924772 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/587aaece-0b29-4a79-b556-4499557894fa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "587aaece-0b29-4a79-b556-4499557894fa" (UID: "587aaece-0b29-4a79-b556-4499557894fa"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 12 13:53:13 crc kubenswrapper[4580]: I0112 13:53:13.925770 4580 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/587aaece-0b29-4a79-b556-4499557894fa-utilities\") on node \"crc\" DevicePath \"\"" Jan 12 13:53:13 crc kubenswrapper[4580]: I0112 13:53:13.925798 4580 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/587aaece-0b29-4a79-b556-4499557894fa-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 12 13:53:13 crc kubenswrapper[4580]: I0112 13:53:13.925816 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rcls\" (UniqueName: \"kubernetes.io/projected/587aaece-0b29-4a79-b556-4499557894fa-kube-api-access-4rcls\") on node \"crc\" DevicePath \"\"" Jan 12 13:53:14 crc kubenswrapper[4580]: I0112 13:53:14.318366 4580 generic.go:334] "Generic (PLEG): container finished" podID="587aaece-0b29-4a79-b556-4499557894fa" containerID="a57eaf1b660be7613028711a5d89e0c0b271cce3b313283d8be2ca0a3f00f061" exitCode=0 Jan 12 13:53:14 crc kubenswrapper[4580]: I0112 13:53:14.318437 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7ck5k" event={"ID":"587aaece-0b29-4a79-b556-4499557894fa","Type":"ContainerDied","Data":"a57eaf1b660be7613028711a5d89e0c0b271cce3b313283d8be2ca0a3f00f061"} Jan 12 13:53:14 crc kubenswrapper[4580]: I0112 13:53:14.318477 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7ck5k" Jan 12 13:53:14 crc kubenswrapper[4580]: I0112 13:53:14.318506 4580 scope.go:117] "RemoveContainer" containerID="a57eaf1b660be7613028711a5d89e0c0b271cce3b313283d8be2ca0a3f00f061" Jan 12 13:53:14 crc kubenswrapper[4580]: I0112 13:53:14.318487 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7ck5k" event={"ID":"587aaece-0b29-4a79-b556-4499557894fa","Type":"ContainerDied","Data":"c153f0d2f8a796b2e61352dabbac2f8bf0be8b71b78576fb1d01f9f622f3fde1"} Jan 12 13:53:14 crc kubenswrapper[4580]: I0112 13:53:14.343037 4580 scope.go:117] "RemoveContainer" containerID="2bc39702c7b27a6643090db4988c68ec8ab8511909434eb8364e81cad29d7a40" Jan 12 13:53:14 crc kubenswrapper[4580]: I0112 13:53:14.351138 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7ck5k"] Jan 12 13:53:14 crc kubenswrapper[4580]: I0112 13:53:14.359474 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7ck5k"] Jan 12 13:53:14 crc kubenswrapper[4580]: I0112 13:53:14.380073 4580 scope.go:117] "RemoveContainer" containerID="0e55eb8de340920060e16a96105ae8ae8fa83fc1892ce882306c65d4e34670d9" Jan 12 13:53:14 crc kubenswrapper[4580]: I0112 13:53:14.396828 4580 scope.go:117] "RemoveContainer" containerID="a57eaf1b660be7613028711a5d89e0c0b271cce3b313283d8be2ca0a3f00f061" Jan 12 13:53:14 crc kubenswrapper[4580]: E0112 13:53:14.397265 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a57eaf1b660be7613028711a5d89e0c0b271cce3b313283d8be2ca0a3f00f061\": container with ID starting with a57eaf1b660be7613028711a5d89e0c0b271cce3b313283d8be2ca0a3f00f061 not found: ID does not exist" containerID="a57eaf1b660be7613028711a5d89e0c0b271cce3b313283d8be2ca0a3f00f061" Jan 12 13:53:14 crc kubenswrapper[4580]: I0112 13:53:14.397309 4580 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a57eaf1b660be7613028711a5d89e0c0b271cce3b313283d8be2ca0a3f00f061"} err="failed to get container status \"a57eaf1b660be7613028711a5d89e0c0b271cce3b313283d8be2ca0a3f00f061\": rpc error: code = NotFound desc = could not find container \"a57eaf1b660be7613028711a5d89e0c0b271cce3b313283d8be2ca0a3f00f061\": container with ID starting with a57eaf1b660be7613028711a5d89e0c0b271cce3b313283d8be2ca0a3f00f061 not found: ID does not exist" Jan 12 13:53:14 crc kubenswrapper[4580]: I0112 13:53:14.397336 4580 scope.go:117] "RemoveContainer" containerID="2bc39702c7b27a6643090db4988c68ec8ab8511909434eb8364e81cad29d7a40" Jan 12 13:53:14 crc kubenswrapper[4580]: E0112 13:53:14.397643 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2bc39702c7b27a6643090db4988c68ec8ab8511909434eb8364e81cad29d7a40\": container with ID starting with 2bc39702c7b27a6643090db4988c68ec8ab8511909434eb8364e81cad29d7a40 not found: ID does not exist" containerID="2bc39702c7b27a6643090db4988c68ec8ab8511909434eb8364e81cad29d7a40" Jan 12 13:53:14 crc kubenswrapper[4580]: I0112 13:53:14.397682 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bc39702c7b27a6643090db4988c68ec8ab8511909434eb8364e81cad29d7a40"} err="failed to get container status \"2bc39702c7b27a6643090db4988c68ec8ab8511909434eb8364e81cad29d7a40\": rpc error: code = NotFound desc = could not find container \"2bc39702c7b27a6643090db4988c68ec8ab8511909434eb8364e81cad29d7a40\": container with ID starting with 2bc39702c7b27a6643090db4988c68ec8ab8511909434eb8364e81cad29d7a40 not found: ID does not exist" Jan 12 13:53:14 crc kubenswrapper[4580]: I0112 13:53:14.397699 4580 scope.go:117] "RemoveContainer" containerID="0e55eb8de340920060e16a96105ae8ae8fa83fc1892ce882306c65d4e34670d9" Jan 12 13:53:14 crc kubenswrapper[4580]: E0112 
13:53:14.397980 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e55eb8de340920060e16a96105ae8ae8fa83fc1892ce882306c65d4e34670d9\": container with ID starting with 0e55eb8de340920060e16a96105ae8ae8fa83fc1892ce882306c65d4e34670d9 not found: ID does not exist" containerID="0e55eb8de340920060e16a96105ae8ae8fa83fc1892ce882306c65d4e34670d9" Jan 12 13:53:14 crc kubenswrapper[4580]: I0112 13:53:14.398014 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e55eb8de340920060e16a96105ae8ae8fa83fc1892ce882306c65d4e34670d9"} err="failed to get container status \"0e55eb8de340920060e16a96105ae8ae8fa83fc1892ce882306c65d4e34670d9\": rpc error: code = NotFound desc = could not find container \"0e55eb8de340920060e16a96105ae8ae8fa83fc1892ce882306c65d4e34670d9\": container with ID starting with 0e55eb8de340920060e16a96105ae8ae8fa83fc1892ce882306c65d4e34670d9 not found: ID does not exist" Jan 12 13:53:15 crc kubenswrapper[4580]: I0112 13:53:15.292187 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="587aaece-0b29-4a79-b556-4499557894fa" path="/var/lib/kubelet/pods/587aaece-0b29-4a79-b556-4499557894fa/volumes" Jan 12 13:53:30 crc kubenswrapper[4580]: I0112 13:53:30.075335 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-bjzgj/must-gather-7w9ft"] Jan 12 13:53:30 crc kubenswrapper[4580]: E0112 13:53:30.076258 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="587aaece-0b29-4a79-b556-4499557894fa" containerName="registry-server" Jan 12 13:53:30 crc kubenswrapper[4580]: I0112 13:53:30.076272 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="587aaece-0b29-4a79-b556-4499557894fa" containerName="registry-server" Jan 12 13:53:30 crc kubenswrapper[4580]: E0112 13:53:30.076285 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="587aaece-0b29-4a79-b556-4499557894fa" 
containerName="extract-content" Jan 12 13:53:30 crc kubenswrapper[4580]: I0112 13:53:30.076290 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="587aaece-0b29-4a79-b556-4499557894fa" containerName="extract-content" Jan 12 13:53:30 crc kubenswrapper[4580]: E0112 13:53:30.076306 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="587aaece-0b29-4a79-b556-4499557894fa" containerName="extract-utilities" Jan 12 13:53:30 crc kubenswrapper[4580]: I0112 13:53:30.076313 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="587aaece-0b29-4a79-b556-4499557894fa" containerName="extract-utilities" Jan 12 13:53:30 crc kubenswrapper[4580]: I0112 13:53:30.076484 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="587aaece-0b29-4a79-b556-4499557894fa" containerName="registry-server" Jan 12 13:53:30 crc kubenswrapper[4580]: I0112 13:53:30.077400 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bjzgj/must-gather-7w9ft" Jan 12 13:53:30 crc kubenswrapper[4580]: I0112 13:53:30.079635 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-bjzgj"/"openshift-service-ca.crt" Jan 12 13:53:30 crc kubenswrapper[4580]: I0112 13:53:30.081807 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-bjzgj"/"kube-root-ca.crt" Jan 12 13:53:30 crc kubenswrapper[4580]: I0112 13:53:30.089633 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-bjzgj/must-gather-7w9ft"] Jan 12 13:53:30 crc kubenswrapper[4580]: I0112 13:53:30.096560 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8ef51343-57ca-4206-a6c7-e1860f15b3d7-must-gather-output\") pod \"must-gather-7w9ft\" (UID: \"8ef51343-57ca-4206-a6c7-e1860f15b3d7\") " pod="openshift-must-gather-bjzgj/must-gather-7w9ft" Jan 12 13:53:30 crc 
kubenswrapper[4580]: I0112 13:53:30.097839 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xplq2\" (UniqueName: \"kubernetes.io/projected/8ef51343-57ca-4206-a6c7-e1860f15b3d7-kube-api-access-xplq2\") pod \"must-gather-7w9ft\" (UID: \"8ef51343-57ca-4206-a6c7-e1860f15b3d7\") " pod="openshift-must-gather-bjzgj/must-gather-7w9ft" Jan 12 13:53:30 crc kubenswrapper[4580]: I0112 13:53:30.199353 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xplq2\" (UniqueName: \"kubernetes.io/projected/8ef51343-57ca-4206-a6c7-e1860f15b3d7-kube-api-access-xplq2\") pod \"must-gather-7w9ft\" (UID: \"8ef51343-57ca-4206-a6c7-e1860f15b3d7\") " pod="openshift-must-gather-bjzgj/must-gather-7w9ft" Jan 12 13:53:30 crc kubenswrapper[4580]: I0112 13:53:30.199437 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8ef51343-57ca-4206-a6c7-e1860f15b3d7-must-gather-output\") pod \"must-gather-7w9ft\" (UID: \"8ef51343-57ca-4206-a6c7-e1860f15b3d7\") " pod="openshift-must-gather-bjzgj/must-gather-7w9ft" Jan 12 13:53:30 crc kubenswrapper[4580]: I0112 13:53:30.199944 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8ef51343-57ca-4206-a6c7-e1860f15b3d7-must-gather-output\") pod \"must-gather-7w9ft\" (UID: \"8ef51343-57ca-4206-a6c7-e1860f15b3d7\") " pod="openshift-must-gather-bjzgj/must-gather-7w9ft" Jan 12 13:53:30 crc kubenswrapper[4580]: I0112 13:53:30.215274 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xplq2\" (UniqueName: \"kubernetes.io/projected/8ef51343-57ca-4206-a6c7-e1860f15b3d7-kube-api-access-xplq2\") pod \"must-gather-7w9ft\" (UID: \"8ef51343-57ca-4206-a6c7-e1860f15b3d7\") " pod="openshift-must-gather-bjzgj/must-gather-7w9ft" Jan 12 13:53:30 crc 
kubenswrapper[4580]: I0112 13:53:30.398027 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bjzgj/must-gather-7w9ft" Jan 12 13:53:30 crc kubenswrapper[4580]: I0112 13:53:30.810405 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-bjzgj/must-gather-7w9ft"] Jan 12 13:53:31 crc kubenswrapper[4580]: I0112 13:53:31.471441 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bjzgj/must-gather-7w9ft" event={"ID":"8ef51343-57ca-4206-a6c7-e1860f15b3d7","Type":"ContainerStarted","Data":"e341e118af21514b08c5563489ebc60e6d2972b334779b44ecb98b5591a97ac8"} Jan 12 13:53:39 crc kubenswrapper[4580]: I0112 13:53:39.579790 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bjzgj/must-gather-7w9ft" event={"ID":"8ef51343-57ca-4206-a6c7-e1860f15b3d7","Type":"ContainerStarted","Data":"d78481b500f013a2c518bb524d60fafed6a13e14618ce7a7724faccabf7a73b5"} Jan 12 13:53:39 crc kubenswrapper[4580]: I0112 13:53:39.580646 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bjzgj/must-gather-7w9ft" event={"ID":"8ef51343-57ca-4206-a6c7-e1860f15b3d7","Type":"ContainerStarted","Data":"506b854c02c513383c2426d79ceb448fca98ba90d9dd4a69b2e78d3caa847864"} Jan 12 13:53:39 crc kubenswrapper[4580]: I0112 13:53:39.600725 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-bjzgj/must-gather-7w9ft" podStartSLOduration=1.378038288 podStartE2EDuration="9.600700195s" podCreationTimestamp="2026-01-12 13:53:30 +0000 UTC" firstStartedPulling="2026-01-12 13:53:30.821536681 +0000 UTC m=+2809.865755371" lastFinishedPulling="2026-01-12 13:53:39.044198588 +0000 UTC m=+2818.088417278" observedRunningTime="2026-01-12 13:53:39.59544816 +0000 UTC m=+2818.639666850" watchObservedRunningTime="2026-01-12 13:53:39.600700195 +0000 UTC m=+2818.644918885" Jan 12 13:53:41 crc kubenswrapper[4580]: E0112 
13:53:41.169361 4580 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 192.168.25.161:50416->192.168.25.161:44257: read tcp 192.168.25.161:50416->192.168.25.161:44257: read: connection reset by peer Jan 12 13:53:42 crc kubenswrapper[4580]: I0112 13:53:42.333938 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-bjzgj/crc-debug-9hfss"] Jan 12 13:53:42 crc kubenswrapper[4580]: I0112 13:53:42.335588 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bjzgj/crc-debug-9hfss" Jan 12 13:53:42 crc kubenswrapper[4580]: I0112 13:53:42.337827 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-bjzgj"/"default-dockercfg-rlgv8" Jan 12 13:53:42 crc kubenswrapper[4580]: I0112 13:53:42.466208 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ba5933f1-e20e-4928-8111-358afaa3636c-host\") pod \"crc-debug-9hfss\" (UID: \"ba5933f1-e20e-4928-8111-358afaa3636c\") " pod="openshift-must-gather-bjzgj/crc-debug-9hfss" Jan 12 13:53:42 crc kubenswrapper[4580]: I0112 13:53:42.466654 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8llf\" (UniqueName: \"kubernetes.io/projected/ba5933f1-e20e-4928-8111-358afaa3636c-kube-api-access-q8llf\") pod \"crc-debug-9hfss\" (UID: \"ba5933f1-e20e-4928-8111-358afaa3636c\") " pod="openshift-must-gather-bjzgj/crc-debug-9hfss" Jan 12 13:53:42 crc kubenswrapper[4580]: I0112 13:53:42.568449 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8llf\" (UniqueName: \"kubernetes.io/projected/ba5933f1-e20e-4928-8111-358afaa3636c-kube-api-access-q8llf\") pod \"crc-debug-9hfss\" (UID: \"ba5933f1-e20e-4928-8111-358afaa3636c\") " pod="openshift-must-gather-bjzgj/crc-debug-9hfss" Jan 12 13:53:42 crc kubenswrapper[4580]: 
I0112 13:53:42.568834 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ba5933f1-e20e-4928-8111-358afaa3636c-host\") pod \"crc-debug-9hfss\" (UID: \"ba5933f1-e20e-4928-8111-358afaa3636c\") " pod="openshift-must-gather-bjzgj/crc-debug-9hfss" Jan 12 13:53:42 crc kubenswrapper[4580]: I0112 13:53:42.568976 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ba5933f1-e20e-4928-8111-358afaa3636c-host\") pod \"crc-debug-9hfss\" (UID: \"ba5933f1-e20e-4928-8111-358afaa3636c\") " pod="openshift-must-gather-bjzgj/crc-debug-9hfss" Jan 12 13:53:42 crc kubenswrapper[4580]: I0112 13:53:42.587941 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8llf\" (UniqueName: \"kubernetes.io/projected/ba5933f1-e20e-4928-8111-358afaa3636c-kube-api-access-q8llf\") pod \"crc-debug-9hfss\" (UID: \"ba5933f1-e20e-4928-8111-358afaa3636c\") " pod="openshift-must-gather-bjzgj/crc-debug-9hfss" Jan 12 13:53:42 crc kubenswrapper[4580]: I0112 13:53:42.650957 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bjzgj/crc-debug-9hfss" Jan 12 13:53:42 crc kubenswrapper[4580]: W0112 13:53:42.676084 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba5933f1_e20e_4928_8111_358afaa3636c.slice/crio-50c3ce8043de763e93a3df8e77efcf02bc34538a2aabf351cac3b39a291280bf WatchSource:0}: Error finding container 50c3ce8043de763e93a3df8e77efcf02bc34538a2aabf351cac3b39a291280bf: Status 404 returned error can't find the container with id 50c3ce8043de763e93a3df8e77efcf02bc34538a2aabf351cac3b39a291280bf Jan 12 13:53:43 crc kubenswrapper[4580]: I0112 13:53:43.611000 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bjzgj/crc-debug-9hfss" event={"ID":"ba5933f1-e20e-4928-8111-358afaa3636c","Type":"ContainerStarted","Data":"50c3ce8043de763e93a3df8e77efcf02bc34538a2aabf351cac3b39a291280bf"} Jan 12 13:53:44 crc kubenswrapper[4580]: I0112 13:53:44.636451 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-75699d8f8b-jqxcw_722ce4c4-5517-412c-b3c4-aafc83db85dc/barbican-api-log/0.log" Jan 12 13:53:44 crc kubenswrapper[4580]: I0112 13:53:44.644335 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-75699d8f8b-jqxcw_722ce4c4-5517-412c-b3c4-aafc83db85dc/barbican-api/0.log" Jan 12 13:53:44 crc kubenswrapper[4580]: I0112 13:53:44.675093 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5bfdbc7dc6-bk5g5_1b072324-9c35-458a-8d4a-1759b9ed2883/barbican-keystone-listener-log/0.log" Jan 12 13:53:44 crc kubenswrapper[4580]: I0112 13:53:44.680646 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5bfdbc7dc6-bk5g5_1b072324-9c35-458a-8d4a-1759b9ed2883/barbican-keystone-listener/0.log" Jan 12 13:53:44 crc kubenswrapper[4580]: I0112 13:53:44.695302 4580 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_barbican-worker-655fc5cf45-jcmxp_eed9373c-ecc9-4510-bb6b-9171b70a9088/barbican-worker-log/0.log" Jan 12 13:53:44 crc kubenswrapper[4580]: I0112 13:53:44.700205 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-655fc5cf45-jcmxp_eed9373c-ecc9-4510-bb6b-9171b70a9088/barbican-worker/0.log" Jan 12 13:53:44 crc kubenswrapper[4580]: I0112 13:53:44.735825 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-pjj7j_2a4039fd-f1bf-4fdd-881a-192b4b4c8a35/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 12 13:53:44 crc kubenswrapper[4580]: I0112 13:53:44.761390 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_d8f9e6c7-d6cd-496f-b009-6bb336d25ebe/ceilometer-central-agent/0.log" Jan 12 13:53:44 crc kubenswrapper[4580]: I0112 13:53:44.787666 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_d8f9e6c7-d6cd-496f-b009-6bb336d25ebe/ceilometer-notification-agent/0.log" Jan 12 13:53:44 crc kubenswrapper[4580]: I0112 13:53:44.793034 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_d8f9e6c7-d6cd-496f-b009-6bb336d25ebe/sg-core/0.log" Jan 12 13:53:44 crc kubenswrapper[4580]: I0112 13:53:44.801399 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_d8f9e6c7-d6cd-496f-b009-6bb336d25ebe/proxy-httpd/0.log" Jan 12 13:53:44 crc kubenswrapper[4580]: I0112 13:53:44.820750 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_977708c4-8759-44d1-8d90-6226077e8044/cinder-api-log/0.log" Jan 12 13:53:44 crc kubenswrapper[4580]: I0112 13:53:44.856220 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_977708c4-8759-44d1-8d90-6226077e8044/cinder-api/0.log" Jan 12 13:53:44 crc kubenswrapper[4580]: I0112 13:53:44.887036 4580 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_0f0d4cc9-9655-43d1-b588-ae5326765c36/cinder-scheduler/0.log" Jan 12 13:53:44 crc kubenswrapper[4580]: I0112 13:53:44.927852 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_0f0d4cc9-9655-43d1-b588-ae5326765c36/probe/0.log" Jan 12 13:53:44 crc kubenswrapper[4580]: I0112 13:53:44.954906 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-5hkmr_b647e7dc-a5cd-4e7e-a5fe-744a53b4c3e9/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 12 13:53:44 crc kubenswrapper[4580]: I0112 13:53:44.979366 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-c6jhw_bd995c62-9850-41cf-91c1-aa47ac294147/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 12 13:53:45 crc kubenswrapper[4580]: I0112 13:53:45.021667 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-d7b79b84c-xlkwf_5eee677a-4caa-4107-a64f-cee518dfed89/dnsmasq-dns/0.log" Jan 12 13:53:45 crc kubenswrapper[4580]: I0112 13:53:45.027153 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-d7b79b84c-xlkwf_5eee677a-4caa-4107-a64f-cee518dfed89/init/0.log" Jan 12 13:53:45 crc kubenswrapper[4580]: I0112 13:53:45.048761 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-8krps_3dce5050-a090-4782-a068-efafd359455a/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 12 13:53:45 crc kubenswrapper[4580]: I0112 13:53:45.062304 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_3e7614df-e73b-47f5-b7f0-d942ea24c4f0/glance-log/0.log" Jan 12 13:53:45 crc kubenswrapper[4580]: I0112 13:53:45.078666 4580 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-external-api-0_3e7614df-e73b-47f5-b7f0-d942ea24c4f0/glance-httpd/0.log" Jan 12 13:53:45 crc kubenswrapper[4580]: I0112 13:53:45.088440 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_026b9966-ae00-4f6a-be8d-bb1d9fffbef3/glance-log/0.log" Jan 12 13:53:45 crc kubenswrapper[4580]: I0112 13:53:45.108580 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_026b9966-ae00-4f6a-be8d-bb1d9fffbef3/glance-httpd/0.log" Jan 12 13:53:45 crc kubenswrapper[4580]: I0112 13:53:45.452822 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-8699b457dd-z2fkt_92d059e4-ff2b-4ecc-ae14-6367d54e720f/horizon-log/0.log" Jan 12 13:53:45 crc kubenswrapper[4580]: I0112 13:53:45.544376 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-8699b457dd-z2fkt_92d059e4-ff2b-4ecc-ae14-6367d54e720f/horizon/0.log" Jan 12 13:53:45 crc kubenswrapper[4580]: I0112 13:53:45.569883 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4_bf2989c5-6b0d-458d-98c5-7849febf7787/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 12 13:53:45 crc kubenswrapper[4580]: I0112 13:53:45.588452 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-6cfvt_340ac203-3af7-4abd-b75c-bf97009c24e9/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 12 13:53:45 crc kubenswrapper[4580]: I0112 13:53:45.734795 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-696c64b546-cw888_9fad586a-c41d-44da-8144-75dcb27fe7e9/keystone-api/0.log" Jan 12 13:53:45 crc kubenswrapper[4580]: I0112 13:53:45.746134 4580 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_kube-state-metrics-0_4651196b-71ee-434b-bb63-e77f16c744e4/kube-state-metrics/0.log" Jan 12 13:53:45 crc kubenswrapper[4580]: I0112 13:53:45.777194 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-krb6r_7537a508-8a6d-43df-8d76-a845464edfa9/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 12 13:53:54 crc kubenswrapper[4580]: I0112 13:53:54.715713 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bjzgj/crc-debug-9hfss" event={"ID":"ba5933f1-e20e-4928-8111-358afaa3636c","Type":"ContainerStarted","Data":"44742b7896610a83db33b3d710ecb6d8aa9f95540e4475a07845eec6544baf24"} Jan 12 13:53:54 crc kubenswrapper[4580]: I0112 13:53:54.738509 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-bjzgj/crc-debug-9hfss" podStartSLOduration=1.277883336 podStartE2EDuration="12.738488851s" podCreationTimestamp="2026-01-12 13:53:42 +0000 UTC" firstStartedPulling="2026-01-12 13:53:42.678001663 +0000 UTC m=+2821.722220353" lastFinishedPulling="2026-01-12 13:53:54.138607178 +0000 UTC m=+2833.182825868" observedRunningTime="2026-01-12 13:53:54.730068679 +0000 UTC m=+2833.774287370" watchObservedRunningTime="2026-01-12 13:53:54.738488851 +0000 UTC m=+2833.782707542" Jan 12 13:54:00 crc kubenswrapper[4580]: I0112 13:54:00.054221 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_0c2b68c0-cf75-4b38-b7f5-c58b9f52e818/memcached/0.log" Jan 12 13:54:00 crc kubenswrapper[4580]: I0112 13:54:00.134161 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5c66d9fb7c-tgbgc_d56cc382-ea8e-4cea-829a-80335a2b71c9/neutron-api/0.log" Jan 12 13:54:00 crc kubenswrapper[4580]: I0112 13:54:00.183379 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5c66d9fb7c-tgbgc_d56cc382-ea8e-4cea-829a-80335a2b71c9/neutron-httpd/0.log" Jan 12 13:54:00 
crc kubenswrapper[4580]: I0112 13:54:00.205960 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-sds6t_6824de1f-1f07-45a9-b65d-6d1aadc863db/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 12 13:54:00 crc kubenswrapper[4580]: I0112 13:54:00.389276 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_af33dae1-afd6-4b08-a507-64373650c025/nova-api-log/0.log" Jan 12 13:54:00 crc kubenswrapper[4580]: I0112 13:54:00.632209 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_af33dae1-afd6-4b08-a507-64373650c025/nova-api-api/0.log" Jan 12 13:54:00 crc kubenswrapper[4580]: I0112 13:54:00.716798 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_621c9246-ea68-42b4-b799-961af70ca4f5/nova-cell0-conductor-conductor/0.log" Jan 12 13:54:00 crc kubenswrapper[4580]: I0112 13:54:00.790611 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_5473becf-161f-49fe-86c0-079d4a9d80dc/nova-cell1-conductor-conductor/0.log" Jan 12 13:54:00 crc kubenswrapper[4580]: I0112 13:54:00.839091 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_8250a29f-e3f4-4a06-bda0-1bcd2cb9bc9f/nova-cell1-novncproxy-novncproxy/0.log" Jan 12 13:54:00 crc kubenswrapper[4580]: I0112 13:54:00.901883 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-pdcpd_2b14f1aa-0c20-4db8-9a42-8abf7baf0140/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 12 13:54:00 crc kubenswrapper[4580]: I0112 13:54:00.975540 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_35b40e0a-79b7-4ca2-8aa2-f6de40c60088/nova-metadata-log/0.log" Jan 12 13:54:01 crc kubenswrapper[4580]: I0112 13:54:01.510950 4580 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-metadata-0_35b40e0a-79b7-4ca2-8aa2-f6de40c60088/nova-metadata-metadata/0.log" Jan 12 13:54:01 crc kubenswrapper[4580]: I0112 13:54:01.608409 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_2a896fc4-1b0f-4186-a168-437fd8a099ea/nova-scheduler-scheduler/0.log" Jan 12 13:54:01 crc kubenswrapper[4580]: I0112 13:54:01.629429 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_29452d40-93df-4c9f-9d79-70fbf3907de1/galera/0.log" Jan 12 13:54:01 crc kubenswrapper[4580]: I0112 13:54:01.639256 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_29452d40-93df-4c9f-9d79-70fbf3907de1/mysql-bootstrap/0.log" Jan 12 13:54:01 crc kubenswrapper[4580]: I0112 13:54:01.655916 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_2ceae97e-0cf6-4019-90ba-931df3f6dbed/galera/0.log" Jan 12 13:54:01 crc kubenswrapper[4580]: I0112 13:54:01.667152 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_2ceae97e-0cf6-4019-90ba-931df3f6dbed/mysql-bootstrap/0.log" Jan 12 13:54:01 crc kubenswrapper[4580]: I0112 13:54:01.673275 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_d04e360b-50ce-4cb6-9168-b7592de2d83e/openstackclient/0.log" Jan 12 13:54:01 crc kubenswrapper[4580]: I0112 13:54:01.683000 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-24mw4_8791a7b2-1c8a-4551-94d2-379d8a7aa153/openstack-network-exporter/0.log" Jan 12 13:54:01 crc kubenswrapper[4580]: I0112 13:54:01.693930 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-66wld_b20197ec-909c-4343-a0ed-e99b88ea6f83/ovsdb-server/0.log" Jan 12 13:54:01 crc kubenswrapper[4580]: I0112 13:54:01.705228 4580 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-ovs-66wld_b20197ec-909c-4343-a0ed-e99b88ea6f83/ovs-vswitchd/0.log" Jan 12 13:54:01 crc kubenswrapper[4580]: I0112 13:54:01.714161 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-66wld_b20197ec-909c-4343-a0ed-e99b88ea6f83/ovsdb-server-init/0.log" Jan 12 13:54:01 crc kubenswrapper[4580]: I0112 13:54:01.737486 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-tbpzb_29dabf99-ffd5-4d31-b9e5-b10e192f239d/ovn-controller/0.log" Jan 12 13:54:01 crc kubenswrapper[4580]: I0112 13:54:01.769011 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-74485_06c2f69b-a49e-42fb-9532-837b04bdff07/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 12 13:54:01 crc kubenswrapper[4580]: I0112 13:54:01.777188 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_11cab738-4c7c-4949-9a8c-50b8c1bca314/ovn-northd/0.log" Jan 12 13:54:01 crc kubenswrapper[4580]: I0112 13:54:01.782240 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_11cab738-4c7c-4949-9a8c-50b8c1bca314/openstack-network-exporter/0.log" Jan 12 13:54:01 crc kubenswrapper[4580]: I0112 13:54:01.802029 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_396e4fc0-cb2e-4543-b1ae-d61eec6a365a/ovsdbserver-nb/0.log" Jan 12 13:54:01 crc kubenswrapper[4580]: I0112 13:54:01.806891 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_396e4fc0-cb2e-4543-b1ae-d61eec6a365a/openstack-network-exporter/0.log" Jan 12 13:54:01 crc kubenswrapper[4580]: I0112 13:54:01.821757 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_a6c58bf4-8891-45e6-9be6-a3176eefbc14/ovsdbserver-sb/0.log" Jan 12 13:54:01 crc kubenswrapper[4580]: I0112 13:54:01.825050 4580 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_ovsdbserver-sb-0_a6c58bf4-8891-45e6-9be6-a3176eefbc14/openstack-network-exporter/0.log" Jan 12 13:54:01 crc kubenswrapper[4580]: I0112 13:54:01.884031 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7ffb74c678-h5ddl_89dcb711-9d18-46e9-9f17-280f0f4c0e1a/placement-log/0.log" Jan 12 13:54:01 crc kubenswrapper[4580]: I0112 13:54:01.925234 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7ffb74c678-h5ddl_89dcb711-9d18-46e9-9f17-280f0f4c0e1a/placement-api/0.log" Jan 12 13:54:01 crc kubenswrapper[4580]: I0112 13:54:01.944110 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_4c7dd413-5eac-4da3-ba06-0917a412956d/rabbitmq/0.log" Jan 12 13:54:01 crc kubenswrapper[4580]: I0112 13:54:01.948177 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_4c7dd413-5eac-4da3-ba06-0917a412956d/setup-container/0.log" Jan 12 13:54:01 crc kubenswrapper[4580]: I0112 13:54:01.970300 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_45d6b817-38ed-4b91-b375-d0b358eaab0b/rabbitmq/0.log" Jan 12 13:54:01 crc kubenswrapper[4580]: I0112 13:54:01.974194 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_45d6b817-38ed-4b91-b375-d0b358eaab0b/setup-container/0.log" Jan 12 13:54:01 crc kubenswrapper[4580]: I0112 13:54:01.989942 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-v7cx7_2e6e07b6-c923-4d65-8bda-8fb27915bb72/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 12 13:54:01 crc kubenswrapper[4580]: I0112 13:54:01.997653 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-xkg7q_007f6af6-a125-443f-a2ff-1b1322aefca5/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 12 13:54:02 crc 
kubenswrapper[4580]: I0112 13:54:02.007044 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-zccg8_16ecca65-5485-450e-8a2b-06f5e3558fc6/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 12 13:54:02 crc kubenswrapper[4580]: I0112 13:54:02.017746 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-nbqdl_0434e0b6-16ec-4821-b1b0-c823fc51a965/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 12 13:54:02 crc kubenswrapper[4580]: I0112 13:54:02.029349 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-9cvjw_212e2cae-eea9-4f9c-a1f0-87708f00ab9a/ssh-known-hosts-edpm-deployment/0.log" Jan 12 13:54:02 crc kubenswrapper[4580]: I0112 13:54:02.096590 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6f676d8c57-qq454_8d448ad1-2ef8-48cd-8e3c-3e81e82da286/proxy-httpd/0.log" Jan 12 13:54:02 crc kubenswrapper[4580]: I0112 13:54:02.110125 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6f676d8c57-qq454_8d448ad1-2ef8-48cd-8e3c-3e81e82da286/proxy-server/0.log" Jan 12 13:54:02 crc kubenswrapper[4580]: I0112 13:54:02.117714 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-cplpv_91a2e8de-56e6-41e5-a8fa-a576e8970ebd/swift-ring-rebalance/0.log" Jan 12 13:54:02 crc kubenswrapper[4580]: I0112 13:54:02.135345 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_fb14d02e-b9af-4072-a2bd-2c2763d29755/account-server/0.log" Jan 12 13:54:02 crc kubenswrapper[4580]: I0112 13:54:02.153170 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_fb14d02e-b9af-4072-a2bd-2c2763d29755/account-replicator/0.log" Jan 12 13:54:02 crc kubenswrapper[4580]: I0112 13:54:02.167913 4580 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_fb14d02e-b9af-4072-a2bd-2c2763d29755/account-auditor/0.log" Jan 12 13:54:02 crc kubenswrapper[4580]: I0112 13:54:02.179691 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_fb14d02e-b9af-4072-a2bd-2c2763d29755/account-reaper/0.log" Jan 12 13:54:02 crc kubenswrapper[4580]: I0112 13:54:02.186618 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_fb14d02e-b9af-4072-a2bd-2c2763d29755/container-server/0.log" Jan 12 13:54:02 crc kubenswrapper[4580]: I0112 13:54:02.202581 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_fb14d02e-b9af-4072-a2bd-2c2763d29755/container-replicator/0.log" Jan 12 13:54:02 crc kubenswrapper[4580]: I0112 13:54:02.206348 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_fb14d02e-b9af-4072-a2bd-2c2763d29755/container-auditor/0.log" Jan 12 13:54:02 crc kubenswrapper[4580]: I0112 13:54:02.213602 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_fb14d02e-b9af-4072-a2bd-2c2763d29755/container-updater/0.log" Jan 12 13:54:02 crc kubenswrapper[4580]: I0112 13:54:02.220092 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_fb14d02e-b9af-4072-a2bd-2c2763d29755/object-server/0.log" Jan 12 13:54:02 crc kubenswrapper[4580]: I0112 13:54:02.233892 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_fb14d02e-b9af-4072-a2bd-2c2763d29755/object-replicator/0.log" Jan 12 13:54:02 crc kubenswrapper[4580]: I0112 13:54:02.245495 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_fb14d02e-b9af-4072-a2bd-2c2763d29755/object-auditor/0.log" Jan 12 13:54:02 crc kubenswrapper[4580]: I0112 13:54:02.255938 4580 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_fb14d02e-b9af-4072-a2bd-2c2763d29755/object-updater/0.log" Jan 12 13:54:02 crc kubenswrapper[4580]: I0112 13:54:02.268402 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_fb14d02e-b9af-4072-a2bd-2c2763d29755/object-expirer/0.log" Jan 12 13:54:02 crc kubenswrapper[4580]: I0112 13:54:02.273330 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_fb14d02e-b9af-4072-a2bd-2c2763d29755/rsync/0.log" Jan 12 13:54:02 crc kubenswrapper[4580]: I0112 13:54:02.282023 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_fb14d02e-b9af-4072-a2bd-2c2763d29755/swift-recon-cron/0.log" Jan 12 13:54:02 crc kubenswrapper[4580]: I0112 13:54:02.346685 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-kzrnv_1e5c1e6d-1fc0-4199-ae0d-67c093f94192/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 12 13:54:02 crc kubenswrapper[4580]: I0112 13:54:02.368243 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_8e031ef3-1afa-438b-8f95-cd63e4d5eb5a/tempest-tests-tempest-tests-runner/0.log" Jan 12 13:54:02 crc kubenswrapper[4580]: I0112 13:54:02.373065 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_be1a4134-b582-42ee-b8d3-145911d7bdec/test-operator-logs-container/0.log" Jan 12 13:54:02 crc kubenswrapper[4580]: I0112 13:54:02.385787 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-5hwt7_553cda0e-1691-4748-8a47-d34d8600ea2e/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 12 13:54:05 crc kubenswrapper[4580]: I0112 13:54:05.754416 4580 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-6c697f55f8-69mz9_7ed21cbb-5825-4538-bfb6-74f895189d83/manager/0.log" Jan 12 13:54:05 crc kubenswrapper[4580]: I0112 13:54:05.793205 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-4b7c9_63a3c1f8-84b5-4648-9a74-bc1e980d5a57/manager/0.log" Jan 12 13:54:05 crc kubenswrapper[4580]: I0112 13:54:05.803464 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-plhvp_cbfff7ce-c184-4dee-94d5-c6ee41fc2b75/manager/0.log" Jan 12 13:54:05 crc kubenswrapper[4580]: I0112 13:54:05.812543 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e5432efd0c40cfd67b5b87e56150fca567dbd15dd757d120066b94ee44pwmns_d27f45e7-f85b-4b23-b849-8e1778cfe3df/extract/0.log" Jan 12 13:54:05 crc kubenswrapper[4580]: I0112 13:54:05.819638 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e5432efd0c40cfd67b5b87e56150fca567dbd15dd757d120066b94ee44pwmns_d27f45e7-f85b-4b23-b849-8e1778cfe3df/util/0.log" Jan 12 13:54:05 crc kubenswrapper[4580]: I0112 13:54:05.826529 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e5432efd0c40cfd67b5b87e56150fca567dbd15dd757d120066b94ee44pwmns_d27f45e7-f85b-4b23-b849-8e1778cfe3df/pull/0.log" Jan 12 13:54:05 crc kubenswrapper[4580]: I0112 13:54:05.905417 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-75b858dccc-nr2g4_b3716289-2aa2-4e39-b8db-7980564c976e/manager/0.log" Jan 12 13:54:05 crc kubenswrapper[4580]: I0112 13:54:05.912736 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-6cd7bcb4bf-nvbml_8cf46bb8-ed1f-491d-90e3-1ef5ebbdfb01/manager/0.log" Jan 12 13:54:05 crc kubenswrapper[4580]: I0112 13:54:05.938264 4580 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-75cb9467dc-r22fp_218c7ab4-85b0-4609-87e6-35d51283e5e0/manager/0.log" Jan 12 13:54:06 crc kubenswrapper[4580]: I0112 13:54:06.238030 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-2sg8z_1135f51b-1f4e-4866-bb7d-728be53f5be7/manager/0.log" Jan 12 13:54:06 crc kubenswrapper[4580]: I0112 13:54:06.252023 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-plnl2_0286f995-6c82-4417-8a67-91b5e261a211/manager/0.log" Jan 12 13:54:06 crc kubenswrapper[4580]: I0112 13:54:06.292277 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-rxqqw_54899288-3291-42c0-969e-f22dab071c51/controller/0.log" Jan 12 13:54:06 crc kubenswrapper[4580]: I0112 13:54:06.301257 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-rxqqw_54899288-3291-42c0-969e-f22dab071c51/kube-rbac-proxy/0.log" Jan 12 13:54:06 crc kubenswrapper[4580]: I0112 13:54:06.320452 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2n6tl_68e356d2-e51a-494b-a2cc-2c8491e4d8c8/controller/0.log" Jan 12 13:54:06 crc kubenswrapper[4580]: I0112 13:54:06.330468 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-fckr8_726a74db-a499-4c38-8258-b711bc0dc30b/manager/0.log" Jan 12 13:54:06 crc kubenswrapper[4580]: I0112 13:54:06.340250 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-6684f856f9-w2xhg_bf14de2d-3f35-4c32-905c-0a133a4fbafe/manager/0.log" Jan 12 13:54:06 crc kubenswrapper[4580]: I0112 13:54:06.384769 4580 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-8fxxj_87188751-ba97-4f25-ba2c-70514594cb4a/manager/0.log" Jan 12 13:54:06 crc kubenswrapper[4580]: I0112 13:54:06.455312 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-ckfhs_2d0f98f6-67ec-4253-a344-8aa185679126/manager/0.log" Jan 12 13:54:06 crc kubenswrapper[4580]: I0112 13:54:06.572524 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-5977959f9c-sgg8q_eb01c7cd-f8d5-414f-a9f1-cf75a7a6ac1b/manager/0.log" Jan 12 13:54:06 crc kubenswrapper[4580]: I0112 13:54:06.584816 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-nbzhm_7209eb4d-53dc-4c30-9b80-8863acbea5a6/manager/0.log" Jan 12 13:54:06 crc kubenswrapper[4580]: I0112 13:54:06.599578 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-654686dcb9z5ths_ccb61890-3cf7-45aa-974c-693f0d14c14a/manager/0.log" Jan 12 13:54:08 crc kubenswrapper[4580]: I0112 13:54:08.299381 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6659c7dc85-4p8jr_87237fc1-15cd-4dd9-bcfe-5a334d366896/manager/0.log" Jan 12 13:54:08 crc kubenswrapper[4580]: I0112 13:54:08.457137 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-5bf8b477cb-hwd8t_bca66d95-723d-4cd6-bc4c-1a0c564606f3/operator/0.log" Jan 12 13:54:08 crc kubenswrapper[4580]: I0112 13:54:08.466996 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2n6tl_68e356d2-e51a-494b-a2cc-2c8491e4d8c8/frr/0.log" Jan 12 13:54:08 crc kubenswrapper[4580]: I0112 13:54:08.476961 4580 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-2n6tl_68e356d2-e51a-494b-a2cc-2c8491e4d8c8/reloader/0.log" Jan 12 13:54:08 crc kubenswrapper[4580]: I0112 13:54:08.485436 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2n6tl_68e356d2-e51a-494b-a2cc-2c8491e4d8c8/frr-metrics/0.log" Jan 12 13:54:08 crc kubenswrapper[4580]: I0112 13:54:08.494671 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2n6tl_68e356d2-e51a-494b-a2cc-2c8491e4d8c8/kube-rbac-proxy/0.log" Jan 12 13:54:08 crc kubenswrapper[4580]: I0112 13:54:08.503297 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-248h5_c0f0a657-34a2-4619-b992-64ab017e6ecb/registry-server/0.log" Jan 12 13:54:08 crc kubenswrapper[4580]: I0112 13:54:08.505155 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2n6tl_68e356d2-e51a-494b-a2cc-2c8491e4d8c8/kube-rbac-proxy-frr/0.log" Jan 12 13:54:08 crc kubenswrapper[4580]: I0112 13:54:08.514685 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2n6tl_68e356d2-e51a-494b-a2cc-2c8491e4d8c8/cp-frr-files/0.log" Jan 12 13:54:08 crc kubenswrapper[4580]: I0112 13:54:08.524717 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2n6tl_68e356d2-e51a-494b-a2cc-2c8491e4d8c8/cp-reloader/0.log" Jan 12 13:54:08 crc kubenswrapper[4580]: I0112 13:54:08.531422 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2n6tl_68e356d2-e51a-494b-a2cc-2c8491e4d8c8/cp-metrics/0.log" Jan 12 13:54:08 crc kubenswrapper[4580]: I0112 13:54:08.540749 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7784b6fcf-pkj9q_83e9a47a-80d6-4d28-ae2d-da27e069932f/frr-k8s-webhook-server/0.log" Jan 12 13:54:08 crc kubenswrapper[4580]: I0112 13:54:08.554903 4580 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-cf664874d-vznwd_742c889f-d87d-4d61-82f8-2fa3ffc3d6b2/manager/0.log" Jan 12 13:54:08 crc kubenswrapper[4580]: I0112 13:54:08.570830 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-797bb7bf75-nrxgs_9e077054-5ff2-4f6f-a2bc-12a4b78a0c6b/manager/0.log" Jan 12 13:54:08 crc kubenswrapper[4580]: I0112 13:54:08.581277 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-66dd5b5c84-2pdlb_cc9a79e4-90bc-4e70-afac-8b20ec13504f/webhook-server/0.log" Jan 12 13:54:08 crc kubenswrapper[4580]: I0112 13:54:08.582403 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78c6bccb56-mggmh_f50c1909-7ba3-4d92-9e4e-2cbd2602e340/manager/0.log" Jan 12 13:54:08 crc kubenswrapper[4580]: I0112 13:54:08.611131 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-p4m8m_520c9385-c952-45a9-b1ce-2ad913758239/operator/0.log" Jan 12 13:54:08 crc kubenswrapper[4580]: I0112 13:54:08.657290 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6469d85bcb-smn7v_6a4af572-980a-4c9b-8d01-df30e894dcda/manager/0.log" Jan 12 13:54:08 crc kubenswrapper[4580]: I0112 13:54:08.753305 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-74bd5457c5-95bcj_ed127163-4a57-4b95-9dd7-4c856bd3d126/manager/0.log" Jan 12 13:54:08 crc kubenswrapper[4580]: I0112 13:54:08.764831 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-698b874cb5-4v5jb_00ccc719-ee01-4ff4-934b-6e6fbadaa57c/manager/0.log" Jan 12 13:54:08 crc kubenswrapper[4580]: I0112 13:54:08.775115 4580 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-64cd966744-2z5v7_56a7e345-fce1-44a5-aab4-8d82293bd5ee/manager/0.log" Jan 12 13:54:08 crc kubenswrapper[4580]: I0112 13:54:08.963982 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-qbcvq_6c632190-70af-4fb2-97f2-3ea2cddf0302/speaker/0.log" Jan 12 13:54:08 crc kubenswrapper[4580]: I0112 13:54:08.969765 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-qbcvq_6c632190-70af-4fb2-97f2-3ea2cddf0302/kube-rbac-proxy/0.log" Jan 12 13:54:12 crc kubenswrapper[4580]: I0112 13:54:12.627561 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-jbnkd_3cd49599-ac6f-4d9f-9d86-2f6ff90ddbf9/control-plane-machine-set-operator/0.log" Jan 12 13:54:12 crc kubenswrapper[4580]: I0112 13:54:12.639550 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-89mg9_bdbff407-68ae-456c-b67e-40d0e47fba7b/kube-rbac-proxy/0.log" Jan 12 13:54:12 crc kubenswrapper[4580]: I0112 13:54:12.650723 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-89mg9_bdbff407-68ae-456c-b67e-40d0e47fba7b/machine-api-operator/0.log" Jan 12 13:54:16 crc kubenswrapper[4580]: I0112 13:54:16.949742 4580 patch_prober.go:28] interesting pod/machine-config-daemon-hdz6l container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 12 13:54:16 crc kubenswrapper[4580]: I0112 13:54:16.950273 4580 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 12 13:54:28 crc kubenswrapper[4580]: I0112 13:54:28.059325 4580 generic.go:334] "Generic (PLEG): container finished" podID="ba5933f1-e20e-4928-8111-358afaa3636c" containerID="44742b7896610a83db33b3d710ecb6d8aa9f95540e4475a07845eec6544baf24" exitCode=0 Jan 12 13:54:28 crc kubenswrapper[4580]: I0112 13:54:28.059421 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bjzgj/crc-debug-9hfss" event={"ID":"ba5933f1-e20e-4928-8111-358afaa3636c","Type":"ContainerDied","Data":"44742b7896610a83db33b3d710ecb6d8aa9f95540e4475a07845eec6544baf24"} Jan 12 13:54:29 crc kubenswrapper[4580]: I0112 13:54:29.147277 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bjzgj/crc-debug-9hfss" Jan 12 13:54:29 crc kubenswrapper[4580]: I0112 13:54:29.180466 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-bjzgj/crc-debug-9hfss"] Jan 12 13:54:29 crc kubenswrapper[4580]: I0112 13:54:29.187522 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-bjzgj/crc-debug-9hfss"] Jan 12 13:54:29 crc kubenswrapper[4580]: I0112 13:54:29.241399 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ba5933f1-e20e-4928-8111-358afaa3636c-host\") pod \"ba5933f1-e20e-4928-8111-358afaa3636c\" (UID: \"ba5933f1-e20e-4928-8111-358afaa3636c\") " Jan 12 13:54:29 crc kubenswrapper[4580]: I0112 13:54:29.241448 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8llf\" (UniqueName: \"kubernetes.io/projected/ba5933f1-e20e-4928-8111-358afaa3636c-kube-api-access-q8llf\") pod \"ba5933f1-e20e-4928-8111-358afaa3636c\" (UID: \"ba5933f1-e20e-4928-8111-358afaa3636c\") " Jan 12 13:54:29 crc kubenswrapper[4580]: I0112 13:54:29.241499 4580 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba5933f1-e20e-4928-8111-358afaa3636c-host" (OuterVolumeSpecName: "host") pod "ba5933f1-e20e-4928-8111-358afaa3636c" (UID: "ba5933f1-e20e-4928-8111-358afaa3636c"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 12 13:54:29 crc kubenswrapper[4580]: I0112 13:54:29.241928 4580 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ba5933f1-e20e-4928-8111-358afaa3636c-host\") on node \"crc\" DevicePath \"\"" Jan 12 13:54:29 crc kubenswrapper[4580]: I0112 13:54:29.246862 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba5933f1-e20e-4928-8111-358afaa3636c-kube-api-access-q8llf" (OuterVolumeSpecName: "kube-api-access-q8llf") pod "ba5933f1-e20e-4928-8111-358afaa3636c" (UID: "ba5933f1-e20e-4928-8111-358afaa3636c"). InnerVolumeSpecName "kube-api-access-q8llf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:54:29 crc kubenswrapper[4580]: I0112 13:54:29.290958 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba5933f1-e20e-4928-8111-358afaa3636c" path="/var/lib/kubelet/pods/ba5933f1-e20e-4928-8111-358afaa3636c/volumes" Jan 12 13:54:29 crc kubenswrapper[4580]: I0112 13:54:29.344171 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8llf\" (UniqueName: \"kubernetes.io/projected/ba5933f1-e20e-4928-8111-358afaa3636c-kube-api-access-q8llf\") on node \"crc\" DevicePath \"\"" Jan 12 13:54:30 crc kubenswrapper[4580]: I0112 13:54:30.081302 4580 scope.go:117] "RemoveContainer" containerID="44742b7896610a83db33b3d710ecb6d8aa9f95540e4475a07845eec6544baf24" Jan 12 13:54:30 crc kubenswrapper[4580]: I0112 13:54:30.081369 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bjzgj/crc-debug-9hfss" Jan 12 13:54:30 crc kubenswrapper[4580]: I0112 13:54:30.334792 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-bjzgj/crc-debug-7bpwh"] Jan 12 13:54:30 crc kubenswrapper[4580]: E0112 13:54:30.335175 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba5933f1-e20e-4928-8111-358afaa3636c" containerName="container-00" Jan 12 13:54:30 crc kubenswrapper[4580]: I0112 13:54:30.335190 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba5933f1-e20e-4928-8111-358afaa3636c" containerName="container-00" Jan 12 13:54:30 crc kubenswrapper[4580]: I0112 13:54:30.335416 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba5933f1-e20e-4928-8111-358afaa3636c" containerName="container-00" Jan 12 13:54:30 crc kubenswrapper[4580]: I0112 13:54:30.336002 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bjzgj/crc-debug-7bpwh" Jan 12 13:54:30 crc kubenswrapper[4580]: I0112 13:54:30.337782 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-bjzgj"/"default-dockercfg-rlgv8" Jan 12 13:54:30 crc kubenswrapper[4580]: I0112 13:54:30.465718 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2lzs\" (UniqueName: \"kubernetes.io/projected/41df5ae4-9dd4-4591-b913-1f627781a686-kube-api-access-k2lzs\") pod \"crc-debug-7bpwh\" (UID: \"41df5ae4-9dd4-4591-b913-1f627781a686\") " pod="openshift-must-gather-bjzgj/crc-debug-7bpwh" Jan 12 13:54:30 crc kubenswrapper[4580]: I0112 13:54:30.465775 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/41df5ae4-9dd4-4591-b913-1f627781a686-host\") pod \"crc-debug-7bpwh\" (UID: \"41df5ae4-9dd4-4591-b913-1f627781a686\") " 
pod="openshift-must-gather-bjzgj/crc-debug-7bpwh" Jan 12 13:54:30 crc kubenswrapper[4580]: I0112 13:54:30.568154 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2lzs\" (UniqueName: \"kubernetes.io/projected/41df5ae4-9dd4-4591-b913-1f627781a686-kube-api-access-k2lzs\") pod \"crc-debug-7bpwh\" (UID: \"41df5ae4-9dd4-4591-b913-1f627781a686\") " pod="openshift-must-gather-bjzgj/crc-debug-7bpwh" Jan 12 13:54:30 crc kubenswrapper[4580]: I0112 13:54:30.568473 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/41df5ae4-9dd4-4591-b913-1f627781a686-host\") pod \"crc-debug-7bpwh\" (UID: \"41df5ae4-9dd4-4591-b913-1f627781a686\") " pod="openshift-must-gather-bjzgj/crc-debug-7bpwh" Jan 12 13:54:30 crc kubenswrapper[4580]: I0112 13:54:30.568589 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/41df5ae4-9dd4-4591-b913-1f627781a686-host\") pod \"crc-debug-7bpwh\" (UID: \"41df5ae4-9dd4-4591-b913-1f627781a686\") " pod="openshift-must-gather-bjzgj/crc-debug-7bpwh" Jan 12 13:54:30 crc kubenswrapper[4580]: I0112 13:54:30.589476 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2lzs\" (UniqueName: \"kubernetes.io/projected/41df5ae4-9dd4-4591-b913-1f627781a686-kube-api-access-k2lzs\") pod \"crc-debug-7bpwh\" (UID: \"41df5ae4-9dd4-4591-b913-1f627781a686\") " pod="openshift-must-gather-bjzgj/crc-debug-7bpwh" Jan 12 13:54:30 crc kubenswrapper[4580]: I0112 13:54:30.649397 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bjzgj/crc-debug-7bpwh" Jan 12 13:54:31 crc kubenswrapper[4580]: I0112 13:54:31.093721 4580 generic.go:334] "Generic (PLEG): container finished" podID="41df5ae4-9dd4-4591-b913-1f627781a686" containerID="fa067f1525f8660db7774e8c4efc0bd1ac1bba9803e93fad6026b14795cbfda8" exitCode=0 Jan 12 13:54:31 crc kubenswrapper[4580]: I0112 13:54:31.093860 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bjzgj/crc-debug-7bpwh" event={"ID":"41df5ae4-9dd4-4591-b913-1f627781a686","Type":"ContainerDied","Data":"fa067f1525f8660db7774e8c4efc0bd1ac1bba9803e93fad6026b14795cbfda8"} Jan 12 13:54:31 crc kubenswrapper[4580]: I0112 13:54:31.094191 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bjzgj/crc-debug-7bpwh" event={"ID":"41df5ae4-9dd4-4591-b913-1f627781a686","Type":"ContainerStarted","Data":"c5ffa3948c7a33d8bfe589c61464afa821ab08ee0f92c5524c23665bf521ae42"} Jan 12 13:54:31 crc kubenswrapper[4580]: I0112 13:54:31.615879 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-bjzgj/crc-debug-7bpwh"] Jan 12 13:54:31 crc kubenswrapper[4580]: I0112 13:54:31.622008 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-bjzgj/crc-debug-7bpwh"] Jan 12 13:54:32 crc kubenswrapper[4580]: I0112 13:54:32.180308 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bjzgj/crc-debug-7bpwh" Jan 12 13:54:32 crc kubenswrapper[4580]: I0112 13:54:32.200388 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2lzs\" (UniqueName: \"kubernetes.io/projected/41df5ae4-9dd4-4591-b913-1f627781a686-kube-api-access-k2lzs\") pod \"41df5ae4-9dd4-4591-b913-1f627781a686\" (UID: \"41df5ae4-9dd4-4591-b913-1f627781a686\") " Jan 12 13:54:32 crc kubenswrapper[4580]: I0112 13:54:32.200456 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/41df5ae4-9dd4-4591-b913-1f627781a686-host\") pod \"41df5ae4-9dd4-4591-b913-1f627781a686\" (UID: \"41df5ae4-9dd4-4591-b913-1f627781a686\") " Jan 12 13:54:32 crc kubenswrapper[4580]: I0112 13:54:32.200507 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41df5ae4-9dd4-4591-b913-1f627781a686-host" (OuterVolumeSpecName: "host") pod "41df5ae4-9dd4-4591-b913-1f627781a686" (UID: "41df5ae4-9dd4-4591-b913-1f627781a686"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 12 13:54:32 crc kubenswrapper[4580]: I0112 13:54:32.201287 4580 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/41df5ae4-9dd4-4591-b913-1f627781a686-host\") on node \"crc\" DevicePath \"\"" Jan 12 13:54:32 crc kubenswrapper[4580]: I0112 13:54:32.206616 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41df5ae4-9dd4-4591-b913-1f627781a686-kube-api-access-k2lzs" (OuterVolumeSpecName: "kube-api-access-k2lzs") pod "41df5ae4-9dd4-4591-b913-1f627781a686" (UID: "41df5ae4-9dd4-4591-b913-1f627781a686"). InnerVolumeSpecName "kube-api-access-k2lzs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:54:32 crc kubenswrapper[4580]: I0112 13:54:32.303074 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2lzs\" (UniqueName: \"kubernetes.io/projected/41df5ae4-9dd4-4591-b913-1f627781a686-kube-api-access-k2lzs\") on node \"crc\" DevicePath \"\"" Jan 12 13:54:32 crc kubenswrapper[4580]: I0112 13:54:32.751799 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-bjzgj/crc-debug-7r7jz"] Jan 12 13:54:32 crc kubenswrapper[4580]: E0112 13:54:32.752243 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41df5ae4-9dd4-4591-b913-1f627781a686" containerName="container-00" Jan 12 13:54:32 crc kubenswrapper[4580]: I0112 13:54:32.752258 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="41df5ae4-9dd4-4591-b913-1f627781a686" containerName="container-00" Jan 12 13:54:32 crc kubenswrapper[4580]: I0112 13:54:32.752453 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="41df5ae4-9dd4-4591-b913-1f627781a686" containerName="container-00" Jan 12 13:54:32 crc kubenswrapper[4580]: I0112 13:54:32.753178 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bjzgj/crc-debug-7r7jz" Jan 12 13:54:32 crc kubenswrapper[4580]: I0112 13:54:32.813023 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdmkg\" (UniqueName: \"kubernetes.io/projected/262028e8-0e7a-43a2-9b49-9a24e7fcd2df-kube-api-access-cdmkg\") pod \"crc-debug-7r7jz\" (UID: \"262028e8-0e7a-43a2-9b49-9a24e7fcd2df\") " pod="openshift-must-gather-bjzgj/crc-debug-7r7jz" Jan 12 13:54:32 crc kubenswrapper[4580]: I0112 13:54:32.813086 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/262028e8-0e7a-43a2-9b49-9a24e7fcd2df-host\") pod \"crc-debug-7r7jz\" (UID: \"262028e8-0e7a-43a2-9b49-9a24e7fcd2df\") " pod="openshift-must-gather-bjzgj/crc-debug-7r7jz" Jan 12 13:54:32 crc kubenswrapper[4580]: I0112 13:54:32.915688 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdmkg\" (UniqueName: \"kubernetes.io/projected/262028e8-0e7a-43a2-9b49-9a24e7fcd2df-kube-api-access-cdmkg\") pod \"crc-debug-7r7jz\" (UID: \"262028e8-0e7a-43a2-9b49-9a24e7fcd2df\") " pod="openshift-must-gather-bjzgj/crc-debug-7r7jz" Jan 12 13:54:32 crc kubenswrapper[4580]: I0112 13:54:32.915760 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/262028e8-0e7a-43a2-9b49-9a24e7fcd2df-host\") pod \"crc-debug-7r7jz\" (UID: \"262028e8-0e7a-43a2-9b49-9a24e7fcd2df\") " pod="openshift-must-gather-bjzgj/crc-debug-7r7jz" Jan 12 13:54:32 crc kubenswrapper[4580]: I0112 13:54:32.915904 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/262028e8-0e7a-43a2-9b49-9a24e7fcd2df-host\") pod \"crc-debug-7r7jz\" (UID: \"262028e8-0e7a-43a2-9b49-9a24e7fcd2df\") " pod="openshift-must-gather-bjzgj/crc-debug-7r7jz" Jan 12 13:54:32 crc 
kubenswrapper[4580]: I0112 13:54:32.932715 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdmkg\" (UniqueName: \"kubernetes.io/projected/262028e8-0e7a-43a2-9b49-9a24e7fcd2df-kube-api-access-cdmkg\") pod \"crc-debug-7r7jz\" (UID: \"262028e8-0e7a-43a2-9b49-9a24e7fcd2df\") " pod="openshift-must-gather-bjzgj/crc-debug-7r7jz" Jan 12 13:54:33 crc kubenswrapper[4580]: I0112 13:54:33.066601 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bjzgj/crc-debug-7r7jz" Jan 12 13:54:33 crc kubenswrapper[4580]: W0112 13:54:33.092817 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod262028e8_0e7a_43a2_9b49_9a24e7fcd2df.slice/crio-684e1f1a38bd37b22a789c3275f45086865a5a2883e2a1fb074bc7df9e9cdc3d WatchSource:0}: Error finding container 684e1f1a38bd37b22a789c3275f45086865a5a2883e2a1fb074bc7df9e9cdc3d: Status 404 returned error can't find the container with id 684e1f1a38bd37b22a789c3275f45086865a5a2883e2a1fb074bc7df9e9cdc3d Jan 12 13:54:33 crc kubenswrapper[4580]: I0112 13:54:33.112479 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bjzgj/crc-debug-7r7jz" event={"ID":"262028e8-0e7a-43a2-9b49-9a24e7fcd2df","Type":"ContainerStarted","Data":"684e1f1a38bd37b22a789c3275f45086865a5a2883e2a1fb074bc7df9e9cdc3d"} Jan 12 13:54:33 crc kubenswrapper[4580]: I0112 13:54:33.114256 4580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5ffa3948c7a33d8bfe589c61464afa821ab08ee0f92c5524c23665bf521ae42" Jan 12 13:54:33 crc kubenswrapper[4580]: I0112 13:54:33.114424 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bjzgj/crc-debug-7bpwh" Jan 12 13:54:33 crc kubenswrapper[4580]: I0112 13:54:33.294383 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41df5ae4-9dd4-4591-b913-1f627781a686" path="/var/lib/kubelet/pods/41df5ae4-9dd4-4591-b913-1f627781a686/volumes" Jan 12 13:54:34 crc kubenswrapper[4580]: I0112 13:54:34.124355 4580 generic.go:334] "Generic (PLEG): container finished" podID="262028e8-0e7a-43a2-9b49-9a24e7fcd2df" containerID="402b865a83e9117fe563de489b0ea6213da90668cab12605d2f3d532b1933513" exitCode=0 Jan 12 13:54:34 crc kubenswrapper[4580]: I0112 13:54:34.124474 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bjzgj/crc-debug-7r7jz" event={"ID":"262028e8-0e7a-43a2-9b49-9a24e7fcd2df","Type":"ContainerDied","Data":"402b865a83e9117fe563de489b0ea6213da90668cab12605d2f3d532b1933513"} Jan 12 13:54:34 crc kubenswrapper[4580]: I0112 13:54:34.167521 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-bjzgj/crc-debug-7r7jz"] Jan 12 13:54:34 crc kubenswrapper[4580]: I0112 13:54:34.175161 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-bjzgj/crc-debug-7r7jz"] Jan 12 13:54:34 crc kubenswrapper[4580]: I0112 13:54:34.635831 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-c9fsw_e8e0f177-af2a-4975-a047-6d66bcd9b474/cert-manager-controller/0.log" Jan 12 13:54:34 crc kubenswrapper[4580]: I0112 13:54:34.648637 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-56nml_5ea31dc0-a9ca-4c74-b2aa-7999ef2b94f5/cert-manager-cainjector/0.log" Jan 12 13:54:34 crc kubenswrapper[4580]: I0112 13:54:34.656905 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-dkts4_56ef0925-27e4-4a8f-9a56-3e31c7176270/cert-manager-webhook/0.log" Jan 12 13:54:35 crc 
kubenswrapper[4580]: I0112 13:54:35.222913 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bjzgj/crc-debug-7r7jz" Jan 12 13:54:35 crc kubenswrapper[4580]: I0112 13:54:35.266500 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/262028e8-0e7a-43a2-9b49-9a24e7fcd2df-host\") pod \"262028e8-0e7a-43a2-9b49-9a24e7fcd2df\" (UID: \"262028e8-0e7a-43a2-9b49-9a24e7fcd2df\") " Jan 12 13:54:35 crc kubenswrapper[4580]: I0112 13:54:35.266618 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/262028e8-0e7a-43a2-9b49-9a24e7fcd2df-host" (OuterVolumeSpecName: "host") pod "262028e8-0e7a-43a2-9b49-9a24e7fcd2df" (UID: "262028e8-0e7a-43a2-9b49-9a24e7fcd2df"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 12 13:54:35 crc kubenswrapper[4580]: I0112 13:54:35.266796 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdmkg\" (UniqueName: \"kubernetes.io/projected/262028e8-0e7a-43a2-9b49-9a24e7fcd2df-kube-api-access-cdmkg\") pod \"262028e8-0e7a-43a2-9b49-9a24e7fcd2df\" (UID: \"262028e8-0e7a-43a2-9b49-9a24e7fcd2df\") " Jan 12 13:54:35 crc kubenswrapper[4580]: I0112 13:54:35.267617 4580 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/262028e8-0e7a-43a2-9b49-9a24e7fcd2df-host\") on node \"crc\" DevicePath \"\"" Jan 12 13:54:35 crc kubenswrapper[4580]: I0112 13:54:35.273225 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/262028e8-0e7a-43a2-9b49-9a24e7fcd2df-kube-api-access-cdmkg" (OuterVolumeSpecName: "kube-api-access-cdmkg") pod "262028e8-0e7a-43a2-9b49-9a24e7fcd2df" (UID: "262028e8-0e7a-43a2-9b49-9a24e7fcd2df"). InnerVolumeSpecName "kube-api-access-cdmkg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:54:35 crc kubenswrapper[4580]: I0112 13:54:35.290424 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="262028e8-0e7a-43a2-9b49-9a24e7fcd2df" path="/var/lib/kubelet/pods/262028e8-0e7a-43a2-9b49-9a24e7fcd2df/volumes" Jan 12 13:54:35 crc kubenswrapper[4580]: I0112 13:54:35.369886 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdmkg\" (UniqueName: \"kubernetes.io/projected/262028e8-0e7a-43a2-9b49-9a24e7fcd2df-kube-api-access-cdmkg\") on node \"crc\" DevicePath \"\"" Jan 12 13:54:36 crc kubenswrapper[4580]: I0112 13:54:36.145066 4580 scope.go:117] "RemoveContainer" containerID="402b865a83e9117fe563de489b0ea6213da90668cab12605d2f3d532b1933513" Jan 12 13:54:36 crc kubenswrapper[4580]: I0112 13:54:36.145308 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bjzgj/crc-debug-7r7jz" Jan 12 13:54:39 crc kubenswrapper[4580]: I0112 13:54:39.202653 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-6ff7998486-sngpg_4ce8457b-77a4-4703-b3e8-2a929d02d38d/nmstate-console-plugin/0.log" Jan 12 13:54:39 crc kubenswrapper[4580]: I0112 13:54:39.223731 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-66q7w_714937bd-e28b-4368-8f23-c141e40ea81f/nmstate-handler/0.log" Jan 12 13:54:39 crc kubenswrapper[4580]: I0112 13:54:39.234686 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f7f7578db-t97w5_a7c37982-a0fd-4f9d-950a-ec589bb9753c/nmstate-metrics/0.log" Jan 12 13:54:39 crc kubenswrapper[4580]: I0112 13:54:39.244114 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f7f7578db-t97w5_a7c37982-a0fd-4f9d-950a-ec589bb9753c/kube-rbac-proxy/0.log" Jan 12 13:54:39 crc kubenswrapper[4580]: I0112 13:54:39.261733 4580 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-6769fb99d-p62jb_49e54acb-8939-4c86-b9a4-42741a3356ac/nmstate-operator/0.log" Jan 12 13:54:39 crc kubenswrapper[4580]: I0112 13:54:39.273468 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-f8fb84555-b4qbk_00b7df68-abb5-4b70-b6ef-1495cb7a4725/nmstate-webhook/0.log" Jan 12 13:54:46 crc kubenswrapper[4580]: I0112 13:54:46.949593 4580 patch_prober.go:28] interesting pod/machine-config-daemon-hdz6l container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 12 13:54:46 crc kubenswrapper[4580]: I0112 13:54:46.950329 4580 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 12 13:54:48 crc kubenswrapper[4580]: I0112 13:54:48.610696 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-rxqqw_54899288-3291-42c0-969e-f22dab071c51/controller/0.log" Jan 12 13:54:48 crc kubenswrapper[4580]: I0112 13:54:48.616952 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-rxqqw_54899288-3291-42c0-969e-f22dab071c51/kube-rbac-proxy/0.log" Jan 12 13:54:48 crc kubenswrapper[4580]: I0112 13:54:48.637320 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2n6tl_68e356d2-e51a-494b-a2cc-2c8491e4d8c8/controller/0.log" Jan 12 13:54:49 crc kubenswrapper[4580]: I0112 13:54:49.843592 4580 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-2n6tl_68e356d2-e51a-494b-a2cc-2c8491e4d8c8/frr/0.log" Jan 12 13:54:49 crc kubenswrapper[4580]: I0112 13:54:49.853162 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2n6tl_68e356d2-e51a-494b-a2cc-2c8491e4d8c8/reloader/0.log" Jan 12 13:54:49 crc kubenswrapper[4580]: I0112 13:54:49.859319 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2n6tl_68e356d2-e51a-494b-a2cc-2c8491e4d8c8/frr-metrics/0.log" Jan 12 13:54:49 crc kubenswrapper[4580]: I0112 13:54:49.865285 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2n6tl_68e356d2-e51a-494b-a2cc-2c8491e4d8c8/kube-rbac-proxy/0.log" Jan 12 13:54:49 crc kubenswrapper[4580]: I0112 13:54:49.871898 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2n6tl_68e356d2-e51a-494b-a2cc-2c8491e4d8c8/kube-rbac-proxy-frr/0.log" Jan 12 13:54:49 crc kubenswrapper[4580]: I0112 13:54:49.878909 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2n6tl_68e356d2-e51a-494b-a2cc-2c8491e4d8c8/cp-frr-files/0.log" Jan 12 13:54:49 crc kubenswrapper[4580]: I0112 13:54:49.886897 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2n6tl_68e356d2-e51a-494b-a2cc-2c8491e4d8c8/cp-reloader/0.log" Jan 12 13:54:49 crc kubenswrapper[4580]: I0112 13:54:49.892939 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2n6tl_68e356d2-e51a-494b-a2cc-2c8491e4d8c8/cp-metrics/0.log" Jan 12 13:54:49 crc kubenswrapper[4580]: I0112 13:54:49.904437 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7784b6fcf-pkj9q_83e9a47a-80d6-4d28-ae2d-da27e069932f/frr-k8s-webhook-server/0.log" Jan 12 13:54:49 crc kubenswrapper[4580]: I0112 13:54:49.924003 4580 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-controller-manager-797bb7bf75-nrxgs_9e077054-5ff2-4f6f-a2bc-12a4b78a0c6b/manager/0.log" Jan 12 13:54:49 crc kubenswrapper[4580]: I0112 13:54:49.932693 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-66dd5b5c84-2pdlb_cc9a79e4-90bc-4e70-afac-8b20ec13504f/webhook-server/0.log" Jan 12 13:54:50 crc kubenswrapper[4580]: I0112 13:54:50.284323 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-qbcvq_6c632190-70af-4fb2-97f2-3ea2cddf0302/speaker/0.log" Jan 12 13:54:50 crc kubenswrapper[4580]: I0112 13:54:50.293674 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-qbcvq_6c632190-70af-4fb2-97f2-3ea2cddf0302/kube-rbac-proxy/0.log" Jan 12 13:54:53 crc kubenswrapper[4580]: I0112 13:54:53.478752 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4qz866_3b6ceadd-6368-43ec-9666-7dff30d5ee95/extract/0.log" Jan 12 13:54:53 crc kubenswrapper[4580]: I0112 13:54:53.486072 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4qz866_3b6ceadd-6368-43ec-9666-7dff30d5ee95/util/0.log" Jan 12 13:54:53 crc kubenswrapper[4580]: I0112 13:54:53.492362 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4qz866_3b6ceadd-6368-43ec-9666-7dff30d5ee95/pull/0.log" Jan 12 13:54:53 crc kubenswrapper[4580]: I0112 13:54:53.500468 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa82mfpx_0089b37b-5f6c-4719-98a0-169570a8cfa6/extract/0.log" Jan 12 13:54:53 crc kubenswrapper[4580]: I0112 13:54:53.505491 4580 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa82mfpx_0089b37b-5f6c-4719-98a0-169570a8cfa6/util/0.log" Jan 12 13:54:53 crc kubenswrapper[4580]: I0112 13:54:53.511814 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa82mfpx_0089b37b-5f6c-4719-98a0-169570a8cfa6/pull/0.log" Jan 12 13:54:53 crc kubenswrapper[4580]: I0112 13:54:53.855090 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-89lkz_45d72a58-4072-4c37-95c8-b4668060c64c/registry-server/0.log" Jan 12 13:54:53 crc kubenswrapper[4580]: I0112 13:54:53.860416 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-89lkz_45d72a58-4072-4c37-95c8-b4668060c64c/extract-utilities/0.log" Jan 12 13:54:53 crc kubenswrapper[4580]: I0112 13:54:53.867731 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-89lkz_45d72a58-4072-4c37-95c8-b4668060c64c/extract-content/0.log" Jan 12 13:54:54 crc kubenswrapper[4580]: I0112 13:54:54.366323 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rcbw9_52363f5a-5d4c-406b-bd57-cbde5f393c2c/registry-server/0.log" Jan 12 13:54:54 crc kubenswrapper[4580]: I0112 13:54:54.374138 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rcbw9_52363f5a-5d4c-406b-bd57-cbde5f393c2c/extract-utilities/0.log" Jan 12 13:54:54 crc kubenswrapper[4580]: I0112 13:54:54.382070 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rcbw9_52363f5a-5d4c-406b-bd57-cbde5f393c2c/extract-content/0.log" Jan 12 13:54:54 crc kubenswrapper[4580]: I0112 13:54:54.394412 4580 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-n599t_53e207fa-a98f-4554-8ed8-67ffaa6e5955/marketplace-operator/0.log" Jan 12 13:54:54 crc kubenswrapper[4580]: I0112 13:54:54.536737 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-w5lwr_5677e888-c379-4713-bcf6-e2e31288a0b6/registry-server/0.log" Jan 12 13:54:54 crc kubenswrapper[4580]: I0112 13:54:54.541318 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-w5lwr_5677e888-c379-4713-bcf6-e2e31288a0b6/extract-utilities/0.log" Jan 12 13:54:54 crc kubenswrapper[4580]: I0112 13:54:54.547889 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-w5lwr_5677e888-c379-4713-bcf6-e2e31288a0b6/extract-content/0.log" Jan 12 13:54:55 crc kubenswrapper[4580]: I0112 13:54:55.091368 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ts2q4_2ae20335-c7d3-46ef-84e6-129bc0550ab4/registry-server/0.log" Jan 12 13:54:55 crc kubenswrapper[4580]: I0112 13:54:55.099721 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ts2q4_2ae20335-c7d3-46ef-84e6-129bc0550ab4/extract-utilities/0.log" Jan 12 13:54:55 crc kubenswrapper[4580]: I0112 13:54:55.105759 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ts2q4_2ae20335-c7d3-46ef-84e6-129bc0550ab4/extract-content/0.log" Jan 12 13:55:16 crc kubenswrapper[4580]: I0112 13:55:16.949606 4580 patch_prober.go:28] interesting pod/machine-config-daemon-hdz6l container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 12 13:55:16 crc kubenswrapper[4580]: I0112 13:55:16.950158 4580 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 12 13:55:16 crc kubenswrapper[4580]: I0112 13:55:16.950214 4580 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" Jan 12 13:55:16 crc kubenswrapper[4580]: I0112 13:55:16.951198 4580 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f056eb17695b732b04c0728c626e050e3ff330dfd43e35dfe03fa9c3f1091798"} pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 12 13:55:16 crc kubenswrapper[4580]: I0112 13:55:16.951257 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" containerName="machine-config-daemon" containerID="cri-o://f056eb17695b732b04c0728c626e050e3ff330dfd43e35dfe03fa9c3f1091798" gracePeriod=600 Jan 12 13:55:17 crc kubenswrapper[4580]: I0112 13:55:17.494858 4580 generic.go:334] "Generic (PLEG): container finished" podID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" containerID="f056eb17695b732b04c0728c626e050e3ff330dfd43e35dfe03fa9c3f1091798" exitCode=0 Jan 12 13:55:17 crc kubenswrapper[4580]: I0112 13:55:17.494942 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" event={"ID":"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b","Type":"ContainerDied","Data":"f056eb17695b732b04c0728c626e050e3ff330dfd43e35dfe03fa9c3f1091798"} Jan 12 13:55:17 crc kubenswrapper[4580]: I0112 13:55:17.495155 4580 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" event={"ID":"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b","Type":"ContainerStarted","Data":"094e6fc847e202ee61872ce24e3a26d7ba32df37f59d98679a68486511a55fc9"} Jan 12 13:55:17 crc kubenswrapper[4580]: I0112 13:55:17.495180 4580 scope.go:117] "RemoveContainer" containerID="00a7a2d612a981879dc66a6fe1919adb6186ca0faf533e44a7208cf36337f57c" Jan 12 13:55:52 crc kubenswrapper[4580]: I0112 13:55:52.724577 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-rxqqw_54899288-3291-42c0-969e-f22dab071c51/controller/0.log" Jan 12 13:55:52 crc kubenswrapper[4580]: I0112 13:55:52.730544 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-rxqqw_54899288-3291-42c0-969e-f22dab071c51/kube-rbac-proxy/0.log" Jan 12 13:55:52 crc kubenswrapper[4580]: I0112 13:55:52.748827 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2n6tl_68e356d2-e51a-494b-a2cc-2c8491e4d8c8/controller/0.log" Jan 12 13:55:52 crc kubenswrapper[4580]: I0112 13:55:52.850145 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-c9fsw_e8e0f177-af2a-4975-a047-6d66bcd9b474/cert-manager-controller/0.log" Jan 12 13:55:52 crc kubenswrapper[4580]: I0112 13:55:52.864425 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-56nml_5ea31dc0-a9ca-4c74-b2aa-7999ef2b94f5/cert-manager-cainjector/0.log" Jan 12 13:55:52 crc kubenswrapper[4580]: I0112 13:55:52.873928 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-dkts4_56ef0925-27e4-4a8f-9a56-3e31c7176270/cert-manager-webhook/0.log" Jan 12 13:55:53 crc kubenswrapper[4580]: I0112 13:55:53.759191 4580 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-6c697f55f8-69mz9_7ed21cbb-5825-4538-bfb6-74f895189d83/manager/0.log" Jan 12 13:55:53 crc kubenswrapper[4580]: I0112 13:55:53.805485 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-4b7c9_63a3c1f8-84b5-4648-9a74-bc1e980d5a57/manager/0.log" Jan 12 13:55:53 crc kubenswrapper[4580]: I0112 13:55:53.818593 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-plhvp_cbfff7ce-c184-4dee-94d5-c6ee41fc2b75/manager/0.log" Jan 12 13:55:53 crc kubenswrapper[4580]: I0112 13:55:53.826053 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e5432efd0c40cfd67b5b87e56150fca567dbd15dd757d120066b94ee44pwmns_d27f45e7-f85b-4b23-b849-8e1778cfe3df/extract/0.log" Jan 12 13:55:53 crc kubenswrapper[4580]: I0112 13:55:53.835899 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e5432efd0c40cfd67b5b87e56150fca567dbd15dd757d120066b94ee44pwmns_d27f45e7-f85b-4b23-b849-8e1778cfe3df/util/0.log" Jan 12 13:55:53 crc kubenswrapper[4580]: I0112 13:55:53.844027 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e5432efd0c40cfd67b5b87e56150fca567dbd15dd757d120066b94ee44pwmns_d27f45e7-f85b-4b23-b849-8e1778cfe3df/pull/0.log" Jan 12 13:55:53 crc kubenswrapper[4580]: I0112 13:55:53.964444 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-75b858dccc-nr2g4_b3716289-2aa2-4e39-b8db-7980564c976e/manager/0.log" Jan 12 13:55:53 crc kubenswrapper[4580]: I0112 13:55:53.980405 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-6cd7bcb4bf-nvbml_8cf46bb8-ed1f-491d-90e3-1ef5ebbdfb01/manager/0.log" Jan 12 13:55:54 crc kubenswrapper[4580]: I0112 13:55:54.014994 4580 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-75cb9467dc-r22fp_218c7ab4-85b0-4609-87e6-35d51283e5e0/manager/0.log" Jan 12 13:55:54 crc kubenswrapper[4580]: I0112 13:55:54.058227 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2n6tl_68e356d2-e51a-494b-a2cc-2c8491e4d8c8/frr/0.log" Jan 12 13:55:54 crc kubenswrapper[4580]: I0112 13:55:54.073926 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2n6tl_68e356d2-e51a-494b-a2cc-2c8491e4d8c8/reloader/0.log" Jan 12 13:55:54 crc kubenswrapper[4580]: I0112 13:55:54.082280 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2n6tl_68e356d2-e51a-494b-a2cc-2c8491e4d8c8/frr-metrics/0.log" Jan 12 13:55:54 crc kubenswrapper[4580]: I0112 13:55:54.090807 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2n6tl_68e356d2-e51a-494b-a2cc-2c8491e4d8c8/kube-rbac-proxy/0.log" Jan 12 13:55:54 crc kubenswrapper[4580]: I0112 13:55:54.096862 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2n6tl_68e356d2-e51a-494b-a2cc-2c8491e4d8c8/kube-rbac-proxy-frr/0.log" Jan 12 13:55:54 crc kubenswrapper[4580]: I0112 13:55:54.103299 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2n6tl_68e356d2-e51a-494b-a2cc-2c8491e4d8c8/cp-frr-files/0.log" Jan 12 13:55:54 crc kubenswrapper[4580]: I0112 13:55:54.108900 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2n6tl_68e356d2-e51a-494b-a2cc-2c8491e4d8c8/cp-reloader/0.log" Jan 12 13:55:54 crc kubenswrapper[4580]: I0112 13:55:54.115299 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2n6tl_68e356d2-e51a-494b-a2cc-2c8491e4d8c8/cp-metrics/0.log" Jan 12 13:55:54 crc kubenswrapper[4580]: I0112 13:55:54.124328 4580 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7784b6fcf-pkj9q_83e9a47a-80d6-4d28-ae2d-da27e069932f/frr-k8s-webhook-server/0.log" Jan 12 13:55:54 crc kubenswrapper[4580]: I0112 13:55:54.151132 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-797bb7bf75-nrxgs_9e077054-5ff2-4f6f-a2bc-12a4b78a0c6b/manager/0.log" Jan 12 13:55:54 crc kubenswrapper[4580]: I0112 13:55:54.159807 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-66dd5b5c84-2pdlb_cc9a79e4-90bc-4e70-afac-8b20ec13504f/webhook-server/0.log" Jan 12 13:55:54 crc kubenswrapper[4580]: I0112 13:55:54.352665 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-2sg8z_1135f51b-1f4e-4866-bb7d-728be53f5be7/manager/0.log" Jan 12 13:55:54 crc kubenswrapper[4580]: I0112 13:55:54.363159 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-plnl2_0286f995-6c82-4417-8a67-91b5e261a211/manager/0.log" Jan 12 13:55:54 crc kubenswrapper[4580]: I0112 13:55:54.457549 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-fckr8_726a74db-a499-4c38-8258-b711bc0dc30b/manager/0.log" Jan 12 13:55:54 crc kubenswrapper[4580]: I0112 13:55:54.469605 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-6684f856f9-w2xhg_bf14de2d-3f35-4c32-905c-0a133a4fbafe/manager/0.log" Jan 12 13:55:54 crc kubenswrapper[4580]: I0112 13:55:54.509849 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-8fxxj_87188751-ba97-4f25-ba2c-70514594cb4a/manager/0.log" Jan 12 13:55:54 crc kubenswrapper[4580]: I0112 13:55:54.585208 4580 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-ckfhs_2d0f98f6-67ec-4253-a344-8aa185679126/manager/0.log" Jan 12 13:55:54 crc kubenswrapper[4580]: I0112 13:55:54.592161 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-qbcvq_6c632190-70af-4fb2-97f2-3ea2cddf0302/speaker/0.log" Jan 12 13:55:54 crc kubenswrapper[4580]: I0112 13:55:54.599857 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-qbcvq_6c632190-70af-4fb2-97f2-3ea2cddf0302/kube-rbac-proxy/0.log" Jan 12 13:55:54 crc kubenswrapper[4580]: I0112 13:55:54.672384 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-5977959f9c-sgg8q_eb01c7cd-f8d5-414f-a9f1-cf75a7a6ac1b/manager/0.log" Jan 12 13:55:54 crc kubenswrapper[4580]: I0112 13:55:54.682194 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-nbzhm_7209eb4d-53dc-4c30-9b80-8863acbea5a6/manager/0.log" Jan 12 13:55:54 crc kubenswrapper[4580]: I0112 13:55:54.701792 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-654686dcb9z5ths_ccb61890-3cf7-45aa-974c-693f0d14c14a/manager/0.log" Jan 12 13:55:55 crc kubenswrapper[4580]: I0112 13:55:55.570682 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-c9fsw_e8e0f177-af2a-4975-a047-6d66bcd9b474/cert-manager-controller/0.log" Jan 12 13:55:55 crc kubenswrapper[4580]: I0112 13:55:55.586840 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-56nml_5ea31dc0-a9ca-4c74-b2aa-7999ef2b94f5/cert-manager-cainjector/0.log" Jan 12 13:55:55 crc kubenswrapper[4580]: I0112 13:55:55.595016 4580 log.go:25] "Finished parsing log file" 
path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-dkts4_56ef0925-27e4-4a8f-9a56-3e31c7176270/cert-manager-webhook/0.log" Jan 12 13:55:55 crc kubenswrapper[4580]: I0112 13:55:55.912481 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6659c7dc85-4p8jr_87237fc1-15cd-4dd9-bcfe-5a334d366896/manager/0.log" Jan 12 13:55:56 crc kubenswrapper[4580]: I0112 13:55:56.032739 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-5bf8b477cb-hwd8t_bca66d95-723d-4cd6-bc4c-1a0c564606f3/operator/0.log" Jan 12 13:55:56 crc kubenswrapper[4580]: I0112 13:55:56.082609 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-248h5_c0f0a657-34a2-4619-b992-64ab017e6ecb/registry-server/0.log" Jan 12 13:55:56 crc kubenswrapper[4580]: I0112 13:55:56.137276 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-cf664874d-vznwd_742c889f-d87d-4d61-82f8-2fa3ffc3d6b2/manager/0.log" Jan 12 13:55:56 crc kubenswrapper[4580]: I0112 13:55:56.160536 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78c6bccb56-mggmh_f50c1909-7ba3-4d92-9e4e-2cbd2602e340/manager/0.log" Jan 12 13:55:56 crc kubenswrapper[4580]: I0112 13:55:56.179792 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-p4m8m_520c9385-c952-45a9-b1ce-2ad913758239/operator/0.log" Jan 12 13:55:56 crc kubenswrapper[4580]: I0112 13:55:56.201290 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6469d85bcb-smn7v_6a4af572-980a-4c9b-8d01-df30e894dcda/manager/0.log" Jan 12 13:55:56 crc kubenswrapper[4580]: I0112 13:55:56.229667 4580 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-jbnkd_3cd49599-ac6f-4d9f-9d86-2f6ff90ddbf9/control-plane-machine-set-operator/0.log" Jan 12 13:55:56 crc kubenswrapper[4580]: I0112 13:55:56.241800 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-89mg9_bdbff407-68ae-456c-b67e-40d0e47fba7b/kube-rbac-proxy/0.log" Jan 12 13:55:56 crc kubenswrapper[4580]: I0112 13:55:56.252938 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-89mg9_bdbff407-68ae-456c-b67e-40d0e47fba7b/machine-api-operator/0.log" Jan 12 13:55:56 crc kubenswrapper[4580]: I0112 13:55:56.269061 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-74bd5457c5-95bcj_ed127163-4a57-4b95-9dd7-4c856bd3d126/manager/0.log" Jan 12 13:55:56 crc kubenswrapper[4580]: I0112 13:55:56.280553 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-698b874cb5-4v5jb_00ccc719-ee01-4ff4-934b-6e6fbadaa57c/manager/0.log" Jan 12 13:55:56 crc kubenswrapper[4580]: I0112 13:55:56.294375 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-64cd966744-2z5v7_56a7e345-fce1-44a5-aab4-8d82293bd5ee/manager/0.log" Jan 12 13:55:56 crc kubenswrapper[4580]: I0112 13:55:56.960524 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-6c697f55f8-69mz9_7ed21cbb-5825-4538-bfb6-74f895189d83/manager/0.log" Jan 12 13:55:56 crc kubenswrapper[4580]: I0112 13:55:56.997932 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-4b7c9_63a3c1f8-84b5-4648-9a74-bc1e980d5a57/manager/0.log" Jan 12 13:55:57 crc kubenswrapper[4580]: I0112 13:55:57.007522 4580 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-plhvp_cbfff7ce-c184-4dee-94d5-c6ee41fc2b75/manager/0.log" Jan 12 13:55:57 crc kubenswrapper[4580]: I0112 13:55:57.019403 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e5432efd0c40cfd67b5b87e56150fca567dbd15dd757d120066b94ee44pwmns_d27f45e7-f85b-4b23-b849-8e1778cfe3df/extract/0.log" Jan 12 13:55:57 crc kubenswrapper[4580]: I0112 13:55:57.026050 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e5432efd0c40cfd67b5b87e56150fca567dbd15dd757d120066b94ee44pwmns_d27f45e7-f85b-4b23-b849-8e1778cfe3df/util/0.log" Jan 12 13:55:57 crc kubenswrapper[4580]: I0112 13:55:57.032766 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e5432efd0c40cfd67b5b87e56150fca567dbd15dd757d120066b94ee44pwmns_d27f45e7-f85b-4b23-b849-8e1778cfe3df/pull/0.log" Jan 12 13:55:57 crc kubenswrapper[4580]: I0112 13:55:57.139454 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-75b858dccc-nr2g4_b3716289-2aa2-4e39-b8db-7980564c976e/manager/0.log" Jan 12 13:55:57 crc kubenswrapper[4580]: I0112 13:55:57.150152 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-6cd7bcb4bf-nvbml_8cf46bb8-ed1f-491d-90e3-1ef5ebbdfb01/manager/0.log" Jan 12 13:55:57 crc kubenswrapper[4580]: I0112 13:55:57.177635 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-75cb9467dc-r22fp_218c7ab4-85b0-4609-87e6-35d51283e5e0/manager/0.log" Jan 12 13:55:57 crc kubenswrapper[4580]: I0112 13:55:57.466144 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-2sg8z_1135f51b-1f4e-4866-bb7d-728be53f5be7/manager/0.log" Jan 12 13:55:57 crc 
kubenswrapper[4580]: I0112 13:55:57.477871 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-plnl2_0286f995-6c82-4417-8a67-91b5e261a211/manager/0.log" Jan 12 13:55:57 crc kubenswrapper[4580]: I0112 13:55:57.554779 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-fckr8_726a74db-a499-4c38-8258-b711bc0dc30b/manager/0.log" Jan 12 13:55:57 crc kubenswrapper[4580]: I0112 13:55:57.564725 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-6684f856f9-w2xhg_bf14de2d-3f35-4c32-905c-0a133a4fbafe/manager/0.log" Jan 12 13:55:57 crc kubenswrapper[4580]: I0112 13:55:57.597850 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-8fxxj_87188751-ba97-4f25-ba2c-70514594cb4a/manager/0.log" Jan 12 13:55:57 crc kubenswrapper[4580]: I0112 13:55:57.622161 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-6ff7998486-sngpg_4ce8457b-77a4-4703-b3e8-2a929d02d38d/nmstate-console-plugin/0.log" Jan 12 13:55:57 crc kubenswrapper[4580]: I0112 13:55:57.640684 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-66q7w_714937bd-e28b-4368-8f23-c141e40ea81f/nmstate-handler/0.log" Jan 12 13:55:57 crc kubenswrapper[4580]: I0112 13:55:57.648975 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-ckfhs_2d0f98f6-67ec-4253-a344-8aa185679126/manager/0.log" Jan 12 13:55:57 crc kubenswrapper[4580]: I0112 13:55:57.652700 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f7f7578db-t97w5_a7c37982-a0fd-4f9d-950a-ec589bb9753c/nmstate-metrics/0.log" Jan 12 13:55:57 crc kubenswrapper[4580]: 
I0112 13:55:57.662252 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f7f7578db-t97w5_a7c37982-a0fd-4f9d-950a-ec589bb9753c/kube-rbac-proxy/0.log" Jan 12 13:55:57 crc kubenswrapper[4580]: I0112 13:55:57.680706 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-6769fb99d-p62jb_49e54acb-8939-4c86-b9a4-42741a3356ac/nmstate-operator/0.log" Jan 12 13:55:57 crc kubenswrapper[4580]: I0112 13:55:57.698794 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-f8fb84555-b4qbk_00b7df68-abb5-4b70-b6ef-1495cb7a4725/nmstate-webhook/0.log" Jan 12 13:55:57 crc kubenswrapper[4580]: I0112 13:55:57.727547 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-5977959f9c-sgg8q_eb01c7cd-f8d5-414f-a9f1-cf75a7a6ac1b/manager/0.log" Jan 12 13:55:57 crc kubenswrapper[4580]: I0112 13:55:57.738351 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-nbzhm_7209eb4d-53dc-4c30-9b80-8863acbea5a6/manager/0.log" Jan 12 13:55:57 crc kubenswrapper[4580]: I0112 13:55:57.760472 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-654686dcb9z5ths_ccb61890-3cf7-45aa-974c-693f0d14c14a/manager/0.log" Jan 12 13:55:58 crc kubenswrapper[4580]: I0112 13:55:58.983027 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6659c7dc85-4p8jr_87237fc1-15cd-4dd9-bcfe-5a334d366896/manager/0.log" Jan 12 13:55:59 crc kubenswrapper[4580]: I0112 13:55:59.109375 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-5bf8b477cb-hwd8t_bca66d95-723d-4cd6-bc4c-1a0c564606f3/operator/0.log" Jan 12 13:55:59 crc kubenswrapper[4580]: I0112 
13:55:59.162902 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-248h5_c0f0a657-34a2-4619-b992-64ab017e6ecb/registry-server/0.log" Jan 12 13:55:59 crc kubenswrapper[4580]: I0112 13:55:59.216377 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-cf664874d-vznwd_742c889f-d87d-4d61-82f8-2fa3ffc3d6b2/manager/0.log" Jan 12 13:55:59 crc kubenswrapper[4580]: I0112 13:55:59.240586 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78c6bccb56-mggmh_f50c1909-7ba3-4d92-9e4e-2cbd2602e340/manager/0.log" Jan 12 13:55:59 crc kubenswrapper[4580]: I0112 13:55:59.259531 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-p4m8m_520c9385-c952-45a9-b1ce-2ad913758239/operator/0.log" Jan 12 13:55:59 crc kubenswrapper[4580]: I0112 13:55:59.285556 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6469d85bcb-smn7v_6a4af572-980a-4c9b-8d01-df30e894dcda/manager/0.log" Jan 12 13:55:59 crc kubenswrapper[4580]: I0112 13:55:59.358340 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-74bd5457c5-95bcj_ed127163-4a57-4b95-9dd7-4c856bd3d126/manager/0.log" Jan 12 13:55:59 crc kubenswrapper[4580]: I0112 13:55:59.368002 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-698b874cb5-4v5jb_00ccc719-ee01-4ff4-934b-6e6fbadaa57c/manager/0.log" Jan 12 13:55:59 crc kubenswrapper[4580]: I0112 13:55:59.378498 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-64cd966744-2z5v7_56a7e345-fce1-44a5-aab4-8d82293bd5ee/manager/0.log" Jan 12 13:56:00 crc kubenswrapper[4580]: I0112 
13:56:00.763057 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-2p6r8_d2223aac-784e-4653-8939-fcbd18c70ba7/kube-multus-additional-cni-plugins/0.log" Jan 12 13:56:00 crc kubenswrapper[4580]: I0112 13:56:00.771186 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-2p6r8_d2223aac-784e-4653-8939-fcbd18c70ba7/egress-router-binary-copy/0.log" Jan 12 13:56:00 crc kubenswrapper[4580]: I0112 13:56:00.778153 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-2p6r8_d2223aac-784e-4653-8939-fcbd18c70ba7/cni-plugins/0.log" Jan 12 13:56:00 crc kubenswrapper[4580]: I0112 13:56:00.785222 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-2p6r8_d2223aac-784e-4653-8939-fcbd18c70ba7/bond-cni-plugin/0.log" Jan 12 13:56:00 crc kubenswrapper[4580]: I0112 13:56:00.791252 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-2p6r8_d2223aac-784e-4653-8939-fcbd18c70ba7/routeoverride-cni/0.log" Jan 12 13:56:00 crc kubenswrapper[4580]: I0112 13:56:00.800666 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-2p6r8_d2223aac-784e-4653-8939-fcbd18c70ba7/whereabouts-cni-bincopy/0.log" Jan 12 13:56:00 crc kubenswrapper[4580]: I0112 13:56:00.809747 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-2p6r8_d2223aac-784e-4653-8939-fcbd18c70ba7/whereabouts-cni/0.log" Jan 12 13:56:00 crc kubenswrapper[4580]: I0112 13:56:00.843745 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-2zrh8_e7bdddf2-1c7b-4aa3-81f9-9df58a6e92b1/multus-admission-controller/0.log" Jan 12 13:56:00 crc kubenswrapper[4580]: I0112 13:56:00.848960 4580 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-2zrh8_e7bdddf2-1c7b-4aa3-81f9-9df58a6e92b1/kube-rbac-proxy/0.log" Jan 12 13:56:00 crc kubenswrapper[4580]: I0112 13:56:00.936288 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nnz5s_c8f39bcc-5a25-4746-988b-2251fd1be8c9/kube-multus/2.log" Jan 12 13:56:01 crc kubenswrapper[4580]: I0112 13:56:01.023651 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nnz5s_c8f39bcc-5a25-4746-988b-2251fd1be8c9/kube-multus/3.log" Jan 12 13:56:01 crc kubenswrapper[4580]: I0112 13:56:01.071141 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-jw27h_5066d8fa-2cee-4764-a817-b819d3876638/network-metrics-daemon/0.log" Jan 12 13:56:01 crc kubenswrapper[4580]: I0112 13:56:01.076548 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-jw27h_5066d8fa-2cee-4764-a817-b819d3876638/kube-rbac-proxy/0.log" Jan 12 13:56:03 crc kubenswrapper[4580]: E0112 13:56:03.935974 4580 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 192.168.25.161:53890->192.168.25.161:44257: read tcp 192.168.25.161:53890->192.168.25.161:44257: read: connection reset by peer Jan 12 13:56:46 crc kubenswrapper[4580]: I0112 13:56:46.315284 4580 generic.go:334] "Generic (PLEG): container finished" podID="8ef51343-57ca-4206-a6c7-e1860f15b3d7" containerID="506b854c02c513383c2426d79ceb448fca98ba90d9dd4a69b2e78d3caa847864" exitCode=0 Jan 12 13:56:46 crc kubenswrapper[4580]: I0112 13:56:46.315366 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bjzgj/must-gather-7w9ft" event={"ID":"8ef51343-57ca-4206-a6c7-e1860f15b3d7","Type":"ContainerDied","Data":"506b854c02c513383c2426d79ceb448fca98ba90d9dd4a69b2e78d3caa847864"} Jan 12 13:56:46 crc kubenswrapper[4580]: I0112 13:56:46.316479 4580 
scope.go:117] "RemoveContainer" containerID="506b854c02c513383c2426d79ceb448fca98ba90d9dd4a69b2e78d3caa847864" Jan 12 13:56:46 crc kubenswrapper[4580]: I0112 13:56:46.962206 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-bjzgj_must-gather-7w9ft_8ef51343-57ca-4206-a6c7-e1860f15b3d7/gather/0.log" Jan 12 13:56:54 crc kubenswrapper[4580]: I0112 13:56:54.350358 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-bjzgj/must-gather-7w9ft"] Jan 12 13:56:54 crc kubenswrapper[4580]: I0112 13:56:54.351300 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-bjzgj/must-gather-7w9ft" podUID="8ef51343-57ca-4206-a6c7-e1860f15b3d7" containerName="copy" containerID="cri-o://d78481b500f013a2c518bb524d60fafed6a13e14618ce7a7724faccabf7a73b5" gracePeriod=2 Jan 12 13:56:54 crc kubenswrapper[4580]: I0112 13:56:54.356821 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-bjzgj/must-gather-7w9ft"] Jan 12 13:56:54 crc kubenswrapper[4580]: I0112 13:56:54.718739 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-bjzgj_must-gather-7w9ft_8ef51343-57ca-4206-a6c7-e1860f15b3d7/copy/0.log" Jan 12 13:56:54 crc kubenswrapper[4580]: I0112 13:56:54.719632 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bjzgj/must-gather-7w9ft" Jan 12 13:56:54 crc kubenswrapper[4580]: I0112 13:56:54.819788 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8ef51343-57ca-4206-a6c7-e1860f15b3d7-must-gather-output\") pod \"8ef51343-57ca-4206-a6c7-e1860f15b3d7\" (UID: \"8ef51343-57ca-4206-a6c7-e1860f15b3d7\") " Jan 12 13:56:54 crc kubenswrapper[4580]: I0112 13:56:54.819844 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xplq2\" (UniqueName: \"kubernetes.io/projected/8ef51343-57ca-4206-a6c7-e1860f15b3d7-kube-api-access-xplq2\") pod \"8ef51343-57ca-4206-a6c7-e1860f15b3d7\" (UID: \"8ef51343-57ca-4206-a6c7-e1860f15b3d7\") " Jan 12 13:56:54 crc kubenswrapper[4580]: I0112 13:56:54.828453 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ef51343-57ca-4206-a6c7-e1860f15b3d7-kube-api-access-xplq2" (OuterVolumeSpecName: "kube-api-access-xplq2") pod "8ef51343-57ca-4206-a6c7-e1860f15b3d7" (UID: "8ef51343-57ca-4206-a6c7-e1860f15b3d7"). InnerVolumeSpecName "kube-api-access-xplq2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:56:54 crc kubenswrapper[4580]: I0112 13:56:54.923217 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xplq2\" (UniqueName: \"kubernetes.io/projected/8ef51343-57ca-4206-a6c7-e1860f15b3d7-kube-api-access-xplq2\") on node \"crc\" DevicePath \"\"" Jan 12 13:56:54 crc kubenswrapper[4580]: I0112 13:56:54.973423 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ef51343-57ca-4206-a6c7-e1860f15b3d7-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "8ef51343-57ca-4206-a6c7-e1860f15b3d7" (UID: "8ef51343-57ca-4206-a6c7-e1860f15b3d7"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 12 13:56:55 crc kubenswrapper[4580]: I0112 13:56:55.024395 4580 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8ef51343-57ca-4206-a6c7-e1860f15b3d7-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 12 13:56:55 crc kubenswrapper[4580]: I0112 13:56:55.290845 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ef51343-57ca-4206-a6c7-e1860f15b3d7" path="/var/lib/kubelet/pods/8ef51343-57ca-4206-a6c7-e1860f15b3d7/volumes" Jan 12 13:56:55 crc kubenswrapper[4580]: I0112 13:56:55.438825 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-bjzgj_must-gather-7w9ft_8ef51343-57ca-4206-a6c7-e1860f15b3d7/copy/0.log" Jan 12 13:56:55 crc kubenswrapper[4580]: I0112 13:56:55.439681 4580 generic.go:334] "Generic (PLEG): container finished" podID="8ef51343-57ca-4206-a6c7-e1860f15b3d7" containerID="d78481b500f013a2c518bb524d60fafed6a13e14618ce7a7724faccabf7a73b5" exitCode=143 Jan 12 13:56:55 crc kubenswrapper[4580]: I0112 13:56:55.439745 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bjzgj/must-gather-7w9ft"
Jan 12 13:56:55 crc kubenswrapper[4580]: I0112 13:56:55.439760 4580 scope.go:117] "RemoveContainer" containerID="d78481b500f013a2c518bb524d60fafed6a13e14618ce7a7724faccabf7a73b5"
Jan 12 13:56:55 crc kubenswrapper[4580]: I0112 13:56:55.458588 4580 scope.go:117] "RemoveContainer" containerID="506b854c02c513383c2426d79ceb448fca98ba90d9dd4a69b2e78d3caa847864"
Jan 12 13:56:55 crc kubenswrapper[4580]: I0112 13:56:55.548851 4580 scope.go:117] "RemoveContainer" containerID="d78481b500f013a2c518bb524d60fafed6a13e14618ce7a7724faccabf7a73b5"
Jan 12 13:56:55 crc kubenswrapper[4580]: E0112 13:56:55.549574 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d78481b500f013a2c518bb524d60fafed6a13e14618ce7a7724faccabf7a73b5\": container with ID starting with d78481b500f013a2c518bb524d60fafed6a13e14618ce7a7724faccabf7a73b5 not found: ID does not exist" containerID="d78481b500f013a2c518bb524d60fafed6a13e14618ce7a7724faccabf7a73b5"
Jan 12 13:56:55 crc kubenswrapper[4580]: I0112 13:56:55.549621 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d78481b500f013a2c518bb524d60fafed6a13e14618ce7a7724faccabf7a73b5"} err="failed to get container status \"d78481b500f013a2c518bb524d60fafed6a13e14618ce7a7724faccabf7a73b5\": rpc error: code = NotFound desc = could not find container \"d78481b500f013a2c518bb524d60fafed6a13e14618ce7a7724faccabf7a73b5\": container with ID starting with d78481b500f013a2c518bb524d60fafed6a13e14618ce7a7724faccabf7a73b5 not found: ID does not exist"
Jan 12 13:56:55 crc kubenswrapper[4580]: I0112 13:56:55.549654 4580 scope.go:117] "RemoveContainer" containerID="506b854c02c513383c2426d79ceb448fca98ba90d9dd4a69b2e78d3caa847864"
Jan 12 13:56:55 crc kubenswrapper[4580]: E0112 13:56:55.550034 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"506b854c02c513383c2426d79ceb448fca98ba90d9dd4a69b2e78d3caa847864\": container with ID starting with 506b854c02c513383c2426d79ceb448fca98ba90d9dd4a69b2e78d3caa847864 not found: ID does not exist" containerID="506b854c02c513383c2426d79ceb448fca98ba90d9dd4a69b2e78d3caa847864"
Jan 12 13:56:55 crc kubenswrapper[4580]: I0112 13:56:55.550065 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"506b854c02c513383c2426d79ceb448fca98ba90d9dd4a69b2e78d3caa847864"} err="failed to get container status \"506b854c02c513383c2426d79ceb448fca98ba90d9dd4a69b2e78d3caa847864\": rpc error: code = NotFound desc = could not find container \"506b854c02c513383c2426d79ceb448fca98ba90d9dd4a69b2e78d3caa847864\": container with ID starting with 506b854c02c513383c2426d79ceb448fca98ba90d9dd4a69b2e78d3caa847864 not found: ID does not exist"
Jan 12 13:57:46 crc kubenswrapper[4580]: I0112 13:57:46.949830 4580 patch_prober.go:28] interesting pod/machine-config-daemon-hdz6l container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 12 13:57:46 crc kubenswrapper[4580]: I0112 13:57:46.950548 4580 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 12 13:58:16 crc kubenswrapper[4580]: I0112 13:58:16.949333 4580 patch_prober.go:28] interesting pod/machine-config-daemon-hdz6l container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 12 13:58:16 crc kubenswrapper[4580]: I0112 13:58:16.950037 4580 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 12 13:58:25 crc kubenswrapper[4580]: I0112 13:58:25.843993 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ftvbc/must-gather-cpgcp"]
Jan 12 13:58:25 crc kubenswrapper[4580]: E0112 13:58:25.844889 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ef51343-57ca-4206-a6c7-e1860f15b3d7" containerName="gather"
Jan 12 13:58:25 crc kubenswrapper[4580]: I0112 13:58:25.844904 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ef51343-57ca-4206-a6c7-e1860f15b3d7" containerName="gather"
Jan 12 13:58:25 crc kubenswrapper[4580]: E0112 13:58:25.844923 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="262028e8-0e7a-43a2-9b49-9a24e7fcd2df" containerName="container-00"
Jan 12 13:58:25 crc kubenswrapper[4580]: I0112 13:58:25.844929 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="262028e8-0e7a-43a2-9b49-9a24e7fcd2df" containerName="container-00"
Jan 12 13:58:25 crc kubenswrapper[4580]: E0112 13:58:25.844953 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ef51343-57ca-4206-a6c7-e1860f15b3d7" containerName="copy"
Jan 12 13:58:25 crc kubenswrapper[4580]: I0112 13:58:25.844959 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ef51343-57ca-4206-a6c7-e1860f15b3d7" containerName="copy"
Jan 12 13:58:25 crc kubenswrapper[4580]: I0112 13:58:25.845153 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="262028e8-0e7a-43a2-9b49-9a24e7fcd2df" containerName="container-00"
Jan 12 13:58:25 crc kubenswrapper[4580]: I0112 13:58:25.845174 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ef51343-57ca-4206-a6c7-e1860f15b3d7" containerName="gather"
Jan 12 13:58:25 crc kubenswrapper[4580]: I0112 13:58:25.845180 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ef51343-57ca-4206-a6c7-e1860f15b3d7" containerName="copy"
Jan 12 13:58:25 crc kubenswrapper[4580]: I0112 13:58:25.846068 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ftvbc/must-gather-cpgcp"
Jan 12 13:58:25 crc kubenswrapper[4580]: I0112 13:58:25.850495 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-ftvbc"/"kube-root-ca.crt"
Jan 12 13:58:25 crc kubenswrapper[4580]: I0112 13:58:25.850665 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-ftvbc"/"openshift-service-ca.crt"
Jan 12 13:58:25 crc kubenswrapper[4580]: I0112 13:58:25.861222 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-ftvbc/must-gather-cpgcp"]
Jan 12 13:58:25 crc kubenswrapper[4580]: I0112 13:58:25.935182 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cc5d2d66-2d38-480f-aa31-f9ef36f839c5-must-gather-output\") pod \"must-gather-cpgcp\" (UID: \"cc5d2d66-2d38-480f-aa31-f9ef36f839c5\") " pod="openshift-must-gather-ftvbc/must-gather-cpgcp"
Jan 12 13:58:25 crc kubenswrapper[4580]: I0112 13:58:25.935260 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pq5k\" (UniqueName: \"kubernetes.io/projected/cc5d2d66-2d38-480f-aa31-f9ef36f839c5-kube-api-access-5pq5k\") pod \"must-gather-cpgcp\" (UID: \"cc5d2d66-2d38-480f-aa31-f9ef36f839c5\") " pod="openshift-must-gather-ftvbc/must-gather-cpgcp"
Jan 12 13:58:26 crc kubenswrapper[4580]: I0112 13:58:26.037530 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cc5d2d66-2d38-480f-aa31-f9ef36f839c5-must-gather-output\") pod \"must-gather-cpgcp\" (UID: \"cc5d2d66-2d38-480f-aa31-f9ef36f839c5\") " pod="openshift-must-gather-ftvbc/must-gather-cpgcp"
Jan 12 13:58:26 crc kubenswrapper[4580]: I0112 13:58:26.037624 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pq5k\" (UniqueName: \"kubernetes.io/projected/cc5d2d66-2d38-480f-aa31-f9ef36f839c5-kube-api-access-5pq5k\") pod \"must-gather-cpgcp\" (UID: \"cc5d2d66-2d38-480f-aa31-f9ef36f839c5\") " pod="openshift-must-gather-ftvbc/must-gather-cpgcp"
Jan 12 13:58:26 crc kubenswrapper[4580]: I0112 13:58:26.038020 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cc5d2d66-2d38-480f-aa31-f9ef36f839c5-must-gather-output\") pod \"must-gather-cpgcp\" (UID: \"cc5d2d66-2d38-480f-aa31-f9ef36f839c5\") " pod="openshift-must-gather-ftvbc/must-gather-cpgcp"
Jan 12 13:58:26 crc kubenswrapper[4580]: I0112 13:58:26.055963 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pq5k\" (UniqueName: \"kubernetes.io/projected/cc5d2d66-2d38-480f-aa31-f9ef36f839c5-kube-api-access-5pq5k\") pod \"must-gather-cpgcp\" (UID: \"cc5d2d66-2d38-480f-aa31-f9ef36f839c5\") " pod="openshift-must-gather-ftvbc/must-gather-cpgcp"
Jan 12 13:58:26 crc kubenswrapper[4580]: I0112 13:58:26.171117 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ftvbc/must-gather-cpgcp"
Jan 12 13:58:26 crc kubenswrapper[4580]: I0112 13:58:26.601020 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-ftvbc/must-gather-cpgcp"]
Jan 12 13:58:27 crc kubenswrapper[4580]: I0112 13:58:27.279817 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ftvbc/must-gather-cpgcp" event={"ID":"cc5d2d66-2d38-480f-aa31-f9ef36f839c5","Type":"ContainerStarted","Data":"970bfbaef612a047f6e75c7b963e83b033be9fc1bbebfdbd20ce6136ff6fb9eb"}
Jan 12 13:58:27 crc kubenswrapper[4580]: I0112 13:58:27.280143 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ftvbc/must-gather-cpgcp" event={"ID":"cc5d2d66-2d38-480f-aa31-f9ef36f839c5","Type":"ContainerStarted","Data":"59ab882425a7683f9268b29e18b90d3289323638c87c394cbf5a603593f2a2b4"}
Jan 12 13:58:27 crc kubenswrapper[4580]: I0112 13:58:27.280160 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ftvbc/must-gather-cpgcp" event={"ID":"cc5d2d66-2d38-480f-aa31-f9ef36f839c5","Type":"ContainerStarted","Data":"fa55faff8a9f180a8051fc9f2dfd53cf69d1b2d642e16f1089ee8b466371593f"}
Jan 12 13:58:27 crc kubenswrapper[4580]: I0112 13:58:27.300034 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-ftvbc/must-gather-cpgcp" podStartSLOduration=2.300013166 podStartE2EDuration="2.300013166s" podCreationTimestamp="2026-01-12 13:58:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:58:27.290877818 +0000 UTC m=+3106.335096509" watchObservedRunningTime="2026-01-12 13:58:27.300013166 +0000 UTC m=+3106.344231856"
Jan 12 13:58:29 crc kubenswrapper[4580]: I0112 13:58:29.680270 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ftvbc/crc-debug-mvqgj"]
Jan 12 13:58:29 crc kubenswrapper[4580]: I0112 13:58:29.681840 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ftvbc/crc-debug-mvqgj"
Jan 12 13:58:29 crc kubenswrapper[4580]: I0112 13:58:29.684004 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-ftvbc"/"default-dockercfg-hkbxg"
Jan 12 13:58:29 crc kubenswrapper[4580]: I0112 13:58:29.697435 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/36fddb1c-8245-4b66-ae13-f667016d97b3-host\") pod \"crc-debug-mvqgj\" (UID: \"36fddb1c-8245-4b66-ae13-f667016d97b3\") " pod="openshift-must-gather-ftvbc/crc-debug-mvqgj"
Jan 12 13:58:29 crc kubenswrapper[4580]: I0112 13:58:29.697564 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgzmq\" (UniqueName: \"kubernetes.io/projected/36fddb1c-8245-4b66-ae13-f667016d97b3-kube-api-access-xgzmq\") pod \"crc-debug-mvqgj\" (UID: \"36fddb1c-8245-4b66-ae13-f667016d97b3\") " pod="openshift-must-gather-ftvbc/crc-debug-mvqgj"
Jan 12 13:58:29 crc kubenswrapper[4580]: I0112 13:58:29.799097 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgzmq\" (UniqueName: \"kubernetes.io/projected/36fddb1c-8245-4b66-ae13-f667016d97b3-kube-api-access-xgzmq\") pod \"crc-debug-mvqgj\" (UID: \"36fddb1c-8245-4b66-ae13-f667016d97b3\") " pod="openshift-must-gather-ftvbc/crc-debug-mvqgj"
Jan 12 13:58:29 crc kubenswrapper[4580]: I0112 13:58:29.799223 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/36fddb1c-8245-4b66-ae13-f667016d97b3-host\") pod \"crc-debug-mvqgj\" (UID: \"36fddb1c-8245-4b66-ae13-f667016d97b3\") " pod="openshift-must-gather-ftvbc/crc-debug-mvqgj"
Jan 12 13:58:29 crc kubenswrapper[4580]: I0112 13:58:29.799372 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/36fddb1c-8245-4b66-ae13-f667016d97b3-host\") pod \"crc-debug-mvqgj\" (UID: \"36fddb1c-8245-4b66-ae13-f667016d97b3\") " pod="openshift-must-gather-ftvbc/crc-debug-mvqgj"
Jan 12 13:58:29 crc kubenswrapper[4580]: I0112 13:58:29.816883 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgzmq\" (UniqueName: \"kubernetes.io/projected/36fddb1c-8245-4b66-ae13-f667016d97b3-kube-api-access-xgzmq\") pod \"crc-debug-mvqgj\" (UID: \"36fddb1c-8245-4b66-ae13-f667016d97b3\") " pod="openshift-must-gather-ftvbc/crc-debug-mvqgj"
Jan 12 13:58:29 crc kubenswrapper[4580]: I0112 13:58:29.999826 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ftvbc/crc-debug-mvqgj"
Jan 12 13:58:30 crc kubenswrapper[4580]: W0112 13:58:30.035178 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36fddb1c_8245_4b66_ae13_f667016d97b3.slice/crio-5272390dcd70cebcf4ce09fef1f022de6aef257f006d55f4d12f6a1fdc9894c0 WatchSource:0}: Error finding container 5272390dcd70cebcf4ce09fef1f022de6aef257f006d55f4d12f6a1fdc9894c0: Status 404 returned error can't find the container with id 5272390dcd70cebcf4ce09fef1f022de6aef257f006d55f4d12f6a1fdc9894c0
Jan 12 13:58:30 crc kubenswrapper[4580]: I0112 13:58:30.310089 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ftvbc/crc-debug-mvqgj" event={"ID":"36fddb1c-8245-4b66-ae13-f667016d97b3","Type":"ContainerStarted","Data":"4ed63371922829a497142e003198f50a1ddaa131f2f60906303a67d10a043862"}
Jan 12 13:58:30 crc kubenswrapper[4580]: I0112 13:58:30.310470 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ftvbc/crc-debug-mvqgj" event={"ID":"36fddb1c-8245-4b66-ae13-f667016d97b3","Type":"ContainerStarted","Data":"5272390dcd70cebcf4ce09fef1f022de6aef257f006d55f4d12f6a1fdc9894c0"}
Jan 12 13:58:30 crc kubenswrapper[4580]: I0112 13:58:30.326150 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-ftvbc/crc-debug-mvqgj" podStartSLOduration=1.326128381 podStartE2EDuration="1.326128381s" podCreationTimestamp="2026-01-12 13:58:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 13:58:30.323926993 +0000 UTC m=+3109.368145672" watchObservedRunningTime="2026-01-12 13:58:30.326128381 +0000 UTC m=+3109.370347072"
Jan 12 13:58:31 crc kubenswrapper[4580]: I0112 13:58:31.487555 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8vbfd"]
Jan 12 13:58:31 crc kubenswrapper[4580]: I0112 13:58:31.497582 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8vbfd"
Jan 12 13:58:31 crc kubenswrapper[4580]: I0112 13:58:31.518930 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8vbfd"]
Jan 12 13:58:31 crc kubenswrapper[4580]: I0112 13:58:31.637188 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba1bde8e-df94-47c8-a511-82b03acfb556-utilities\") pod \"redhat-marketplace-8vbfd\" (UID: \"ba1bde8e-df94-47c8-a511-82b03acfb556\") " pod="openshift-marketplace/redhat-marketplace-8vbfd"
Jan 12 13:58:31 crc kubenswrapper[4580]: I0112 13:58:31.637250 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ztrx\" (UniqueName: \"kubernetes.io/projected/ba1bde8e-df94-47c8-a511-82b03acfb556-kube-api-access-7ztrx\") pod \"redhat-marketplace-8vbfd\" (UID: \"ba1bde8e-df94-47c8-a511-82b03acfb556\") " pod="openshift-marketplace/redhat-marketplace-8vbfd"
Jan 12 13:58:31 crc kubenswrapper[4580]: I0112 13:58:31.637276 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba1bde8e-df94-47c8-a511-82b03acfb556-catalog-content\") pod \"redhat-marketplace-8vbfd\" (UID: \"ba1bde8e-df94-47c8-a511-82b03acfb556\") " pod="openshift-marketplace/redhat-marketplace-8vbfd"
Jan 12 13:58:31 crc kubenswrapper[4580]: I0112 13:58:31.739743 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba1bde8e-df94-47c8-a511-82b03acfb556-utilities\") pod \"redhat-marketplace-8vbfd\" (UID: \"ba1bde8e-df94-47c8-a511-82b03acfb556\") " pod="openshift-marketplace/redhat-marketplace-8vbfd"
Jan 12 13:58:31 crc kubenswrapper[4580]: I0112 13:58:31.740021 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ztrx\" (UniqueName: \"kubernetes.io/projected/ba1bde8e-df94-47c8-a511-82b03acfb556-kube-api-access-7ztrx\") pod \"redhat-marketplace-8vbfd\" (UID: \"ba1bde8e-df94-47c8-a511-82b03acfb556\") " pod="openshift-marketplace/redhat-marketplace-8vbfd"
Jan 12 13:58:31 crc kubenswrapper[4580]: I0112 13:58:31.740048 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba1bde8e-df94-47c8-a511-82b03acfb556-catalog-content\") pod \"redhat-marketplace-8vbfd\" (UID: \"ba1bde8e-df94-47c8-a511-82b03acfb556\") " pod="openshift-marketplace/redhat-marketplace-8vbfd"
Jan 12 13:58:31 crc kubenswrapper[4580]: I0112 13:58:31.740300 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba1bde8e-df94-47c8-a511-82b03acfb556-utilities\") pod \"redhat-marketplace-8vbfd\" (UID: \"ba1bde8e-df94-47c8-a511-82b03acfb556\") " pod="openshift-marketplace/redhat-marketplace-8vbfd"
Jan 12 13:58:31 crc kubenswrapper[4580]: I0112 13:58:31.740557 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba1bde8e-df94-47c8-a511-82b03acfb556-catalog-content\") pod \"redhat-marketplace-8vbfd\" (UID: \"ba1bde8e-df94-47c8-a511-82b03acfb556\") " pod="openshift-marketplace/redhat-marketplace-8vbfd"
Jan 12 13:58:31 crc kubenswrapper[4580]: I0112 13:58:31.759912 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ztrx\" (UniqueName: \"kubernetes.io/projected/ba1bde8e-df94-47c8-a511-82b03acfb556-kube-api-access-7ztrx\") pod \"redhat-marketplace-8vbfd\" (UID: \"ba1bde8e-df94-47c8-a511-82b03acfb556\") " pod="openshift-marketplace/redhat-marketplace-8vbfd"
Jan 12 13:58:31 crc kubenswrapper[4580]: I0112 13:58:31.833931 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8vbfd"
Jan 12 13:58:31 crc kubenswrapper[4580]: I0112 13:58:31.877394 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-75699d8f8b-jqxcw_722ce4c4-5517-412c-b3c4-aafc83db85dc/barbican-api-log/0.log"
Jan 12 13:58:31 crc kubenswrapper[4580]: I0112 13:58:31.883248 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-75699d8f8b-jqxcw_722ce4c4-5517-412c-b3c4-aafc83db85dc/barbican-api/0.log"
Jan 12 13:58:31 crc kubenswrapper[4580]: I0112 13:58:31.905475 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5bfdbc7dc6-bk5g5_1b072324-9c35-458a-8d4a-1759b9ed2883/barbican-keystone-listener-log/0.log"
Jan 12 13:58:31 crc kubenswrapper[4580]: I0112 13:58:31.919371 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5bfdbc7dc6-bk5g5_1b072324-9c35-458a-8d4a-1759b9ed2883/barbican-keystone-listener/0.log"
Jan 12 13:58:31 crc kubenswrapper[4580]: I0112 13:58:31.933250 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-655fc5cf45-jcmxp_eed9373c-ecc9-4510-bb6b-9171b70a9088/barbican-worker-log/0.log"
Jan 12 13:58:31 crc kubenswrapper[4580]: I0112 13:58:31.940581 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-655fc5cf45-jcmxp_eed9373c-ecc9-4510-bb6b-9171b70a9088/barbican-worker/0.log"
Jan 12 13:58:31 crc kubenswrapper[4580]: I0112 13:58:31.966218 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-pjj7j_2a4039fd-f1bf-4fdd-881a-192b4b4c8a35/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 12 13:58:31 crc kubenswrapper[4580]: I0112 13:58:31.994429 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_d8f9e6c7-d6cd-496f-b009-6bb336d25ebe/ceilometer-central-agent/0.log"
Jan 12 13:58:32 crc kubenswrapper[4580]: I0112 13:58:32.054081 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_d8f9e6c7-d6cd-496f-b009-6bb336d25ebe/ceilometer-notification-agent/0.log"
Jan 12 13:58:32 crc kubenswrapper[4580]: I0112 13:58:32.105482 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_d8f9e6c7-d6cd-496f-b009-6bb336d25ebe/sg-core/0.log"
Jan 12 13:58:32 crc kubenswrapper[4580]: I0112 13:58:32.191743 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_d8f9e6c7-d6cd-496f-b009-6bb336d25ebe/proxy-httpd/0.log"
Jan 12 13:58:32 crc kubenswrapper[4580]: I0112 13:58:32.209909 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_977708c4-8759-44d1-8d90-6226077e8044/cinder-api-log/0.log"
Jan 12 13:58:32 crc kubenswrapper[4580]: I0112 13:58:32.250276 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_977708c4-8759-44d1-8d90-6226077e8044/cinder-api/0.log"
Jan 12 13:58:32 crc kubenswrapper[4580]: I0112 13:58:32.313388 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_0f0d4cc9-9655-43d1-b588-ae5326765c36/cinder-scheduler/0.log"
Jan 12 13:58:32 crc kubenswrapper[4580]: I0112 13:58:32.343529 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_0f0d4cc9-9655-43d1-b588-ae5326765c36/probe/0.log"
Jan 12 13:58:32 crc kubenswrapper[4580]: I0112 13:58:32.364467 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-5hkmr_b647e7dc-a5cd-4e7e-a5fe-744a53b4c3e9/configure-network-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 12 13:58:32 crc kubenswrapper[4580]: I0112 13:58:32.381629 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-c6jhw_bd995c62-9850-41cf-91c1-aa47ac294147/configure-os-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 12 13:58:32 crc kubenswrapper[4580]: I0112 13:58:32.426788 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-d7b79b84c-xlkwf_5eee677a-4caa-4107-a64f-cee518dfed89/dnsmasq-dns/0.log"
Jan 12 13:58:32 crc kubenswrapper[4580]: I0112 13:58:32.435210 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-d7b79b84c-xlkwf_5eee677a-4caa-4107-a64f-cee518dfed89/init/0.log"
Jan 12 13:58:32 crc kubenswrapper[4580]: I0112 13:58:32.450488 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-8krps_3dce5050-a090-4782-a068-efafd359455a/download-cache-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 12 13:58:32 crc kubenswrapper[4580]: I0112 13:58:32.456005 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8vbfd"]
Jan 12 13:58:32 crc kubenswrapper[4580]: W0112 13:58:32.468184 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba1bde8e_df94_47c8_a511_82b03acfb556.slice/crio-7250e2aae850a82c5f06df8c138e375079457c4b9cc60161bb04334b1b244dc4 WatchSource:0}: Error finding container 7250e2aae850a82c5f06df8c138e375079457c4b9cc60161bb04334b1b244dc4: Status 404 returned error can't find the container with id 7250e2aae850a82c5f06df8c138e375079457c4b9cc60161bb04334b1b244dc4
Jan 12 13:58:32 crc kubenswrapper[4580]: I0112 13:58:32.489057 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_3e7614df-e73b-47f5-b7f0-d942ea24c4f0/glance-log/0.log"
Jan 12 13:58:32 crc kubenswrapper[4580]: I0112 13:58:32.503422 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_3e7614df-e73b-47f5-b7f0-d942ea24c4f0/glance-httpd/0.log"
Jan 12 13:58:32 crc kubenswrapper[4580]: I0112 13:58:32.516166 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_026b9966-ae00-4f6a-be8d-bb1d9fffbef3/glance-log/0.log"
Jan 12 13:58:32 crc kubenswrapper[4580]: I0112 13:58:32.537622 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_026b9966-ae00-4f6a-be8d-bb1d9fffbef3/glance-httpd/0.log"
Jan 12 13:58:32 crc kubenswrapper[4580]: I0112 13:58:32.815487 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-8699b457dd-z2fkt_92d059e4-ff2b-4ecc-ae14-6367d54e720f/horizon-log/0.log"
Jan 12 13:58:32 crc kubenswrapper[4580]: I0112 13:58:32.896987 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-8699b457dd-z2fkt_92d059e4-ff2b-4ecc-ae14-6367d54e720f/horizon/0.log"
Jan 12 13:58:32 crc kubenswrapper[4580]: I0112 13:58:32.917197 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-tc2c4_bf2989c5-6b0d-458d-98c5-7849febf7787/install-certs-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 12 13:58:32 crc kubenswrapper[4580]: I0112 13:58:32.941969 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-6cfvt_340ac203-3af7-4abd-b75c-bf97009c24e9/install-os-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 12 13:58:33 crc kubenswrapper[4580]: I0112 13:58:33.078692 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-696c64b546-cw888_9fad586a-c41d-44da-8144-75dcb27fe7e9/keystone-api/0.log"
Jan 12 13:58:33 crc kubenswrapper[4580]: I0112 13:58:33.086019 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_4651196b-71ee-434b-bb63-e77f16c744e4/kube-state-metrics/0.log"
Jan 12 13:58:33 crc kubenswrapper[4580]: I0112 13:58:33.119154 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-krb6r_7537a508-8a6d-43df-8d76-a845464edfa9/libvirt-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 12 13:58:33 crc kubenswrapper[4580]: I0112 13:58:33.339948 4580 generic.go:334] "Generic (PLEG): container finished" podID="ba1bde8e-df94-47c8-a511-82b03acfb556" containerID="badb0c8338d38ef8e5d3516b54a249d2c5a5ed6b1ad77114d830422e9c6ae29d" exitCode=0
Jan 12 13:58:33 crc kubenswrapper[4580]: I0112 13:58:33.340164 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8vbfd" event={"ID":"ba1bde8e-df94-47c8-a511-82b03acfb556","Type":"ContainerDied","Data":"badb0c8338d38ef8e5d3516b54a249d2c5a5ed6b1ad77114d830422e9c6ae29d"}
Jan 12 13:58:33 crc kubenswrapper[4580]: I0112 13:58:33.340231 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8vbfd" event={"ID":"ba1bde8e-df94-47c8-a511-82b03acfb556","Type":"ContainerStarted","Data":"7250e2aae850a82c5f06df8c138e375079457c4b9cc60161bb04334b1b244dc4"}
Jan 12 13:58:33 crc kubenswrapper[4580]: I0112 13:58:33.344307 4580 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 12 13:58:34 crc kubenswrapper[4580]: I0112 13:58:34.114875 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-w5ftc"]
Jan 12 13:58:34 crc kubenswrapper[4580]: I0112 13:58:34.120976 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w5ftc"
Jan 12 13:58:34 crc kubenswrapper[4580]: I0112 13:58:34.149451 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w5ftc"]
Jan 12 13:58:34 crc kubenswrapper[4580]: I0112 13:58:34.194053 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78a0b29c-8dcb-4b0c-bd9a-28abc2528221-utilities\") pod \"community-operators-w5ftc\" (UID: \"78a0b29c-8dcb-4b0c-bd9a-28abc2528221\") " pod="openshift-marketplace/community-operators-w5ftc"
Jan 12 13:58:34 crc kubenswrapper[4580]: I0112 13:58:34.194196 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkhzv\" (UniqueName: \"kubernetes.io/projected/78a0b29c-8dcb-4b0c-bd9a-28abc2528221-kube-api-access-bkhzv\") pod \"community-operators-w5ftc\" (UID: \"78a0b29c-8dcb-4b0c-bd9a-28abc2528221\") " pod="openshift-marketplace/community-operators-w5ftc"
Jan 12 13:58:34 crc kubenswrapper[4580]: I0112 13:58:34.194349 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78a0b29c-8dcb-4b0c-bd9a-28abc2528221-catalog-content\") pod \"community-operators-w5ftc\" (UID: \"78a0b29c-8dcb-4b0c-bd9a-28abc2528221\") " pod="openshift-marketplace/community-operators-w5ftc"
Jan 12 13:58:34 crc kubenswrapper[4580]: I0112 13:58:34.296659 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78a0b29c-8dcb-4b0c-bd9a-28abc2528221-utilities\") pod \"community-operators-w5ftc\" (UID: \"78a0b29c-8dcb-4b0c-bd9a-28abc2528221\") " pod="openshift-marketplace/community-operators-w5ftc"
Jan 12 13:58:34 crc kubenswrapper[4580]: I0112 13:58:34.296868 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkhzv\" (UniqueName: \"kubernetes.io/projected/78a0b29c-8dcb-4b0c-bd9a-28abc2528221-kube-api-access-bkhzv\") pod \"community-operators-w5ftc\" (UID: \"78a0b29c-8dcb-4b0c-bd9a-28abc2528221\") " pod="openshift-marketplace/community-operators-w5ftc"
Jan 12 13:58:34 crc kubenswrapper[4580]: I0112 13:58:34.296946 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78a0b29c-8dcb-4b0c-bd9a-28abc2528221-catalog-content\") pod \"community-operators-w5ftc\" (UID: \"78a0b29c-8dcb-4b0c-bd9a-28abc2528221\") " pod="openshift-marketplace/community-operators-w5ftc"
Jan 12 13:58:34 crc kubenswrapper[4580]: I0112 13:58:34.299001 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78a0b29c-8dcb-4b0c-bd9a-28abc2528221-catalog-content\") pod \"community-operators-w5ftc\" (UID: \"78a0b29c-8dcb-4b0c-bd9a-28abc2528221\") " pod="openshift-marketplace/community-operators-w5ftc"
Jan 12 13:58:34 crc kubenswrapper[4580]: I0112 13:58:34.299273 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78a0b29c-8dcb-4b0c-bd9a-28abc2528221-utilities\") pod \"community-operators-w5ftc\" (UID: \"78a0b29c-8dcb-4b0c-bd9a-28abc2528221\") " pod="openshift-marketplace/community-operators-w5ftc"
Jan 12 13:58:34 crc kubenswrapper[4580]: I0112 13:58:34.318814 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkhzv\" (UniqueName: \"kubernetes.io/projected/78a0b29c-8dcb-4b0c-bd9a-28abc2528221-kube-api-access-bkhzv\") pod \"community-operators-w5ftc\" (UID: \"78a0b29c-8dcb-4b0c-bd9a-28abc2528221\") " pod="openshift-marketplace/community-operators-w5ftc"
Jan 12 13:58:34 crc kubenswrapper[4580]: I0112 13:58:34.358142 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8vbfd" event={"ID":"ba1bde8e-df94-47c8-a511-82b03acfb556","Type":"ContainerStarted","Data":"e354f3b4638ce4d739a546e7525bd20409416355fdaebfd8658208b9693e1cd0"}
Jan 12 13:58:34 crc kubenswrapper[4580]: I0112 13:58:34.466116 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w5ftc"
Jan 12 13:58:34 crc kubenswrapper[4580]: I0112 13:58:34.993293 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w5ftc"]
Jan 12 13:58:35 crc kubenswrapper[4580]: I0112 13:58:35.365507 4580 generic.go:334] "Generic (PLEG): container finished" podID="78a0b29c-8dcb-4b0c-bd9a-28abc2528221" containerID="5fb434567863163011b295062d8db77f322539fd7455491be716a0a0c204476e" exitCode=0
Jan 12 13:58:35 crc kubenswrapper[4580]: I0112 13:58:35.365576 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w5ftc" event={"ID":"78a0b29c-8dcb-4b0c-bd9a-28abc2528221","Type":"ContainerDied","Data":"5fb434567863163011b295062d8db77f322539fd7455491be716a0a0c204476e"}
Jan 12 13:58:35 crc kubenswrapper[4580]: I0112 13:58:35.365606 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w5ftc" event={"ID":"78a0b29c-8dcb-4b0c-bd9a-28abc2528221","Type":"ContainerStarted","Data":"5bee447c863a646cb43fc43479dcbaa8a15c94ce4377d06f4ba4cb3f2fee39ac"}
Jan 12 13:58:35 crc kubenswrapper[4580]: I0112 13:58:35.371015 4580 generic.go:334] "Generic (PLEG): container finished" podID="ba1bde8e-df94-47c8-a511-82b03acfb556" containerID="e354f3b4638ce4d739a546e7525bd20409416355fdaebfd8658208b9693e1cd0" exitCode=0
Jan 12 13:58:35 crc kubenswrapper[4580]: I0112 13:58:35.371066 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8vbfd" event={"ID":"ba1bde8e-df94-47c8-a511-82b03acfb556","Type":"ContainerDied","Data":"e354f3b4638ce4d739a546e7525bd20409416355fdaebfd8658208b9693e1cd0"}
Jan 12 13:58:36 crc kubenswrapper[4580]: I0112 13:58:36.381040 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8vbfd" event={"ID":"ba1bde8e-df94-47c8-a511-82b03acfb556","Type":"ContainerStarted","Data":"2c1d5c246a5af653e6e89e129c37498430501094eae15fdc5e70c1e0f9d62d8a"}
Jan 12 13:58:36 crc kubenswrapper[4580]: I0112 13:58:36.400174 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8vbfd" podStartSLOduration=2.72747484 podStartE2EDuration="5.400157751s" podCreationTimestamp="2026-01-12 13:58:31 +0000 UTC" firstStartedPulling="2026-01-12 13:58:33.344045261 +0000 UTC m=+3112.388263951" lastFinishedPulling="2026-01-12 13:58:36.016728172 +0000 UTC m=+3115.060946862" observedRunningTime="2026-01-12 13:58:36.397124358 +0000 UTC m=+3115.441343049" watchObservedRunningTime="2026-01-12 13:58:36.400157751 +0000 UTC m=+3115.444376441"
Jan 12 13:58:37 crc kubenswrapper[4580]: I0112 13:58:37.390181 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w5ftc" event={"ID":"78a0b29c-8dcb-4b0c-bd9a-28abc2528221","Type":"ContainerStarted","Data":"57d225011c9eb674f7e04f7e2b91c9669626a4702f7ed2e299cb9e281598bdc0"}
Jan 12 13:58:38 crc kubenswrapper[4580]: I0112 13:58:38.402853 4580 generic.go:334] "Generic (PLEG): container finished" podID="78a0b29c-8dcb-4b0c-bd9a-28abc2528221" containerID="57d225011c9eb674f7e04f7e2b91c9669626a4702f7ed2e299cb9e281598bdc0" exitCode=0
Jan 12 13:58:38 crc kubenswrapper[4580]: I0112 13:58:38.402959 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w5ftc" event={"ID":"78a0b29c-8dcb-4b0c-bd9a-28abc2528221","Type":"ContainerDied","Data":"57d225011c9eb674f7e04f7e2b91c9669626a4702f7ed2e299cb9e281598bdc0"}
Jan 12 13:58:39 crc kubenswrapper[4580]: I0112 13:58:39.418774 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w5ftc" event={"ID":"78a0b29c-8dcb-4b0c-bd9a-28abc2528221","Type":"ContainerStarted","Data":"dc5f38c603f2d8a423edf31a21c0827ddfa20bf29fb5d3a89512c013d6e6e4d5"}
Jan 12 13:58:39 crc kubenswrapper[4580]: I0112 13:58:39.443542 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-w5ftc" podStartSLOduration=1.712896981 podStartE2EDuration="5.443523885s" podCreationTimestamp="2026-01-12 13:58:34 +0000 UTC" firstStartedPulling="2026-01-12 13:58:35.36868856 +0000 UTC m=+3114.412907249" lastFinishedPulling="2026-01-12 13:58:39.099315462 +0000 UTC m=+3118.143534153" observedRunningTime="2026-01-12 13:58:39.439843715 +0000 UTC m=+3118.484062405" watchObservedRunningTime="2026-01-12 13:58:39.443523885 +0000 UTC m=+3118.487742575"
Jan 12 13:58:41 crc kubenswrapper[4580]: I0112 13:58:41.834059 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8vbfd"
Jan 12 13:58:41 crc kubenswrapper[4580]: I0112 13:58:41.834486 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8vbfd"
Jan 12 13:58:41 crc kubenswrapper[4580]: I0112 13:58:41.873827 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8vbfd"
Jan 12 13:58:42 crc kubenswrapper[4580]: I0112 13:58:42.510590 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8vbfd"
Jan 12 13:58:43 crc kubenswrapper[4580]: I0112 13:58:43.083032 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8vbfd"]
Jan 12 13:58:44 crc kubenswrapper[4580]: I0112 13:58:44.466280 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-w5ftc"
Jan 12 13:58:44 crc kubenswrapper[4580]: I0112 13:58:44.466675 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-w5ftc"
Jan 12 13:58:44 crc kubenswrapper[4580]: I0112 13:58:44.487905 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8vbfd" podUID="ba1bde8e-df94-47c8-a511-82b03acfb556" containerName="registry-server" containerID="cri-o://2c1d5c246a5af653e6e89e129c37498430501094eae15fdc5e70c1e0f9d62d8a" gracePeriod=2
Jan 12 13:58:44 crc kubenswrapper[4580]: I0112 13:58:44.522875 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-w5ftc"
Jan 12 13:58:44 crc kubenswrapper[4580]: I0112 13:58:44.582818 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-w5ftc"
Jan 12 13:58:44 crc kubenswrapper[4580]: I0112 13:58:44.967837 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8vbfd" Jan 12 13:58:45 crc kubenswrapper[4580]: I0112 13:58:45.138835 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ztrx\" (UniqueName: \"kubernetes.io/projected/ba1bde8e-df94-47c8-a511-82b03acfb556-kube-api-access-7ztrx\") pod \"ba1bde8e-df94-47c8-a511-82b03acfb556\" (UID: \"ba1bde8e-df94-47c8-a511-82b03acfb556\") " Jan 12 13:58:45 crc kubenswrapper[4580]: I0112 13:58:45.138883 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba1bde8e-df94-47c8-a511-82b03acfb556-catalog-content\") pod \"ba1bde8e-df94-47c8-a511-82b03acfb556\" (UID: \"ba1bde8e-df94-47c8-a511-82b03acfb556\") " Jan 12 13:58:45 crc kubenswrapper[4580]: I0112 13:58:45.138980 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba1bde8e-df94-47c8-a511-82b03acfb556-utilities\") pod \"ba1bde8e-df94-47c8-a511-82b03acfb556\" (UID: \"ba1bde8e-df94-47c8-a511-82b03acfb556\") " Jan 12 13:58:45 crc kubenswrapper[4580]: I0112 13:58:45.140135 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba1bde8e-df94-47c8-a511-82b03acfb556-utilities" (OuterVolumeSpecName: "utilities") pod "ba1bde8e-df94-47c8-a511-82b03acfb556" (UID: "ba1bde8e-df94-47c8-a511-82b03acfb556"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 12 13:58:45 crc kubenswrapper[4580]: I0112 13:58:45.161010 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba1bde8e-df94-47c8-a511-82b03acfb556-kube-api-access-7ztrx" (OuterVolumeSpecName: "kube-api-access-7ztrx") pod "ba1bde8e-df94-47c8-a511-82b03acfb556" (UID: "ba1bde8e-df94-47c8-a511-82b03acfb556"). InnerVolumeSpecName "kube-api-access-7ztrx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:58:45 crc kubenswrapper[4580]: I0112 13:58:45.165233 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba1bde8e-df94-47c8-a511-82b03acfb556-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ba1bde8e-df94-47c8-a511-82b03acfb556" (UID: "ba1bde8e-df94-47c8-a511-82b03acfb556"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 12 13:58:45 crc kubenswrapper[4580]: I0112 13:58:45.240998 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ztrx\" (UniqueName: \"kubernetes.io/projected/ba1bde8e-df94-47c8-a511-82b03acfb556-kube-api-access-7ztrx\") on node \"crc\" DevicePath \"\"" Jan 12 13:58:45 crc kubenswrapper[4580]: I0112 13:58:45.241031 4580 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba1bde8e-df94-47c8-a511-82b03acfb556-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 12 13:58:45 crc kubenswrapper[4580]: I0112 13:58:45.241042 4580 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba1bde8e-df94-47c8-a511-82b03acfb556-utilities\") on node \"crc\" DevicePath \"\"" Jan 12 13:58:45 crc kubenswrapper[4580]: I0112 13:58:45.504918 4580 generic.go:334] "Generic (PLEG): container finished" podID="ba1bde8e-df94-47c8-a511-82b03acfb556" containerID="2c1d5c246a5af653e6e89e129c37498430501094eae15fdc5e70c1e0f9d62d8a" exitCode=0 Jan 12 13:58:45 crc kubenswrapper[4580]: I0112 13:58:45.505119 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8vbfd" event={"ID":"ba1bde8e-df94-47c8-a511-82b03acfb556","Type":"ContainerDied","Data":"2c1d5c246a5af653e6e89e129c37498430501094eae15fdc5e70c1e0f9d62d8a"} Jan 12 13:58:45 crc kubenswrapper[4580]: I0112 13:58:45.505192 4580 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-8vbfd" event={"ID":"ba1bde8e-df94-47c8-a511-82b03acfb556","Type":"ContainerDied","Data":"7250e2aae850a82c5f06df8c138e375079457c4b9cc60161bb04334b1b244dc4"} Jan 12 13:58:45 crc kubenswrapper[4580]: I0112 13:58:45.505222 4580 scope.go:117] "RemoveContainer" containerID="2c1d5c246a5af653e6e89e129c37498430501094eae15fdc5e70c1e0f9d62d8a" Jan 12 13:58:45 crc kubenswrapper[4580]: I0112 13:58:45.505308 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8vbfd" Jan 12 13:58:45 crc kubenswrapper[4580]: I0112 13:58:45.547769 4580 scope.go:117] "RemoveContainer" containerID="e354f3b4638ce4d739a546e7525bd20409416355fdaebfd8658208b9693e1cd0" Jan 12 13:58:45 crc kubenswrapper[4580]: I0112 13:58:45.548054 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8vbfd"] Jan 12 13:58:45 crc kubenswrapper[4580]: I0112 13:58:45.559831 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8vbfd"] Jan 12 13:58:45 crc kubenswrapper[4580]: I0112 13:58:45.602800 4580 scope.go:117] "RemoveContainer" containerID="badb0c8338d38ef8e5d3516b54a249d2c5a5ed6b1ad77114d830422e9c6ae29d" Jan 12 13:58:45 crc kubenswrapper[4580]: I0112 13:58:45.629312 4580 scope.go:117] "RemoveContainer" containerID="2c1d5c246a5af653e6e89e129c37498430501094eae15fdc5e70c1e0f9d62d8a" Jan 12 13:58:45 crc kubenswrapper[4580]: E0112 13:58:45.632613 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c1d5c246a5af653e6e89e129c37498430501094eae15fdc5e70c1e0f9d62d8a\": container with ID starting with 2c1d5c246a5af653e6e89e129c37498430501094eae15fdc5e70c1e0f9d62d8a not found: ID does not exist" containerID="2c1d5c246a5af653e6e89e129c37498430501094eae15fdc5e70c1e0f9d62d8a" Jan 12 13:58:45 crc kubenswrapper[4580]: I0112 13:58:45.632651 4580 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c1d5c246a5af653e6e89e129c37498430501094eae15fdc5e70c1e0f9d62d8a"} err="failed to get container status \"2c1d5c246a5af653e6e89e129c37498430501094eae15fdc5e70c1e0f9d62d8a\": rpc error: code = NotFound desc = could not find container \"2c1d5c246a5af653e6e89e129c37498430501094eae15fdc5e70c1e0f9d62d8a\": container with ID starting with 2c1d5c246a5af653e6e89e129c37498430501094eae15fdc5e70c1e0f9d62d8a not found: ID does not exist" Jan 12 13:58:45 crc kubenswrapper[4580]: I0112 13:58:45.632682 4580 scope.go:117] "RemoveContainer" containerID="e354f3b4638ce4d739a546e7525bd20409416355fdaebfd8658208b9693e1cd0" Jan 12 13:58:45 crc kubenswrapper[4580]: E0112 13:58:45.633995 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e354f3b4638ce4d739a546e7525bd20409416355fdaebfd8658208b9693e1cd0\": container with ID starting with e354f3b4638ce4d739a546e7525bd20409416355fdaebfd8658208b9693e1cd0 not found: ID does not exist" containerID="e354f3b4638ce4d739a546e7525bd20409416355fdaebfd8658208b9693e1cd0" Jan 12 13:58:45 crc kubenswrapper[4580]: I0112 13:58:45.634040 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e354f3b4638ce4d739a546e7525bd20409416355fdaebfd8658208b9693e1cd0"} err="failed to get container status \"e354f3b4638ce4d739a546e7525bd20409416355fdaebfd8658208b9693e1cd0\": rpc error: code = NotFound desc = could not find container \"e354f3b4638ce4d739a546e7525bd20409416355fdaebfd8658208b9693e1cd0\": container with ID starting with e354f3b4638ce4d739a546e7525bd20409416355fdaebfd8658208b9693e1cd0 not found: ID does not exist" Jan 12 13:58:45 crc kubenswrapper[4580]: I0112 13:58:45.634071 4580 scope.go:117] "RemoveContainer" containerID="badb0c8338d38ef8e5d3516b54a249d2c5a5ed6b1ad77114d830422e9c6ae29d" Jan 12 13:58:45 crc kubenswrapper[4580]: E0112 
13:58:45.634350 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"badb0c8338d38ef8e5d3516b54a249d2c5a5ed6b1ad77114d830422e9c6ae29d\": container with ID starting with badb0c8338d38ef8e5d3516b54a249d2c5a5ed6b1ad77114d830422e9c6ae29d not found: ID does not exist" containerID="badb0c8338d38ef8e5d3516b54a249d2c5a5ed6b1ad77114d830422e9c6ae29d" Jan 12 13:58:45 crc kubenswrapper[4580]: I0112 13:58:45.634375 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"badb0c8338d38ef8e5d3516b54a249d2c5a5ed6b1ad77114d830422e9c6ae29d"} err="failed to get container status \"badb0c8338d38ef8e5d3516b54a249d2c5a5ed6b1ad77114d830422e9c6ae29d\": rpc error: code = NotFound desc = could not find container \"badb0c8338d38ef8e5d3516b54a249d2c5a5ed6b1ad77114d830422e9c6ae29d\": container with ID starting with badb0c8338d38ef8e5d3516b54a249d2c5a5ed6b1ad77114d830422e9c6ae29d not found: ID does not exist" Jan 12 13:58:45 crc kubenswrapper[4580]: I0112 13:58:45.882976 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w5ftc"] Jan 12 13:58:46 crc kubenswrapper[4580]: I0112 13:58:46.515367 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-w5ftc" podUID="78a0b29c-8dcb-4b0c-bd9a-28abc2528221" containerName="registry-server" containerID="cri-o://dc5f38c603f2d8a423edf31a21c0827ddfa20bf29fb5d3a89512c013d6e6e4d5" gracePeriod=2 Jan 12 13:58:46 crc kubenswrapper[4580]: I0112 13:58:46.934892 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-w5ftc" Jan 12 13:58:46 crc kubenswrapper[4580]: I0112 13:58:46.962017 4580 patch_prober.go:28] interesting pod/machine-config-daemon-hdz6l container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 12 13:58:46 crc kubenswrapper[4580]: I0112 13:58:46.962073 4580 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 12 13:58:46 crc kubenswrapper[4580]: I0112 13:58:46.962148 4580 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" Jan 12 13:58:46 crc kubenswrapper[4580]: I0112 13:58:46.963433 4580 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"094e6fc847e202ee61872ce24e3a26d7ba32df37f59d98679a68486511a55fc9"} pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 12 13:58:46 crc kubenswrapper[4580]: I0112 13:58:46.963505 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" containerName="machine-config-daemon" containerID="cri-o://094e6fc847e202ee61872ce24e3a26d7ba32df37f59d98679a68486511a55fc9" gracePeriod=600 Jan 12 13:58:46 crc kubenswrapper[4580]: I0112 13:58:46.979392 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-bkhzv\" (UniqueName: \"kubernetes.io/projected/78a0b29c-8dcb-4b0c-bd9a-28abc2528221-kube-api-access-bkhzv\") pod \"78a0b29c-8dcb-4b0c-bd9a-28abc2528221\" (UID: \"78a0b29c-8dcb-4b0c-bd9a-28abc2528221\") " Jan 12 13:58:46 crc kubenswrapper[4580]: I0112 13:58:46.980286 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78a0b29c-8dcb-4b0c-bd9a-28abc2528221-utilities\") pod \"78a0b29c-8dcb-4b0c-bd9a-28abc2528221\" (UID: \"78a0b29c-8dcb-4b0c-bd9a-28abc2528221\") " Jan 12 13:58:46 crc kubenswrapper[4580]: I0112 13:58:46.980516 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78a0b29c-8dcb-4b0c-bd9a-28abc2528221-catalog-content\") pod \"78a0b29c-8dcb-4b0c-bd9a-28abc2528221\" (UID: \"78a0b29c-8dcb-4b0c-bd9a-28abc2528221\") " Jan 12 13:58:46 crc kubenswrapper[4580]: I0112 13:58:46.986690 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78a0b29c-8dcb-4b0c-bd9a-28abc2528221-utilities" (OuterVolumeSpecName: "utilities") pod "78a0b29c-8dcb-4b0c-bd9a-28abc2528221" (UID: "78a0b29c-8dcb-4b0c-bd9a-28abc2528221"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 12 13:58:47 crc kubenswrapper[4580]: I0112 13:58:47.007325 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78a0b29c-8dcb-4b0c-bd9a-28abc2528221-kube-api-access-bkhzv" (OuterVolumeSpecName: "kube-api-access-bkhzv") pod "78a0b29c-8dcb-4b0c-bd9a-28abc2528221" (UID: "78a0b29c-8dcb-4b0c-bd9a-28abc2528221"). InnerVolumeSpecName "kube-api-access-bkhzv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:58:47 crc kubenswrapper[4580]: I0112 13:58:47.042942 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78a0b29c-8dcb-4b0c-bd9a-28abc2528221-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "78a0b29c-8dcb-4b0c-bd9a-28abc2528221" (UID: "78a0b29c-8dcb-4b0c-bd9a-28abc2528221"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 12 13:58:47 crc kubenswrapper[4580]: I0112 13:58:47.082020 4580 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78a0b29c-8dcb-4b0c-bd9a-28abc2528221-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 12 13:58:47 crc kubenswrapper[4580]: I0112 13:58:47.082053 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bkhzv\" (UniqueName: \"kubernetes.io/projected/78a0b29c-8dcb-4b0c-bd9a-28abc2528221-kube-api-access-bkhzv\") on node \"crc\" DevicePath \"\"" Jan 12 13:58:47 crc kubenswrapper[4580]: I0112 13:58:47.082066 4580 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78a0b29c-8dcb-4b0c-bd9a-28abc2528221-utilities\") on node \"crc\" DevicePath \"\"" Jan 12 13:58:47 crc kubenswrapper[4580]: E0112 13:58:47.085657 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdz6l_openshift-machine-config-operator(aaecc77f-21ca-4f15-86e0-0dff03d2ab7b)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" Jan 12 13:58:47 crc kubenswrapper[4580]: I0112 13:58:47.291158 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba1bde8e-df94-47c8-a511-82b03acfb556" 
path="/var/lib/kubelet/pods/ba1bde8e-df94-47c8-a511-82b03acfb556/volumes" Jan 12 13:58:47 crc kubenswrapper[4580]: I0112 13:58:47.526303 4580 generic.go:334] "Generic (PLEG): container finished" podID="78a0b29c-8dcb-4b0c-bd9a-28abc2528221" containerID="dc5f38c603f2d8a423edf31a21c0827ddfa20bf29fb5d3a89512c013d6e6e4d5" exitCode=0 Jan 12 13:58:47 crc kubenswrapper[4580]: I0112 13:58:47.526379 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w5ftc" event={"ID":"78a0b29c-8dcb-4b0c-bd9a-28abc2528221","Type":"ContainerDied","Data":"dc5f38c603f2d8a423edf31a21c0827ddfa20bf29fb5d3a89512c013d6e6e4d5"} Jan 12 13:58:47 crc kubenswrapper[4580]: I0112 13:58:47.526412 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w5ftc" event={"ID":"78a0b29c-8dcb-4b0c-bd9a-28abc2528221","Type":"ContainerDied","Data":"5bee447c863a646cb43fc43479dcbaa8a15c94ce4377d06f4ba4cb3f2fee39ac"} Jan 12 13:58:47 crc kubenswrapper[4580]: I0112 13:58:47.526431 4580 scope.go:117] "RemoveContainer" containerID="dc5f38c603f2d8a423edf31a21c0827ddfa20bf29fb5d3a89512c013d6e6e4d5" Jan 12 13:58:47 crc kubenswrapper[4580]: I0112 13:58:47.526585 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-w5ftc" Jan 12 13:58:47 crc kubenswrapper[4580]: I0112 13:58:47.532547 4580 generic.go:334] "Generic (PLEG): container finished" podID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" containerID="094e6fc847e202ee61872ce24e3a26d7ba32df37f59d98679a68486511a55fc9" exitCode=0 Jan 12 13:58:47 crc kubenswrapper[4580]: I0112 13:58:47.532638 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" event={"ID":"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b","Type":"ContainerDied","Data":"094e6fc847e202ee61872ce24e3a26d7ba32df37f59d98679a68486511a55fc9"} Jan 12 13:58:47 crc kubenswrapper[4580]: I0112 13:58:47.533645 4580 scope.go:117] "RemoveContainer" containerID="094e6fc847e202ee61872ce24e3a26d7ba32df37f59d98679a68486511a55fc9" Jan 12 13:58:47 crc kubenswrapper[4580]: E0112 13:58:47.533951 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdz6l_openshift-machine-config-operator(aaecc77f-21ca-4f15-86e0-0dff03d2ab7b)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" Jan 12 13:58:47 crc kubenswrapper[4580]: I0112 13:58:47.548270 4580 scope.go:117] "RemoveContainer" containerID="57d225011c9eb674f7e04f7e2b91c9669626a4702f7ed2e299cb9e281598bdc0" Jan 12 13:58:47 crc kubenswrapper[4580]: I0112 13:58:47.576145 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w5ftc"] Jan 12 13:58:47 crc kubenswrapper[4580]: I0112 13:58:47.582882 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-w5ftc"] Jan 12 13:58:47 crc kubenswrapper[4580]: I0112 13:58:47.617071 4580 scope.go:117] "RemoveContainer" 
containerID="5fb434567863163011b295062d8db77f322539fd7455491be716a0a0c204476e" Jan 12 13:58:47 crc kubenswrapper[4580]: I0112 13:58:47.647339 4580 scope.go:117] "RemoveContainer" containerID="dc5f38c603f2d8a423edf31a21c0827ddfa20bf29fb5d3a89512c013d6e6e4d5" Jan 12 13:58:47 crc kubenswrapper[4580]: E0112 13:58:47.655361 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc5f38c603f2d8a423edf31a21c0827ddfa20bf29fb5d3a89512c013d6e6e4d5\": container with ID starting with dc5f38c603f2d8a423edf31a21c0827ddfa20bf29fb5d3a89512c013d6e6e4d5 not found: ID does not exist" containerID="dc5f38c603f2d8a423edf31a21c0827ddfa20bf29fb5d3a89512c013d6e6e4d5" Jan 12 13:58:47 crc kubenswrapper[4580]: I0112 13:58:47.655423 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc5f38c603f2d8a423edf31a21c0827ddfa20bf29fb5d3a89512c013d6e6e4d5"} err="failed to get container status \"dc5f38c603f2d8a423edf31a21c0827ddfa20bf29fb5d3a89512c013d6e6e4d5\": rpc error: code = NotFound desc = could not find container \"dc5f38c603f2d8a423edf31a21c0827ddfa20bf29fb5d3a89512c013d6e6e4d5\": container with ID starting with dc5f38c603f2d8a423edf31a21c0827ddfa20bf29fb5d3a89512c013d6e6e4d5 not found: ID does not exist" Jan 12 13:58:47 crc kubenswrapper[4580]: I0112 13:58:47.655453 4580 scope.go:117] "RemoveContainer" containerID="57d225011c9eb674f7e04f7e2b91c9669626a4702f7ed2e299cb9e281598bdc0" Jan 12 13:58:47 crc kubenswrapper[4580]: E0112 13:58:47.660314 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57d225011c9eb674f7e04f7e2b91c9669626a4702f7ed2e299cb9e281598bdc0\": container with ID starting with 57d225011c9eb674f7e04f7e2b91c9669626a4702f7ed2e299cb9e281598bdc0 not found: ID does not exist" containerID="57d225011c9eb674f7e04f7e2b91c9669626a4702f7ed2e299cb9e281598bdc0" Jan 12 13:58:47 crc 
kubenswrapper[4580]: I0112 13:58:47.660340 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57d225011c9eb674f7e04f7e2b91c9669626a4702f7ed2e299cb9e281598bdc0"} err="failed to get container status \"57d225011c9eb674f7e04f7e2b91c9669626a4702f7ed2e299cb9e281598bdc0\": rpc error: code = NotFound desc = could not find container \"57d225011c9eb674f7e04f7e2b91c9669626a4702f7ed2e299cb9e281598bdc0\": container with ID starting with 57d225011c9eb674f7e04f7e2b91c9669626a4702f7ed2e299cb9e281598bdc0 not found: ID does not exist" Jan 12 13:58:47 crc kubenswrapper[4580]: I0112 13:58:47.660355 4580 scope.go:117] "RemoveContainer" containerID="5fb434567863163011b295062d8db77f322539fd7455491be716a0a0c204476e" Jan 12 13:58:47 crc kubenswrapper[4580]: E0112 13:58:47.662409 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fb434567863163011b295062d8db77f322539fd7455491be716a0a0c204476e\": container with ID starting with 5fb434567863163011b295062d8db77f322539fd7455491be716a0a0c204476e not found: ID does not exist" containerID="5fb434567863163011b295062d8db77f322539fd7455491be716a0a0c204476e" Jan 12 13:58:47 crc kubenswrapper[4580]: I0112 13:58:47.662444 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fb434567863163011b295062d8db77f322539fd7455491be716a0a0c204476e"} err="failed to get container status \"5fb434567863163011b295062d8db77f322539fd7455491be716a0a0c204476e\": rpc error: code = NotFound desc = could not find container \"5fb434567863163011b295062d8db77f322539fd7455491be716a0a0c204476e\": container with ID starting with 5fb434567863163011b295062d8db77f322539fd7455491be716a0a0c204476e not found: ID does not exist" Jan 12 13:58:47 crc kubenswrapper[4580]: I0112 13:58:47.662472 4580 scope.go:117] "RemoveContainer" containerID="f056eb17695b732b04c0728c626e050e3ff330dfd43e35dfe03fa9c3f1091798" Jan 12 
13:58:48 crc kubenswrapper[4580]: I0112 13:58:48.279871 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_0c2b68c0-cf75-4b38-b7f5-c58b9f52e818/memcached/0.log" Jan 12 13:58:48 crc kubenswrapper[4580]: I0112 13:58:48.361510 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5c66d9fb7c-tgbgc_d56cc382-ea8e-4cea-829a-80335a2b71c9/neutron-api/0.log" Jan 12 13:58:48 crc kubenswrapper[4580]: I0112 13:58:48.405251 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5c66d9fb7c-tgbgc_d56cc382-ea8e-4cea-829a-80335a2b71c9/neutron-httpd/0.log" Jan 12 13:58:48 crc kubenswrapper[4580]: I0112 13:58:48.425374 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-sds6t_6824de1f-1f07-45a9-b65d-6d1aadc863db/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 12 13:58:48 crc kubenswrapper[4580]: I0112 13:58:48.585285 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_af33dae1-afd6-4b08-a507-64373650c025/nova-api-log/0.log" Jan 12 13:58:48 crc kubenswrapper[4580]: I0112 13:58:48.897015 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_af33dae1-afd6-4b08-a507-64373650c025/nova-api-api/0.log" Jan 12 13:58:48 crc kubenswrapper[4580]: I0112 13:58:48.994800 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_621c9246-ea68-42b4-b799-961af70ca4f5/nova-cell0-conductor-conductor/0.log" Jan 12 13:58:49 crc kubenswrapper[4580]: I0112 13:58:49.134274 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_5473becf-161f-49fe-86c0-079d4a9d80dc/nova-cell1-conductor-conductor/0.log" Jan 12 13:58:49 crc kubenswrapper[4580]: I0112 13:58:49.206879 4580 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell1-novncproxy-0_8250a29f-e3f4-4a06-bda0-1bcd2cb9bc9f/nova-cell1-novncproxy-novncproxy/0.log" Jan 12 13:58:49 crc kubenswrapper[4580]: I0112 13:58:49.252187 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-pdcpd_2b14f1aa-0c20-4db8-9a42-8abf7baf0140/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 12 13:58:49 crc kubenswrapper[4580]: I0112 13:58:49.294912 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78a0b29c-8dcb-4b0c-bd9a-28abc2528221" path="/var/lib/kubelet/pods/78a0b29c-8dcb-4b0c-bd9a-28abc2528221/volumes" Jan 12 13:58:49 crc kubenswrapper[4580]: I0112 13:58:49.328112 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_35b40e0a-79b7-4ca2-8aa2-f6de40c60088/nova-metadata-log/0.log" Jan 12 13:58:49 crc kubenswrapper[4580]: I0112 13:58:49.985475 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_35b40e0a-79b7-4ca2-8aa2-f6de40c60088/nova-metadata-metadata/0.log" Jan 12 13:58:50 crc kubenswrapper[4580]: I0112 13:58:50.120815 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_2a896fc4-1b0f-4186-a168-437fd8a099ea/nova-scheduler-scheduler/0.log" Jan 12 13:58:50 crc kubenswrapper[4580]: I0112 13:58:50.148367 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_29452d40-93df-4c9f-9d79-70fbf3907de1/galera/0.log" Jan 12 13:58:50 crc kubenswrapper[4580]: I0112 13:58:50.159803 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_29452d40-93df-4c9f-9d79-70fbf3907de1/mysql-bootstrap/0.log" Jan 12 13:58:50 crc kubenswrapper[4580]: I0112 13:58:50.182655 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_2ceae97e-0cf6-4019-90ba-931df3f6dbed/galera/0.log" Jan 12 13:58:50 crc kubenswrapper[4580]: I0112 
13:58:50.195005 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_2ceae97e-0cf6-4019-90ba-931df3f6dbed/mysql-bootstrap/0.log" Jan 12 13:58:50 crc kubenswrapper[4580]: I0112 13:58:50.201661 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_d04e360b-50ce-4cb6-9168-b7592de2d83e/openstackclient/0.log" Jan 12 13:58:50 crc kubenswrapper[4580]: I0112 13:58:50.211708 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-24mw4_8791a7b2-1c8a-4551-94d2-379d8a7aa153/openstack-network-exporter/0.log" Jan 12 13:58:50 crc kubenswrapper[4580]: I0112 13:58:50.224684 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-66wld_b20197ec-909c-4343-a0ed-e99b88ea6f83/ovsdb-server/0.log" Jan 12 13:58:50 crc kubenswrapper[4580]: I0112 13:58:50.237723 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-66wld_b20197ec-909c-4343-a0ed-e99b88ea6f83/ovs-vswitchd/0.log" Jan 12 13:58:50 crc kubenswrapper[4580]: I0112 13:58:50.243451 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-66wld_b20197ec-909c-4343-a0ed-e99b88ea6f83/ovsdb-server-init/0.log" Jan 12 13:58:50 crc kubenswrapper[4580]: I0112 13:58:50.256209 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-tbpzb_29dabf99-ffd5-4d31-b9e5-b10e192f239d/ovn-controller/0.log" Jan 12 13:58:50 crc kubenswrapper[4580]: I0112 13:58:50.289480 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-74485_06c2f69b-a49e-42fb-9532-837b04bdff07/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 12 13:58:50 crc kubenswrapper[4580]: I0112 13:58:50.303820 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_11cab738-4c7c-4949-9a8c-50b8c1bca314/ovn-northd/0.log" Jan 12 13:58:50 crc 
kubenswrapper[4580]: I0112 13:58:50.308771 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_11cab738-4c7c-4949-9a8c-50b8c1bca314/openstack-network-exporter/0.log" Jan 12 13:58:50 crc kubenswrapper[4580]: I0112 13:58:50.319749 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_396e4fc0-cb2e-4543-b1ae-d61eec6a365a/ovsdbserver-nb/0.log" Jan 12 13:58:50 crc kubenswrapper[4580]: I0112 13:58:50.324589 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_396e4fc0-cb2e-4543-b1ae-d61eec6a365a/openstack-network-exporter/0.log" Jan 12 13:58:50 crc kubenswrapper[4580]: I0112 13:58:50.335635 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_a6c58bf4-8891-45e6-9be6-a3176eefbc14/ovsdbserver-sb/0.log" Jan 12 13:58:50 crc kubenswrapper[4580]: I0112 13:58:50.342251 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_a6c58bf4-8891-45e6-9be6-a3176eefbc14/openstack-network-exporter/0.log" Jan 12 13:58:50 crc kubenswrapper[4580]: I0112 13:58:50.398942 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7ffb74c678-h5ddl_89dcb711-9d18-46e9-9f17-280f0f4c0e1a/placement-log/0.log" Jan 12 13:58:50 crc kubenswrapper[4580]: I0112 13:58:50.430063 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7ffb74c678-h5ddl_89dcb711-9d18-46e9-9f17-280f0f4c0e1a/placement-api/0.log" Jan 12 13:58:50 crc kubenswrapper[4580]: I0112 13:58:50.447739 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_4c7dd413-5eac-4da3-ba06-0917a412956d/rabbitmq/0.log" Jan 12 13:58:50 crc kubenswrapper[4580]: I0112 13:58:50.451167 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_4c7dd413-5eac-4da3-ba06-0917a412956d/setup-container/0.log" Jan 12 13:58:50 crc kubenswrapper[4580]: 
I0112 13:58:50.470377 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_45d6b817-38ed-4b91-b375-d0b358eaab0b/rabbitmq/0.log" Jan 12 13:58:50 crc kubenswrapper[4580]: I0112 13:58:50.473480 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_45d6b817-38ed-4b91-b375-d0b358eaab0b/setup-container/0.log" Jan 12 13:58:50 crc kubenswrapper[4580]: I0112 13:58:50.492553 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-v7cx7_2e6e07b6-c923-4d65-8bda-8fb27915bb72/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 12 13:58:50 crc kubenswrapper[4580]: I0112 13:58:50.500493 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-xkg7q_007f6af6-a125-443f-a2ff-1b1322aefca5/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 12 13:58:50 crc kubenswrapper[4580]: I0112 13:58:50.518719 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-zccg8_16ecca65-5485-450e-8a2b-06f5e3558fc6/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 12 13:58:50 crc kubenswrapper[4580]: I0112 13:58:50.531386 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-nbqdl_0434e0b6-16ec-4821-b1b0-c823fc51a965/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 12 13:58:50 crc kubenswrapper[4580]: I0112 13:58:50.555395 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-9cvjw_212e2cae-eea9-4f9c-a1f0-87708f00ab9a/ssh-known-hosts-edpm-deployment/0.log" Jan 12 13:58:50 crc kubenswrapper[4580]: I0112 13:58:50.641166 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6f676d8c57-qq454_8d448ad1-2ef8-48cd-8e3c-3e81e82da286/proxy-httpd/0.log" Jan 12 13:58:50 crc 
kubenswrapper[4580]: I0112 13:58:50.655650 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6f676d8c57-qq454_8d448ad1-2ef8-48cd-8e3c-3e81e82da286/proxy-server/0.log" Jan 12 13:58:50 crc kubenswrapper[4580]: I0112 13:58:50.662530 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-cplpv_91a2e8de-56e6-41e5-a8fa-a576e8970ebd/swift-ring-rebalance/0.log" Jan 12 13:58:50 crc kubenswrapper[4580]: I0112 13:58:50.682890 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_fb14d02e-b9af-4072-a2bd-2c2763d29755/account-server/0.log" Jan 12 13:58:50 crc kubenswrapper[4580]: I0112 13:58:50.704801 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_fb14d02e-b9af-4072-a2bd-2c2763d29755/account-replicator/0.log" Jan 12 13:58:50 crc kubenswrapper[4580]: I0112 13:58:50.709268 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_fb14d02e-b9af-4072-a2bd-2c2763d29755/account-auditor/0.log" Jan 12 13:58:50 crc kubenswrapper[4580]: I0112 13:58:50.715673 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_fb14d02e-b9af-4072-a2bd-2c2763d29755/account-reaper/0.log" Jan 12 13:58:50 crc kubenswrapper[4580]: I0112 13:58:50.723430 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_fb14d02e-b9af-4072-a2bd-2c2763d29755/container-server/0.log" Jan 12 13:58:50 crc kubenswrapper[4580]: I0112 13:58:50.747758 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_fb14d02e-b9af-4072-a2bd-2c2763d29755/container-replicator/0.log" Jan 12 13:58:50 crc kubenswrapper[4580]: I0112 13:58:50.759977 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_fb14d02e-b9af-4072-a2bd-2c2763d29755/container-auditor/0.log" Jan 12 13:58:50 crc kubenswrapper[4580]: I0112 13:58:50.770715 4580 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_fb14d02e-b9af-4072-a2bd-2c2763d29755/container-updater/0.log" Jan 12 13:58:50 crc kubenswrapper[4580]: I0112 13:58:50.779862 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_fb14d02e-b9af-4072-a2bd-2c2763d29755/object-server/0.log" Jan 12 13:58:50 crc kubenswrapper[4580]: I0112 13:58:50.800327 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_fb14d02e-b9af-4072-a2bd-2c2763d29755/object-replicator/0.log" Jan 12 13:58:50 crc kubenswrapper[4580]: I0112 13:58:50.812992 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_fb14d02e-b9af-4072-a2bd-2c2763d29755/object-auditor/0.log" Jan 12 13:58:50 crc kubenswrapper[4580]: I0112 13:58:50.820665 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_fb14d02e-b9af-4072-a2bd-2c2763d29755/object-updater/0.log" Jan 12 13:58:50 crc kubenswrapper[4580]: I0112 13:58:50.827852 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_fb14d02e-b9af-4072-a2bd-2c2763d29755/object-expirer/0.log" Jan 12 13:58:50 crc kubenswrapper[4580]: I0112 13:58:50.834810 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_fb14d02e-b9af-4072-a2bd-2c2763d29755/rsync/0.log" Jan 12 13:58:50 crc kubenswrapper[4580]: I0112 13:58:50.841357 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_fb14d02e-b9af-4072-a2bd-2c2763d29755/swift-recon-cron/0.log" Jan 12 13:58:50 crc kubenswrapper[4580]: I0112 13:58:50.899675 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-kzrnv_1e5c1e6d-1fc0-4199-ae0d-67c093f94192/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 12 13:58:50 crc kubenswrapper[4580]: I0112 13:58:50.943074 4580 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_8e031ef3-1afa-438b-8f95-cd63e4d5eb5a/tempest-tests-tempest-tests-runner/0.log" Jan 12 13:58:50 crc kubenswrapper[4580]: I0112 13:58:50.949840 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_be1a4134-b582-42ee-b8d3-145911d7bdec/test-operator-logs-container/0.log" Jan 12 13:58:50 crc kubenswrapper[4580]: I0112 13:58:50.965727 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-5hwt7_553cda0e-1691-4748-8a47-d34d8600ea2e/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 12 13:58:53 crc kubenswrapper[4580]: I0112 13:58:53.312441 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-rxqqw_54899288-3291-42c0-969e-f22dab071c51/controller/0.log" Jan 12 13:58:53 crc kubenswrapper[4580]: I0112 13:58:53.318888 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-rxqqw_54899288-3291-42c0-969e-f22dab071c51/kube-rbac-proxy/0.log" Jan 12 13:58:53 crc kubenswrapper[4580]: I0112 13:58:53.340118 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2n6tl_68e356d2-e51a-494b-a2cc-2c8491e4d8c8/controller/0.log" Jan 12 13:58:54 crc kubenswrapper[4580]: I0112 13:58:54.748729 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2n6tl_68e356d2-e51a-494b-a2cc-2c8491e4d8c8/frr/0.log" Jan 12 13:58:54 crc kubenswrapper[4580]: I0112 13:58:54.758024 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2n6tl_68e356d2-e51a-494b-a2cc-2c8491e4d8c8/reloader/0.log" Jan 12 13:58:54 crc kubenswrapper[4580]: I0112 13:58:54.769709 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2n6tl_68e356d2-e51a-494b-a2cc-2c8491e4d8c8/frr-metrics/0.log" Jan 
12 13:58:54 crc kubenswrapper[4580]: I0112 13:58:54.775693 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2n6tl_68e356d2-e51a-494b-a2cc-2c8491e4d8c8/kube-rbac-proxy/0.log" Jan 12 13:58:54 crc kubenswrapper[4580]: I0112 13:58:54.781576 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2n6tl_68e356d2-e51a-494b-a2cc-2c8491e4d8c8/kube-rbac-proxy-frr/0.log" Jan 12 13:58:54 crc kubenswrapper[4580]: I0112 13:58:54.787289 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2n6tl_68e356d2-e51a-494b-a2cc-2c8491e4d8c8/cp-frr-files/0.log" Jan 12 13:58:54 crc kubenswrapper[4580]: I0112 13:58:54.793854 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2n6tl_68e356d2-e51a-494b-a2cc-2c8491e4d8c8/cp-reloader/0.log" Jan 12 13:58:54 crc kubenswrapper[4580]: I0112 13:58:54.799000 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2n6tl_68e356d2-e51a-494b-a2cc-2c8491e4d8c8/cp-metrics/0.log" Jan 12 13:58:54 crc kubenswrapper[4580]: I0112 13:58:54.806166 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7784b6fcf-pkj9q_83e9a47a-80d6-4d28-ae2d-da27e069932f/frr-k8s-webhook-server/0.log" Jan 12 13:58:54 crc kubenswrapper[4580]: I0112 13:58:54.832961 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-797bb7bf75-nrxgs_9e077054-5ff2-4f6f-a2bc-12a4b78a0c6b/manager/0.log" Jan 12 13:58:54 crc kubenswrapper[4580]: I0112 13:58:54.843844 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-66dd5b5c84-2pdlb_cc9a79e4-90bc-4e70-afac-8b20ec13504f/webhook-server/0.log" Jan 12 13:58:55 crc kubenswrapper[4580]: I0112 13:58:55.110077 4580 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_speaker-qbcvq_6c632190-70af-4fb2-97f2-3ea2cddf0302/speaker/0.log" Jan 12 13:58:55 crc kubenswrapper[4580]: I0112 13:58:55.114851 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-qbcvq_6c632190-70af-4fb2-97f2-3ea2cddf0302/kube-rbac-proxy/0.log" Jan 12 13:58:55 crc kubenswrapper[4580]: I0112 13:58:55.413128 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-6c697f55f8-69mz9_7ed21cbb-5825-4538-bfb6-74f895189d83/manager/0.log" Jan 12 13:58:55 crc kubenswrapper[4580]: I0112 13:58:55.443461 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-4b7c9_63a3c1f8-84b5-4648-9a74-bc1e980d5a57/manager/0.log" Jan 12 13:58:55 crc kubenswrapper[4580]: I0112 13:58:55.454301 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-plhvp_cbfff7ce-c184-4dee-94d5-c6ee41fc2b75/manager/0.log" Jan 12 13:58:55 crc kubenswrapper[4580]: I0112 13:58:55.465449 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e5432efd0c40cfd67b5b87e56150fca567dbd15dd757d120066b94ee44pwmns_d27f45e7-f85b-4b23-b849-8e1778cfe3df/extract/0.log" Jan 12 13:58:55 crc kubenswrapper[4580]: I0112 13:58:55.476296 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e5432efd0c40cfd67b5b87e56150fca567dbd15dd757d120066b94ee44pwmns_d27f45e7-f85b-4b23-b849-8e1778cfe3df/util/0.log" Jan 12 13:58:55 crc kubenswrapper[4580]: I0112 13:58:55.486792 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e5432efd0c40cfd67b5b87e56150fca567dbd15dd757d120066b94ee44pwmns_d27f45e7-f85b-4b23-b849-8e1778cfe3df/pull/0.log" Jan 12 13:58:55 crc kubenswrapper[4580]: I0112 13:58:55.559143 4580 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_glance-operator-controller-manager-75b858dccc-nr2g4_b3716289-2aa2-4e39-b8db-7980564c976e/manager/0.log" Jan 12 13:58:55 crc kubenswrapper[4580]: I0112 13:58:55.567277 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-6cd7bcb4bf-nvbml_8cf46bb8-ed1f-491d-90e3-1ef5ebbdfb01/manager/0.log" Jan 12 13:58:55 crc kubenswrapper[4580]: I0112 13:58:55.591275 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-75cb9467dc-r22fp_218c7ab4-85b0-4609-87e6-35d51283e5e0/manager/0.log" Jan 12 13:58:55 crc kubenswrapper[4580]: I0112 13:58:55.827394 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-2sg8z_1135f51b-1f4e-4866-bb7d-728be53f5be7/manager/0.log" Jan 12 13:58:55 crc kubenswrapper[4580]: I0112 13:58:55.837741 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-plnl2_0286f995-6c82-4417-8a67-91b5e261a211/manager/0.log" Jan 12 13:58:55 crc kubenswrapper[4580]: I0112 13:58:55.885595 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-fckr8_726a74db-a499-4c38-8258-b711bc0dc30b/manager/0.log" Jan 12 13:58:55 crc kubenswrapper[4580]: I0112 13:58:55.894604 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-6684f856f9-w2xhg_bf14de2d-3f35-4c32-905c-0a133a4fbafe/manager/0.log" Jan 12 13:58:55 crc kubenswrapper[4580]: I0112 13:58:55.921350 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-8fxxj_87188751-ba97-4f25-ba2c-70514594cb4a/manager/0.log" Jan 12 13:58:55 crc kubenswrapper[4580]: I0112 13:58:55.963363 4580 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-ckfhs_2d0f98f6-67ec-4253-a344-8aa185679126/manager/0.log" Jan 12 13:58:56 crc kubenswrapper[4580]: I0112 13:58:56.033673 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-5977959f9c-sgg8q_eb01c7cd-f8d5-414f-a9f1-cf75a7a6ac1b/manager/0.log" Jan 12 13:58:56 crc kubenswrapper[4580]: I0112 13:58:56.041611 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-nbzhm_7209eb4d-53dc-4c30-9b80-8863acbea5a6/manager/0.log" Jan 12 13:58:56 crc kubenswrapper[4580]: I0112 13:58:56.051389 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-654686dcb9z5ths_ccb61890-3cf7-45aa-974c-693f0d14c14a/manager/0.log" Jan 12 13:58:57 crc kubenswrapper[4580]: I0112 13:58:57.106651 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6659c7dc85-4p8jr_87237fc1-15cd-4dd9-bcfe-5a334d366896/manager/0.log" Jan 12 13:58:57 crc kubenswrapper[4580]: I0112 13:58:57.214616 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-5bf8b477cb-hwd8t_bca66d95-723d-4cd6-bc4c-1a0c564606f3/operator/0.log" Jan 12 13:58:57 crc kubenswrapper[4580]: I0112 13:58:57.276725 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-248h5_c0f0a657-34a2-4619-b992-64ab017e6ecb/registry-server/0.log" Jan 12 13:58:57 crc kubenswrapper[4580]: I0112 13:58:57.376551 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-cf664874d-vznwd_742c889f-d87d-4d61-82f8-2fa3ffc3d6b2/manager/0.log" Jan 12 13:58:57 crc kubenswrapper[4580]: I0112 13:58:57.396291 4580 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78c6bccb56-mggmh_f50c1909-7ba3-4d92-9e4e-2cbd2602e340/manager/0.log" Jan 12 13:58:57 crc kubenswrapper[4580]: I0112 13:58:57.410416 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-p4m8m_520c9385-c952-45a9-b1ce-2ad913758239/operator/0.log" Jan 12 13:58:57 crc kubenswrapper[4580]: I0112 13:58:57.429832 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6469d85bcb-smn7v_6a4af572-980a-4c9b-8d01-df30e894dcda/manager/0.log" Jan 12 13:58:57 crc kubenswrapper[4580]: I0112 13:58:57.475816 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-74bd5457c5-95bcj_ed127163-4a57-4b95-9dd7-4c856bd3d126/manager/0.log" Jan 12 13:58:57 crc kubenswrapper[4580]: I0112 13:58:57.483897 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-698b874cb5-4v5jb_00ccc719-ee01-4ff4-934b-6e6fbadaa57c/manager/0.log" Jan 12 13:58:57 crc kubenswrapper[4580]: I0112 13:58:57.493245 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-64cd966744-2z5v7_56a7e345-fce1-44a5-aab4-8d82293bd5ee/manager/0.log" Jan 12 13:58:58 crc kubenswrapper[4580]: I0112 13:58:58.631689 4580 generic.go:334] "Generic (PLEG): container finished" podID="36fddb1c-8245-4b66-ae13-f667016d97b3" containerID="4ed63371922829a497142e003198f50a1ddaa131f2f60906303a67d10a043862" exitCode=0 Jan 12 13:58:58 crc kubenswrapper[4580]: I0112 13:58:58.631782 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ftvbc/crc-debug-mvqgj" 
event={"ID":"36fddb1c-8245-4b66-ae13-f667016d97b3","Type":"ContainerDied","Data":"4ed63371922829a497142e003198f50a1ddaa131f2f60906303a67d10a043862"} Jan 12 13:58:59 crc kubenswrapper[4580]: I0112 13:58:59.738564 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ftvbc/crc-debug-mvqgj" Jan 12 13:58:59 crc kubenswrapper[4580]: I0112 13:58:59.743086 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/36fddb1c-8245-4b66-ae13-f667016d97b3-host\") pod \"36fddb1c-8245-4b66-ae13-f667016d97b3\" (UID: \"36fddb1c-8245-4b66-ae13-f667016d97b3\") " Jan 12 13:58:59 crc kubenswrapper[4580]: I0112 13:58:59.743173 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36fddb1c-8245-4b66-ae13-f667016d97b3-host" (OuterVolumeSpecName: "host") pod "36fddb1c-8245-4b66-ae13-f667016d97b3" (UID: "36fddb1c-8245-4b66-ae13-f667016d97b3"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 12 13:58:59 crc kubenswrapper[4580]: I0112 13:58:59.743241 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgzmq\" (UniqueName: \"kubernetes.io/projected/36fddb1c-8245-4b66-ae13-f667016d97b3-kube-api-access-xgzmq\") pod \"36fddb1c-8245-4b66-ae13-f667016d97b3\" (UID: \"36fddb1c-8245-4b66-ae13-f667016d97b3\") " Jan 12 13:58:59 crc kubenswrapper[4580]: I0112 13:58:59.743814 4580 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/36fddb1c-8245-4b66-ae13-f667016d97b3-host\") on node \"crc\" DevicePath \"\"" Jan 12 13:58:59 crc kubenswrapper[4580]: I0112 13:58:59.752345 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36fddb1c-8245-4b66-ae13-f667016d97b3-kube-api-access-xgzmq" (OuterVolumeSpecName: "kube-api-access-xgzmq") pod "36fddb1c-8245-4b66-ae13-f667016d97b3" (UID: "36fddb1c-8245-4b66-ae13-f667016d97b3"). InnerVolumeSpecName "kube-api-access-xgzmq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:58:59 crc kubenswrapper[4580]: I0112 13:58:59.770307 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-ftvbc/crc-debug-mvqgj"] Jan 12 13:58:59 crc kubenswrapper[4580]: I0112 13:58:59.776630 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-ftvbc/crc-debug-mvqgj"] Jan 12 13:58:59 crc kubenswrapper[4580]: I0112 13:58:59.845488 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgzmq\" (UniqueName: \"kubernetes.io/projected/36fddb1c-8245-4b66-ae13-f667016d97b3-kube-api-access-xgzmq\") on node \"crc\" DevicePath \"\"" Jan 12 13:59:00 crc kubenswrapper[4580]: I0112 13:59:00.652950 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ftvbc/crc-debug-mvqgj" Jan 12 13:59:00 crc kubenswrapper[4580]: I0112 13:59:00.657309 4580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5272390dcd70cebcf4ce09fef1f022de6aef257f006d55f4d12f6a1fdc9894c0" Jan 12 13:59:00 crc kubenswrapper[4580]: I0112 13:59:00.918613 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ftvbc/crc-debug-k7l6r"] Jan 12 13:59:00 crc kubenswrapper[4580]: E0112 13:59:00.919051 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36fddb1c-8245-4b66-ae13-f667016d97b3" containerName="container-00" Jan 12 13:59:00 crc kubenswrapper[4580]: I0112 13:59:00.919066 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="36fddb1c-8245-4b66-ae13-f667016d97b3" containerName="container-00" Jan 12 13:59:00 crc kubenswrapper[4580]: E0112 13:59:00.919078 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba1bde8e-df94-47c8-a511-82b03acfb556" containerName="extract-content" Jan 12 13:59:00 crc kubenswrapper[4580]: I0112 13:59:00.919085 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba1bde8e-df94-47c8-a511-82b03acfb556" containerName="extract-content" Jan 12 13:59:00 crc kubenswrapper[4580]: E0112 13:59:00.919094 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78a0b29c-8dcb-4b0c-bd9a-28abc2528221" containerName="registry-server" Jan 12 13:59:00 crc kubenswrapper[4580]: I0112 13:59:00.919116 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="78a0b29c-8dcb-4b0c-bd9a-28abc2528221" containerName="registry-server" Jan 12 13:59:00 crc kubenswrapper[4580]: E0112 13:59:00.919127 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78a0b29c-8dcb-4b0c-bd9a-28abc2528221" containerName="extract-utilities" Jan 12 13:59:00 crc kubenswrapper[4580]: I0112 13:59:00.919136 4580 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="78a0b29c-8dcb-4b0c-bd9a-28abc2528221" containerName="extract-utilities" Jan 12 13:59:00 crc kubenswrapper[4580]: E0112 13:59:00.919152 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba1bde8e-df94-47c8-a511-82b03acfb556" containerName="registry-server" Jan 12 13:59:00 crc kubenswrapper[4580]: I0112 13:59:00.919157 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba1bde8e-df94-47c8-a511-82b03acfb556" containerName="registry-server" Jan 12 13:59:00 crc kubenswrapper[4580]: E0112 13:59:00.919179 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba1bde8e-df94-47c8-a511-82b03acfb556" containerName="extract-utilities" Jan 12 13:59:00 crc kubenswrapper[4580]: I0112 13:59:00.919185 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba1bde8e-df94-47c8-a511-82b03acfb556" containerName="extract-utilities" Jan 12 13:59:00 crc kubenswrapper[4580]: E0112 13:59:00.919200 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78a0b29c-8dcb-4b0c-bd9a-28abc2528221" containerName="extract-content" Jan 12 13:59:00 crc kubenswrapper[4580]: I0112 13:59:00.919206 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="78a0b29c-8dcb-4b0c-bd9a-28abc2528221" containerName="extract-content" Jan 12 13:59:00 crc kubenswrapper[4580]: I0112 13:59:00.919399 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="78a0b29c-8dcb-4b0c-bd9a-28abc2528221" containerName="registry-server" Jan 12 13:59:00 crc kubenswrapper[4580]: I0112 13:59:00.919423 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba1bde8e-df94-47c8-a511-82b03acfb556" containerName="registry-server" Jan 12 13:59:00 crc kubenswrapper[4580]: I0112 13:59:00.919432 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="36fddb1c-8245-4b66-ae13-f667016d97b3" containerName="container-00" Jan 12 13:59:00 crc kubenswrapper[4580]: I0112 13:59:00.920150 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ftvbc/crc-debug-k7l6r" Jan 12 13:59:00 crc kubenswrapper[4580]: I0112 13:59:00.922211 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-ftvbc"/"default-dockercfg-hkbxg" Jan 12 13:59:00 crc kubenswrapper[4580]: I0112 13:59:00.970113 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv276\" (UniqueName: \"kubernetes.io/projected/b9b27046-0f28-48bb-b910-f31879c70f5d-kube-api-access-fv276\") pod \"crc-debug-k7l6r\" (UID: \"b9b27046-0f28-48bb-b910-f31879c70f5d\") " pod="openshift-must-gather-ftvbc/crc-debug-k7l6r" Jan 12 13:59:00 crc kubenswrapper[4580]: I0112 13:59:00.970437 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b9b27046-0f28-48bb-b910-f31879c70f5d-host\") pod \"crc-debug-k7l6r\" (UID: \"b9b27046-0f28-48bb-b910-f31879c70f5d\") " pod="openshift-must-gather-ftvbc/crc-debug-k7l6r" Jan 12 13:59:01 crc kubenswrapper[4580]: I0112 13:59:01.072534 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv276\" (UniqueName: \"kubernetes.io/projected/b9b27046-0f28-48bb-b910-f31879c70f5d-kube-api-access-fv276\") pod \"crc-debug-k7l6r\" (UID: \"b9b27046-0f28-48bb-b910-f31879c70f5d\") " pod="openshift-must-gather-ftvbc/crc-debug-k7l6r" Jan 12 13:59:01 crc kubenswrapper[4580]: I0112 13:59:01.072626 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b9b27046-0f28-48bb-b910-f31879c70f5d-host\") pod \"crc-debug-k7l6r\" (UID: \"b9b27046-0f28-48bb-b910-f31879c70f5d\") " pod="openshift-must-gather-ftvbc/crc-debug-k7l6r" Jan 12 13:59:01 crc kubenswrapper[4580]: I0112 13:59:01.072753 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/b9b27046-0f28-48bb-b910-f31879c70f5d-host\") pod \"crc-debug-k7l6r\" (UID: \"b9b27046-0f28-48bb-b910-f31879c70f5d\") " pod="openshift-must-gather-ftvbc/crc-debug-k7l6r" Jan 12 13:59:01 crc kubenswrapper[4580]: I0112 13:59:01.089399 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv276\" (UniqueName: \"kubernetes.io/projected/b9b27046-0f28-48bb-b910-f31879c70f5d-kube-api-access-fv276\") pod \"crc-debug-k7l6r\" (UID: \"b9b27046-0f28-48bb-b910-f31879c70f5d\") " pod="openshift-must-gather-ftvbc/crc-debug-k7l6r" Jan 12 13:59:01 crc kubenswrapper[4580]: I0112 13:59:01.238061 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ftvbc/crc-debug-k7l6r" Jan 12 13:59:01 crc kubenswrapper[4580]: I0112 13:59:01.301094 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36fddb1c-8245-4b66-ae13-f667016d97b3" path="/var/lib/kubelet/pods/36fddb1c-8245-4b66-ae13-f667016d97b3/volumes" Jan 12 13:59:01 crc kubenswrapper[4580]: I0112 13:59:01.660928 4580 generic.go:334] "Generic (PLEG): container finished" podID="b9b27046-0f28-48bb-b910-f31879c70f5d" containerID="c3cd2c264758b029bc2dcf323a88b13dd6d66de433dbfc0d498240588af200eb" exitCode=0 Jan 12 13:59:01 crc kubenswrapper[4580]: I0112 13:59:01.660978 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ftvbc/crc-debug-k7l6r" event={"ID":"b9b27046-0f28-48bb-b910-f31879c70f5d","Type":"ContainerDied","Data":"c3cd2c264758b029bc2dcf323a88b13dd6d66de433dbfc0d498240588af200eb"} Jan 12 13:59:01 crc kubenswrapper[4580]: I0112 13:59:01.661335 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ftvbc/crc-debug-k7l6r" event={"ID":"b9b27046-0f28-48bb-b910-f31879c70f5d","Type":"ContainerStarted","Data":"b3737ddf0362d96587bdf571178d45c1c48b16bf816f67f3c94cdfc502a4a4f0"} Jan 12 13:59:02 crc kubenswrapper[4580]: I0112 13:59:02.061348 4580 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-jbnkd_3cd49599-ac6f-4d9f-9d86-2f6ff90ddbf9/control-plane-machine-set-operator/0.log" Jan 12 13:59:02 crc kubenswrapper[4580]: I0112 13:59:02.072826 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-89mg9_bdbff407-68ae-456c-b67e-40d0e47fba7b/kube-rbac-proxy/0.log" Jan 12 13:59:02 crc kubenswrapper[4580]: I0112 13:59:02.081487 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-89mg9_bdbff407-68ae-456c-b67e-40d0e47fba7b/machine-api-operator/0.log" Jan 12 13:59:02 crc kubenswrapper[4580]: I0112 13:59:02.092446 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-ftvbc/crc-debug-k7l6r"] Jan 12 13:59:02 crc kubenswrapper[4580]: I0112 13:59:02.097486 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-ftvbc/crc-debug-k7l6r"] Jan 12 13:59:02 crc kubenswrapper[4580]: I0112 13:59:02.282370 4580 scope.go:117] "RemoveContainer" containerID="094e6fc847e202ee61872ce24e3a26d7ba32df37f59d98679a68486511a55fc9" Jan 12 13:59:02 crc kubenswrapper[4580]: E0112 13:59:02.282748 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdz6l_openshift-machine-config-operator(aaecc77f-21ca-4f15-86e0-0dff03d2ab7b)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" Jan 12 13:59:02 crc kubenswrapper[4580]: I0112 13:59:02.759455 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ftvbc/crc-debug-k7l6r" Jan 12 13:59:02 crc kubenswrapper[4580]: I0112 13:59:02.909507 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fv276\" (UniqueName: \"kubernetes.io/projected/b9b27046-0f28-48bb-b910-f31879c70f5d-kube-api-access-fv276\") pod \"b9b27046-0f28-48bb-b910-f31879c70f5d\" (UID: \"b9b27046-0f28-48bb-b910-f31879c70f5d\") " Jan 12 13:59:02 crc kubenswrapper[4580]: I0112 13:59:02.910336 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b9b27046-0f28-48bb-b910-f31879c70f5d-host\") pod \"b9b27046-0f28-48bb-b910-f31879c70f5d\" (UID: \"b9b27046-0f28-48bb-b910-f31879c70f5d\") " Jan 12 13:59:02 crc kubenswrapper[4580]: I0112 13:59:02.910451 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9b27046-0f28-48bb-b910-f31879c70f5d-host" (OuterVolumeSpecName: "host") pod "b9b27046-0f28-48bb-b910-f31879c70f5d" (UID: "b9b27046-0f28-48bb-b910-f31879c70f5d"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 12 13:59:02 crc kubenswrapper[4580]: I0112 13:59:02.911082 4580 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b9b27046-0f28-48bb-b910-f31879c70f5d-host\") on node \"crc\" DevicePath \"\"" Jan 12 13:59:02 crc kubenswrapper[4580]: I0112 13:59:02.928605 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9b27046-0f28-48bb-b910-f31879c70f5d-kube-api-access-fv276" (OuterVolumeSpecName: "kube-api-access-fv276") pod "b9b27046-0f28-48bb-b910-f31879c70f5d" (UID: "b9b27046-0f28-48bb-b910-f31879c70f5d"). InnerVolumeSpecName "kube-api-access-fv276". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:59:03 crc kubenswrapper[4580]: I0112 13:59:03.012378 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fv276\" (UniqueName: \"kubernetes.io/projected/b9b27046-0f28-48bb-b910-f31879c70f5d-kube-api-access-fv276\") on node \"crc\" DevicePath \"\"" Jan 12 13:59:03 crc kubenswrapper[4580]: I0112 13:59:03.271804 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ftvbc/crc-debug-fv87d"] Jan 12 13:59:03 crc kubenswrapper[4580]: E0112 13:59:03.272187 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9b27046-0f28-48bb-b910-f31879c70f5d" containerName="container-00" Jan 12 13:59:03 crc kubenswrapper[4580]: I0112 13:59:03.272201 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9b27046-0f28-48bb-b910-f31879c70f5d" containerName="container-00" Jan 12 13:59:03 crc kubenswrapper[4580]: I0112 13:59:03.272402 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9b27046-0f28-48bb-b910-f31879c70f5d" containerName="container-00" Jan 12 13:59:03 crc kubenswrapper[4580]: I0112 13:59:03.272969 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ftvbc/crc-debug-fv87d" Jan 12 13:59:03 crc kubenswrapper[4580]: I0112 13:59:03.293544 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9b27046-0f28-48bb-b910-f31879c70f5d" path="/var/lib/kubelet/pods/b9b27046-0f28-48bb-b910-f31879c70f5d/volumes" Jan 12 13:59:03 crc kubenswrapper[4580]: I0112 13:59:03.423117 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a2273c43-973b-4dd6-bab9-542937931d50-host\") pod \"crc-debug-fv87d\" (UID: \"a2273c43-973b-4dd6-bab9-542937931d50\") " pod="openshift-must-gather-ftvbc/crc-debug-fv87d" Jan 12 13:59:03 crc kubenswrapper[4580]: I0112 13:59:03.423199 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqxjh\" (UniqueName: \"kubernetes.io/projected/a2273c43-973b-4dd6-bab9-542937931d50-kube-api-access-pqxjh\") pod \"crc-debug-fv87d\" (UID: \"a2273c43-973b-4dd6-bab9-542937931d50\") " pod="openshift-must-gather-ftvbc/crc-debug-fv87d" Jan 12 13:59:03 crc kubenswrapper[4580]: I0112 13:59:03.526801 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a2273c43-973b-4dd6-bab9-542937931d50-host\") pod \"crc-debug-fv87d\" (UID: \"a2273c43-973b-4dd6-bab9-542937931d50\") " pod="openshift-must-gather-ftvbc/crc-debug-fv87d" Jan 12 13:59:03 crc kubenswrapper[4580]: I0112 13:59:03.527205 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqxjh\" (UniqueName: \"kubernetes.io/projected/a2273c43-973b-4dd6-bab9-542937931d50-kube-api-access-pqxjh\") pod \"crc-debug-fv87d\" (UID: \"a2273c43-973b-4dd6-bab9-542937931d50\") " pod="openshift-must-gather-ftvbc/crc-debug-fv87d" Jan 12 13:59:03 crc kubenswrapper[4580]: I0112 13:59:03.528371 4580 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a2273c43-973b-4dd6-bab9-542937931d50-host\") pod \"crc-debug-fv87d\" (UID: \"a2273c43-973b-4dd6-bab9-542937931d50\") " pod="openshift-must-gather-ftvbc/crc-debug-fv87d" Jan 12 13:59:03 crc kubenswrapper[4580]: I0112 13:59:03.546173 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqxjh\" (UniqueName: \"kubernetes.io/projected/a2273c43-973b-4dd6-bab9-542937931d50-kube-api-access-pqxjh\") pod \"crc-debug-fv87d\" (UID: \"a2273c43-973b-4dd6-bab9-542937931d50\") " pod="openshift-must-gather-ftvbc/crc-debug-fv87d" Jan 12 13:59:03 crc kubenswrapper[4580]: I0112 13:59:03.587414 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ftvbc/crc-debug-fv87d" Jan 12 13:59:03 crc kubenswrapper[4580]: W0112 13:59:03.614840 4580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2273c43_973b_4dd6_bab9_542937931d50.slice/crio-130e04f75baccd5914a1944b9cd4f8ba019cf924ecee2ee767c1fac683100e68 WatchSource:0}: Error finding container 130e04f75baccd5914a1944b9cd4f8ba019cf924ecee2ee767c1fac683100e68: Status 404 returned error can't find the container with id 130e04f75baccd5914a1944b9cd4f8ba019cf924ecee2ee767c1fac683100e68 Jan 12 13:59:03 crc kubenswrapper[4580]: I0112 13:59:03.683308 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ftvbc/crc-debug-fv87d" event={"ID":"a2273c43-973b-4dd6-bab9-542937931d50","Type":"ContainerStarted","Data":"130e04f75baccd5914a1944b9cd4f8ba019cf924ecee2ee767c1fac683100e68"} Jan 12 13:59:03 crc kubenswrapper[4580]: I0112 13:59:03.701075 4580 scope.go:117] "RemoveContainer" containerID="c3cd2c264758b029bc2dcf323a88b13dd6d66de433dbfc0d498240588af200eb" Jan 12 13:59:03 crc kubenswrapper[4580]: I0112 13:59:03.701117 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ftvbc/crc-debug-k7l6r" Jan 12 13:59:04 crc kubenswrapper[4580]: I0112 13:59:04.714212 4580 generic.go:334] "Generic (PLEG): container finished" podID="a2273c43-973b-4dd6-bab9-542937931d50" containerID="1c21ebcb11d4a8389486c4daff594be72a639865a6a4470066fe80a9fc19d4a8" exitCode=0 Jan 12 13:59:04 crc kubenswrapper[4580]: I0112 13:59:04.714271 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ftvbc/crc-debug-fv87d" event={"ID":"a2273c43-973b-4dd6-bab9-542937931d50","Type":"ContainerDied","Data":"1c21ebcb11d4a8389486c4daff594be72a639865a6a4470066fe80a9fc19d4a8"} Jan 12 13:59:04 crc kubenswrapper[4580]: I0112 13:59:04.747626 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-ftvbc/crc-debug-fv87d"] Jan 12 13:59:04 crc kubenswrapper[4580]: I0112 13:59:04.756351 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-ftvbc/crc-debug-fv87d"] Jan 12 13:59:05 crc kubenswrapper[4580]: I0112 13:59:05.800982 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ftvbc/crc-debug-fv87d" Jan 12 13:59:05 crc kubenswrapper[4580]: I0112 13:59:05.882148 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqxjh\" (UniqueName: \"kubernetes.io/projected/a2273c43-973b-4dd6-bab9-542937931d50-kube-api-access-pqxjh\") pod \"a2273c43-973b-4dd6-bab9-542937931d50\" (UID: \"a2273c43-973b-4dd6-bab9-542937931d50\") " Jan 12 13:59:05 crc kubenswrapper[4580]: I0112 13:59:05.882406 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a2273c43-973b-4dd6-bab9-542937931d50-host\") pod \"a2273c43-973b-4dd6-bab9-542937931d50\" (UID: \"a2273c43-973b-4dd6-bab9-542937931d50\") " Jan 12 13:59:05 crc kubenswrapper[4580]: I0112 13:59:05.882483 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2273c43-973b-4dd6-bab9-542937931d50-host" (OuterVolumeSpecName: "host") pod "a2273c43-973b-4dd6-bab9-542937931d50" (UID: "a2273c43-973b-4dd6-bab9-542937931d50"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 12 13:59:05 crc kubenswrapper[4580]: I0112 13:59:05.883056 4580 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a2273c43-973b-4dd6-bab9-542937931d50-host\") on node \"crc\" DevicePath \"\"" Jan 12 13:59:05 crc kubenswrapper[4580]: I0112 13:59:05.888354 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2273c43-973b-4dd6-bab9-542937931d50-kube-api-access-pqxjh" (OuterVolumeSpecName: "kube-api-access-pqxjh") pod "a2273c43-973b-4dd6-bab9-542937931d50" (UID: "a2273c43-973b-4dd6-bab9-542937931d50"). InnerVolumeSpecName "kube-api-access-pqxjh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 13:59:05 crc kubenswrapper[4580]: I0112 13:59:05.984731 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqxjh\" (UniqueName: \"kubernetes.io/projected/a2273c43-973b-4dd6-bab9-542937931d50-kube-api-access-pqxjh\") on node \"crc\" DevicePath \"\"" Jan 12 13:59:06 crc kubenswrapper[4580]: I0112 13:59:06.735303 4580 scope.go:117] "RemoveContainer" containerID="1c21ebcb11d4a8389486c4daff594be72a639865a6a4470066fe80a9fc19d4a8" Jan 12 13:59:06 crc kubenswrapper[4580]: I0112 13:59:06.735362 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ftvbc/crc-debug-fv87d" Jan 12 13:59:07 crc kubenswrapper[4580]: I0112 13:59:07.129331 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-c9fsw_e8e0f177-af2a-4975-a047-6d66bcd9b474/cert-manager-controller/0.log" Jan 12 13:59:07 crc kubenswrapper[4580]: I0112 13:59:07.140754 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-56nml_5ea31dc0-a9ca-4c74-b2aa-7999ef2b94f5/cert-manager-cainjector/0.log" Jan 12 13:59:07 crc kubenswrapper[4580]: I0112 13:59:07.150573 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-dkts4_56ef0925-27e4-4a8f-9a56-3e31c7176270/cert-manager-webhook/0.log" Jan 12 13:59:07 crc kubenswrapper[4580]: I0112 13:59:07.291633 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2273c43-973b-4dd6-bab9-542937931d50" path="/var/lib/kubelet/pods/a2273c43-973b-4dd6-bab9-542937931d50/volumes" Jan 12 13:59:11 crc kubenswrapper[4580]: I0112 13:59:11.612935 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-6ff7998486-sngpg_4ce8457b-77a4-4703-b3e8-2a929d02d38d/nmstate-console-plugin/0.log" Jan 12 13:59:11 crc kubenswrapper[4580]: I0112 
13:59:11.627805 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-66q7w_714937bd-e28b-4368-8f23-c141e40ea81f/nmstate-handler/0.log" Jan 12 13:59:11 crc kubenswrapper[4580]: I0112 13:59:11.639058 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f7f7578db-t97w5_a7c37982-a0fd-4f9d-950a-ec589bb9753c/nmstate-metrics/0.log" Jan 12 13:59:11 crc kubenswrapper[4580]: I0112 13:59:11.647674 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f7f7578db-t97w5_a7c37982-a0fd-4f9d-950a-ec589bb9753c/kube-rbac-proxy/0.log" Jan 12 13:59:11 crc kubenswrapper[4580]: I0112 13:59:11.664142 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-6769fb99d-p62jb_49e54acb-8939-4c86-b9a4-42741a3356ac/nmstate-operator/0.log" Jan 12 13:59:11 crc kubenswrapper[4580]: I0112 13:59:11.672137 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-f8fb84555-b4qbk_00b7df68-abb5-4b70-b6ef-1495cb7a4725/nmstate-webhook/0.log" Jan 12 13:59:16 crc kubenswrapper[4580]: I0112 13:59:16.282322 4580 scope.go:117] "RemoveContainer" containerID="094e6fc847e202ee61872ce24e3a26d7ba32df37f59d98679a68486511a55fc9" Jan 12 13:59:16 crc kubenswrapper[4580]: E0112 13:59:16.282881 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdz6l_openshift-machine-config-operator(aaecc77f-21ca-4f15-86e0-0dff03d2ab7b)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" Jan 12 13:59:21 crc kubenswrapper[4580]: I0112 13:59:21.116172 4580 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-5bddd4b946-rxqqw_54899288-3291-42c0-969e-f22dab071c51/controller/0.log" Jan 12 13:59:21 crc kubenswrapper[4580]: I0112 13:59:21.126859 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-rxqqw_54899288-3291-42c0-969e-f22dab071c51/kube-rbac-proxy/0.log" Jan 12 13:59:21 crc kubenswrapper[4580]: I0112 13:59:21.150940 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2n6tl_68e356d2-e51a-494b-a2cc-2c8491e4d8c8/controller/0.log" Jan 12 13:59:22 crc kubenswrapper[4580]: I0112 13:59:22.438013 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2n6tl_68e356d2-e51a-494b-a2cc-2c8491e4d8c8/frr/0.log" Jan 12 13:59:22 crc kubenswrapper[4580]: I0112 13:59:22.448687 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2n6tl_68e356d2-e51a-494b-a2cc-2c8491e4d8c8/reloader/0.log" Jan 12 13:59:22 crc kubenswrapper[4580]: I0112 13:59:22.453309 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2n6tl_68e356d2-e51a-494b-a2cc-2c8491e4d8c8/frr-metrics/0.log" Jan 12 13:59:22 crc kubenswrapper[4580]: I0112 13:59:22.459546 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2n6tl_68e356d2-e51a-494b-a2cc-2c8491e4d8c8/kube-rbac-proxy/0.log" Jan 12 13:59:22 crc kubenswrapper[4580]: I0112 13:59:22.466921 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2n6tl_68e356d2-e51a-494b-a2cc-2c8491e4d8c8/kube-rbac-proxy-frr/0.log" Jan 12 13:59:22 crc kubenswrapper[4580]: I0112 13:59:22.473506 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2n6tl_68e356d2-e51a-494b-a2cc-2c8491e4d8c8/cp-frr-files/0.log" Jan 12 13:59:22 crc kubenswrapper[4580]: I0112 13:59:22.478661 4580 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-2n6tl_68e356d2-e51a-494b-a2cc-2c8491e4d8c8/cp-reloader/0.log" Jan 12 13:59:22 crc kubenswrapper[4580]: I0112 13:59:22.483841 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2n6tl_68e356d2-e51a-494b-a2cc-2c8491e4d8c8/cp-metrics/0.log" Jan 12 13:59:22 crc kubenswrapper[4580]: I0112 13:59:22.495297 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7784b6fcf-pkj9q_83e9a47a-80d6-4d28-ae2d-da27e069932f/frr-k8s-webhook-server/0.log" Jan 12 13:59:22 crc kubenswrapper[4580]: I0112 13:59:22.517328 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-797bb7bf75-nrxgs_9e077054-5ff2-4f6f-a2bc-12a4b78a0c6b/manager/0.log" Jan 12 13:59:22 crc kubenswrapper[4580]: I0112 13:59:22.527197 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-66dd5b5c84-2pdlb_cc9a79e4-90bc-4e70-afac-8b20ec13504f/webhook-server/0.log" Jan 12 13:59:22 crc kubenswrapper[4580]: I0112 13:59:22.906912 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-qbcvq_6c632190-70af-4fb2-97f2-3ea2cddf0302/speaker/0.log" Jan 12 13:59:22 crc kubenswrapper[4580]: I0112 13:59:22.917837 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-qbcvq_6c632190-70af-4fb2-97f2-3ea2cddf0302/kube-rbac-proxy/0.log" Jan 12 13:59:26 crc kubenswrapper[4580]: I0112 13:59:26.064789 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4qz866_3b6ceadd-6368-43ec-9666-7dff30d5ee95/extract/0.log" Jan 12 13:59:26 crc kubenswrapper[4580]: I0112 13:59:26.074678 4580 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4qz866_3b6ceadd-6368-43ec-9666-7dff30d5ee95/util/0.log" Jan 12 13:59:26 crc kubenswrapper[4580]: I0112 13:59:26.081791 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4qz866_3b6ceadd-6368-43ec-9666-7dff30d5ee95/pull/0.log" Jan 12 13:59:26 crc kubenswrapper[4580]: I0112 13:59:26.092452 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa82mfpx_0089b37b-5f6c-4719-98a0-169570a8cfa6/extract/0.log" Jan 12 13:59:26 crc kubenswrapper[4580]: I0112 13:59:26.096202 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa82mfpx_0089b37b-5f6c-4719-98a0-169570a8cfa6/util/0.log" Jan 12 13:59:26 crc kubenswrapper[4580]: I0112 13:59:26.101937 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa82mfpx_0089b37b-5f6c-4719-98a0-169570a8cfa6/pull/0.log" Jan 12 13:59:26 crc kubenswrapper[4580]: I0112 13:59:26.431832 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-89lkz_45d72a58-4072-4c37-95c8-b4668060c64c/registry-server/0.log" Jan 12 13:59:26 crc kubenswrapper[4580]: I0112 13:59:26.436673 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-89lkz_45d72a58-4072-4c37-95c8-b4668060c64c/extract-utilities/0.log" Jan 12 13:59:26 crc kubenswrapper[4580]: I0112 13:59:26.443838 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-89lkz_45d72a58-4072-4c37-95c8-b4668060c64c/extract-content/0.log" Jan 12 13:59:26 crc kubenswrapper[4580]: I0112 13:59:26.932445 4580 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rcbw9_52363f5a-5d4c-406b-bd57-cbde5f393c2c/registry-server/0.log" Jan 12 13:59:26 crc kubenswrapper[4580]: I0112 13:59:26.938262 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rcbw9_52363f5a-5d4c-406b-bd57-cbde5f393c2c/extract-utilities/0.log" Jan 12 13:59:26 crc kubenswrapper[4580]: I0112 13:59:26.944004 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rcbw9_52363f5a-5d4c-406b-bd57-cbde5f393c2c/extract-content/0.log" Jan 12 13:59:26 crc kubenswrapper[4580]: I0112 13:59:26.955148 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-n599t_53e207fa-a98f-4554-8ed8-67ffaa6e5955/marketplace-operator/0.log" Jan 12 13:59:27 crc kubenswrapper[4580]: I0112 13:59:27.082870 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-w5lwr_5677e888-c379-4713-bcf6-e2e31288a0b6/registry-server/0.log" Jan 12 13:59:27 crc kubenswrapper[4580]: I0112 13:59:27.087698 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-w5lwr_5677e888-c379-4713-bcf6-e2e31288a0b6/extract-utilities/0.log" Jan 12 13:59:27 crc kubenswrapper[4580]: I0112 13:59:27.093075 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-w5lwr_5677e888-c379-4713-bcf6-e2e31288a0b6/extract-content/0.log" Jan 12 13:59:27 crc kubenswrapper[4580]: I0112 13:59:27.600586 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ts2q4_2ae20335-c7d3-46ef-84e6-129bc0550ab4/registry-server/0.log" Jan 12 13:59:27 crc kubenswrapper[4580]: I0112 13:59:27.606092 4580 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-ts2q4_2ae20335-c7d3-46ef-84e6-129bc0550ab4/extract-utilities/0.log" Jan 12 13:59:27 crc kubenswrapper[4580]: I0112 13:59:27.613736 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ts2q4_2ae20335-c7d3-46ef-84e6-129bc0550ab4/extract-content/0.log" Jan 12 13:59:28 crc kubenswrapper[4580]: I0112 13:59:28.281854 4580 scope.go:117] "RemoveContainer" containerID="094e6fc847e202ee61872ce24e3a26d7ba32df37f59d98679a68486511a55fc9" Jan 12 13:59:28 crc kubenswrapper[4580]: E0112 13:59:28.282278 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdz6l_openshift-machine-config-operator(aaecc77f-21ca-4f15-86e0-0dff03d2ab7b)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" Jan 12 13:59:39 crc kubenswrapper[4580]: I0112 13:59:39.285347 4580 scope.go:117] "RemoveContainer" containerID="094e6fc847e202ee61872ce24e3a26d7ba32df37f59d98679a68486511a55fc9" Jan 12 13:59:39 crc kubenswrapper[4580]: E0112 13:59:39.285946 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdz6l_openshift-machine-config-operator(aaecc77f-21ca-4f15-86e0-0dff03d2ab7b)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" Jan 12 13:59:54 crc kubenswrapper[4580]: I0112 13:59:54.282137 4580 scope.go:117] "RemoveContainer" containerID="094e6fc847e202ee61872ce24e3a26d7ba32df37f59d98679a68486511a55fc9" Jan 12 13:59:54 crc kubenswrapper[4580]: E0112 13:59:54.282916 4580 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdz6l_openshift-machine-config-operator(aaecc77f-21ca-4f15-86e0-0dff03d2ab7b)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" Jan 12 14:00:00 crc kubenswrapper[4580]: I0112 14:00:00.137248 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29470440-7jc6z"] Jan 12 14:00:00 crc kubenswrapper[4580]: E0112 14:00:00.138224 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2273c43-973b-4dd6-bab9-542937931d50" containerName="container-00" Jan 12 14:00:00 crc kubenswrapper[4580]: I0112 14:00:00.138239 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2273c43-973b-4dd6-bab9-542937931d50" containerName="container-00" Jan 12 14:00:00 crc kubenswrapper[4580]: I0112 14:00:00.138421 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2273c43-973b-4dd6-bab9-542937931d50" containerName="container-00" Jan 12 14:00:00 crc kubenswrapper[4580]: I0112 14:00:00.139074 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29470440-7jc6z" Jan 12 14:00:00 crc kubenswrapper[4580]: I0112 14:00:00.140483 4580 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 12 14:00:00 crc kubenswrapper[4580]: I0112 14:00:00.141025 4580 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 12 14:00:00 crc kubenswrapper[4580]: I0112 14:00:00.148652 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29470440-7jc6z"] Jan 12 14:00:00 crc kubenswrapper[4580]: I0112 14:00:00.222760 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2de69ee6-b6d6-452b-aa26-92e06e8525c8-secret-volume\") pod \"collect-profiles-29470440-7jc6z\" (UID: \"2de69ee6-b6d6-452b-aa26-92e06e8525c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29470440-7jc6z" Jan 12 14:00:00 crc kubenswrapper[4580]: I0112 14:00:00.222940 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2de69ee6-b6d6-452b-aa26-92e06e8525c8-config-volume\") pod \"collect-profiles-29470440-7jc6z\" (UID: \"2de69ee6-b6d6-452b-aa26-92e06e8525c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29470440-7jc6z" Jan 12 14:00:00 crc kubenswrapper[4580]: I0112 14:00:00.222990 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94x49\" (UniqueName: \"kubernetes.io/projected/2de69ee6-b6d6-452b-aa26-92e06e8525c8-kube-api-access-94x49\") pod \"collect-profiles-29470440-7jc6z\" (UID: \"2de69ee6-b6d6-452b-aa26-92e06e8525c8\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29470440-7jc6z" Jan 12 14:00:00 crc kubenswrapper[4580]: I0112 14:00:00.324538 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2de69ee6-b6d6-452b-aa26-92e06e8525c8-config-volume\") pod \"collect-profiles-29470440-7jc6z\" (UID: \"2de69ee6-b6d6-452b-aa26-92e06e8525c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29470440-7jc6z" Jan 12 14:00:00 crc kubenswrapper[4580]: I0112 14:00:00.325211 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94x49\" (UniqueName: \"kubernetes.io/projected/2de69ee6-b6d6-452b-aa26-92e06e8525c8-kube-api-access-94x49\") pod \"collect-profiles-29470440-7jc6z\" (UID: \"2de69ee6-b6d6-452b-aa26-92e06e8525c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29470440-7jc6z" Jan 12 14:00:00 crc kubenswrapper[4580]: I0112 14:00:00.325264 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2de69ee6-b6d6-452b-aa26-92e06e8525c8-secret-volume\") pod \"collect-profiles-29470440-7jc6z\" (UID: \"2de69ee6-b6d6-452b-aa26-92e06e8525c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29470440-7jc6z" Jan 12 14:00:00 crc kubenswrapper[4580]: I0112 14:00:00.325455 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2de69ee6-b6d6-452b-aa26-92e06e8525c8-config-volume\") pod \"collect-profiles-29470440-7jc6z\" (UID: \"2de69ee6-b6d6-452b-aa26-92e06e8525c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29470440-7jc6z" Jan 12 14:00:00 crc kubenswrapper[4580]: I0112 14:00:00.330904 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/2de69ee6-b6d6-452b-aa26-92e06e8525c8-secret-volume\") pod \"collect-profiles-29470440-7jc6z\" (UID: \"2de69ee6-b6d6-452b-aa26-92e06e8525c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29470440-7jc6z" Jan 12 14:00:00 crc kubenswrapper[4580]: I0112 14:00:00.340554 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94x49\" (UniqueName: \"kubernetes.io/projected/2de69ee6-b6d6-452b-aa26-92e06e8525c8-kube-api-access-94x49\") pod \"collect-profiles-29470440-7jc6z\" (UID: \"2de69ee6-b6d6-452b-aa26-92e06e8525c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29470440-7jc6z" Jan 12 14:00:00 crc kubenswrapper[4580]: I0112 14:00:00.455994 4580 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29470440-7jc6z" Jan 12 14:00:00 crc kubenswrapper[4580]: I0112 14:00:00.965608 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29470440-7jc6z"] Jan 12 14:00:01 crc kubenswrapper[4580]: I0112 14:00:01.190600 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29470440-7jc6z" event={"ID":"2de69ee6-b6d6-452b-aa26-92e06e8525c8","Type":"ContainerStarted","Data":"a5a5a3e2aa453f314b0d082ae4ec188cf716a72e19b7732a1dda2194b1de0297"} Jan 12 14:00:01 crc kubenswrapper[4580]: I0112 14:00:01.191597 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29470440-7jc6z" event={"ID":"2de69ee6-b6d6-452b-aa26-92e06e8525c8","Type":"ContainerStarted","Data":"a220dc5fade35ccfdc3123d22f26f2365915f8162810438d2361602d51a6616c"} Jan 12 14:00:01 crc kubenswrapper[4580]: I0112 14:00:01.212239 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29470440-7jc6z" 
podStartSLOduration=1.212227393 podStartE2EDuration="1.212227393s" podCreationTimestamp="2026-01-12 14:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 14:00:01.20377497 +0000 UTC m=+3200.247993659" watchObservedRunningTime="2026-01-12 14:00:01.212227393 +0000 UTC m=+3200.256446083" Jan 12 14:00:02 crc kubenswrapper[4580]: I0112 14:00:02.199186 4580 generic.go:334] "Generic (PLEG): container finished" podID="2de69ee6-b6d6-452b-aa26-92e06e8525c8" containerID="a5a5a3e2aa453f314b0d082ae4ec188cf716a72e19b7732a1dda2194b1de0297" exitCode=0 Jan 12 14:00:02 crc kubenswrapper[4580]: I0112 14:00:02.199297 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29470440-7jc6z" event={"ID":"2de69ee6-b6d6-452b-aa26-92e06e8525c8","Type":"ContainerDied","Data":"a5a5a3e2aa453f314b0d082ae4ec188cf716a72e19b7732a1dda2194b1de0297"} Jan 12 14:00:03 crc kubenswrapper[4580]: I0112 14:00:03.492264 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29470440-7jc6z" Jan 12 14:00:03 crc kubenswrapper[4580]: I0112 14:00:03.501699 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2de69ee6-b6d6-452b-aa26-92e06e8525c8-secret-volume\") pod \"2de69ee6-b6d6-452b-aa26-92e06e8525c8\" (UID: \"2de69ee6-b6d6-452b-aa26-92e06e8525c8\") " Jan 12 14:00:03 crc kubenswrapper[4580]: I0112 14:00:03.502178 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2de69ee6-b6d6-452b-aa26-92e06e8525c8-config-volume\") pod \"2de69ee6-b6d6-452b-aa26-92e06e8525c8\" (UID: \"2de69ee6-b6d6-452b-aa26-92e06e8525c8\") " Jan 12 14:00:03 crc kubenswrapper[4580]: I0112 14:00:03.502332 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94x49\" (UniqueName: \"kubernetes.io/projected/2de69ee6-b6d6-452b-aa26-92e06e8525c8-kube-api-access-94x49\") pod \"2de69ee6-b6d6-452b-aa26-92e06e8525c8\" (UID: \"2de69ee6-b6d6-452b-aa26-92e06e8525c8\") " Jan 12 14:00:03 crc kubenswrapper[4580]: I0112 14:00:03.502694 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2de69ee6-b6d6-452b-aa26-92e06e8525c8-config-volume" (OuterVolumeSpecName: "config-volume") pod "2de69ee6-b6d6-452b-aa26-92e06e8525c8" (UID: "2de69ee6-b6d6-452b-aa26-92e06e8525c8"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 12 14:00:03 crc kubenswrapper[4580]: I0112 14:00:03.503617 4580 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2de69ee6-b6d6-452b-aa26-92e06e8525c8-config-volume\") on node \"crc\" DevicePath \"\"" Jan 12 14:00:03 crc kubenswrapper[4580]: I0112 14:00:03.508884 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2de69ee6-b6d6-452b-aa26-92e06e8525c8-kube-api-access-94x49" (OuterVolumeSpecName: "kube-api-access-94x49") pod "2de69ee6-b6d6-452b-aa26-92e06e8525c8" (UID: "2de69ee6-b6d6-452b-aa26-92e06e8525c8"). InnerVolumeSpecName "kube-api-access-94x49". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 14:00:03 crc kubenswrapper[4580]: I0112 14:00:03.519213 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2de69ee6-b6d6-452b-aa26-92e06e8525c8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2de69ee6-b6d6-452b-aa26-92e06e8525c8" (UID: "2de69ee6-b6d6-452b-aa26-92e06e8525c8"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 14:00:03 crc kubenswrapper[4580]: I0112 14:00:03.605198 4580 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2de69ee6-b6d6-452b-aa26-92e06e8525c8-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 12 14:00:03 crc kubenswrapper[4580]: I0112 14:00:03.605236 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94x49\" (UniqueName: \"kubernetes.io/projected/2de69ee6-b6d6-452b-aa26-92e06e8525c8-kube-api-access-94x49\") on node \"crc\" DevicePath \"\"" Jan 12 14:00:04 crc kubenswrapper[4580]: I0112 14:00:04.217718 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29470440-7jc6z" event={"ID":"2de69ee6-b6d6-452b-aa26-92e06e8525c8","Type":"ContainerDied","Data":"a220dc5fade35ccfdc3123d22f26f2365915f8162810438d2361602d51a6616c"} Jan 12 14:00:04 crc kubenswrapper[4580]: I0112 14:00:04.217768 4580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a220dc5fade35ccfdc3123d22f26f2365915f8162810438d2361602d51a6616c" Jan 12 14:00:04 crc kubenswrapper[4580]: I0112 14:00:04.217892 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29470440-7jc6z" Jan 12 14:00:04 crc kubenswrapper[4580]: I0112 14:00:04.262482 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29470395-cvc22"] Jan 12 14:00:04 crc kubenswrapper[4580]: I0112 14:00:04.268562 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29470395-cvc22"] Jan 12 14:00:05 crc kubenswrapper[4580]: I0112 14:00:05.290503 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04066069-62ab-4458-8b6b-620f8bc9ed91" path="/var/lib/kubelet/pods/04066069-62ab-4458-8b6b-620f8bc9ed91/volumes" Jan 12 14:00:08 crc kubenswrapper[4580]: I0112 14:00:08.281562 4580 scope.go:117] "RemoveContainer" containerID="094e6fc847e202ee61872ce24e3a26d7ba32df37f59d98679a68486511a55fc9" Jan 12 14:00:08 crc kubenswrapper[4580]: E0112 14:00:08.282703 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdz6l_openshift-machine-config-operator(aaecc77f-21ca-4f15-86e0-0dff03d2ab7b)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" Jan 12 14:00:13 crc kubenswrapper[4580]: I0112 14:00:13.883957 4580 scope.go:117] "RemoveContainer" containerID="697d5d73b37c80dfb37525a935c1e98c6e2f2bbf20a5b0877c67a632a077406d" Jan 12 14:00:19 crc kubenswrapper[4580]: I0112 14:00:19.281820 4580 scope.go:117] "RemoveContainer" containerID="094e6fc847e202ee61872ce24e3a26d7ba32df37f59d98679a68486511a55fc9" Jan 12 14:00:19 crc kubenswrapper[4580]: E0112 14:00:19.282460 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-hdz6l_openshift-machine-config-operator(aaecc77f-21ca-4f15-86e0-0dff03d2ab7b)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" Jan 12 14:00:24 crc kubenswrapper[4580]: I0112 14:00:24.290051 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-rxqqw_54899288-3291-42c0-969e-f22dab071c51/controller/0.log" Jan 12 14:00:24 crc kubenswrapper[4580]: I0112 14:00:24.296328 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-rxqqw_54899288-3291-42c0-969e-f22dab071c51/kube-rbac-proxy/0.log" Jan 12 14:00:24 crc kubenswrapper[4580]: I0112 14:00:24.311115 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2n6tl_68e356d2-e51a-494b-a2cc-2c8491e4d8c8/controller/0.log" Jan 12 14:00:24 crc kubenswrapper[4580]: I0112 14:00:24.432277 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-c9fsw_e8e0f177-af2a-4975-a047-6d66bcd9b474/cert-manager-controller/0.log" Jan 12 14:00:24 crc kubenswrapper[4580]: I0112 14:00:24.449537 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-56nml_5ea31dc0-a9ca-4c74-b2aa-7999ef2b94f5/cert-manager-cainjector/0.log" Jan 12 14:00:24 crc kubenswrapper[4580]: I0112 14:00:24.458550 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-dkts4_56ef0925-27e4-4a8f-9a56-3e31c7176270/cert-manager-webhook/0.log" Jan 12 14:00:25 crc kubenswrapper[4580]: I0112 14:00:25.389469 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-6c697f55f8-69mz9_7ed21cbb-5825-4538-bfb6-74f895189d83/manager/0.log" Jan 12 14:00:25 crc kubenswrapper[4580]: I0112 14:00:25.470046 4580 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-4b7c9_63a3c1f8-84b5-4648-9a74-bc1e980d5a57/manager/0.log" Jan 12 14:00:25 crc kubenswrapper[4580]: I0112 14:00:25.488311 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-plhvp_cbfff7ce-c184-4dee-94d5-c6ee41fc2b75/manager/0.log" Jan 12 14:00:25 crc kubenswrapper[4580]: I0112 14:00:25.499842 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e5432efd0c40cfd67b5b87e56150fca567dbd15dd757d120066b94ee44pwmns_d27f45e7-f85b-4b23-b849-8e1778cfe3df/extract/0.log" Jan 12 14:00:25 crc kubenswrapper[4580]: I0112 14:00:25.504278 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e5432efd0c40cfd67b5b87e56150fca567dbd15dd757d120066b94ee44pwmns_d27f45e7-f85b-4b23-b849-8e1778cfe3df/util/0.log" Jan 12 14:00:25 crc kubenswrapper[4580]: I0112 14:00:25.536188 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e5432efd0c40cfd67b5b87e56150fca567dbd15dd757d120066b94ee44pwmns_d27f45e7-f85b-4b23-b849-8e1778cfe3df/pull/0.log" Jan 12 14:00:25 crc kubenswrapper[4580]: I0112 14:00:25.646048 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2n6tl_68e356d2-e51a-494b-a2cc-2c8491e4d8c8/frr/0.log" Jan 12 14:00:25 crc kubenswrapper[4580]: I0112 14:00:25.655753 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2n6tl_68e356d2-e51a-494b-a2cc-2c8491e4d8c8/reloader/0.log" Jan 12 14:00:25 crc kubenswrapper[4580]: I0112 14:00:25.666773 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2n6tl_68e356d2-e51a-494b-a2cc-2c8491e4d8c8/frr-metrics/0.log" Jan 12 14:00:25 crc kubenswrapper[4580]: I0112 14:00:25.674519 4580 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-2n6tl_68e356d2-e51a-494b-a2cc-2c8491e4d8c8/kube-rbac-proxy/0.log" Jan 12 14:00:25 crc kubenswrapper[4580]: I0112 14:00:25.690491 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2n6tl_68e356d2-e51a-494b-a2cc-2c8491e4d8c8/kube-rbac-proxy-frr/0.log" Jan 12 14:00:25 crc kubenswrapper[4580]: I0112 14:00:25.691459 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-75b858dccc-nr2g4_b3716289-2aa2-4e39-b8db-7980564c976e/manager/0.log" Jan 12 14:00:25 crc kubenswrapper[4580]: I0112 14:00:25.699071 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2n6tl_68e356d2-e51a-494b-a2cc-2c8491e4d8c8/cp-frr-files/0.log" Jan 12 14:00:25 crc kubenswrapper[4580]: I0112 14:00:25.700027 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-6cd7bcb4bf-nvbml_8cf46bb8-ed1f-491d-90e3-1ef5ebbdfb01/manager/0.log" Jan 12 14:00:25 crc kubenswrapper[4580]: I0112 14:00:25.707955 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2n6tl_68e356d2-e51a-494b-a2cc-2c8491e4d8c8/cp-reloader/0.log" Jan 12 14:00:25 crc kubenswrapper[4580]: I0112 14:00:25.722302 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2n6tl_68e356d2-e51a-494b-a2cc-2c8491e4d8c8/cp-metrics/0.log" Jan 12 14:00:25 crc kubenswrapper[4580]: I0112 14:00:25.723719 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-75cb9467dc-r22fp_218c7ab4-85b0-4609-87e6-35d51283e5e0/manager/0.log" Jan 12 14:00:25 crc kubenswrapper[4580]: I0112 14:00:25.735851 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7784b6fcf-pkj9q_83e9a47a-80d6-4d28-ae2d-da27e069932f/frr-k8s-webhook-server/0.log" Jan 12 14:00:25 crc 
kubenswrapper[4580]: I0112 14:00:25.762926 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-797bb7bf75-nrxgs_9e077054-5ff2-4f6f-a2bc-12a4b78a0c6b/manager/0.log" Jan 12 14:00:25 crc kubenswrapper[4580]: I0112 14:00:25.780274 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-66dd5b5c84-2pdlb_cc9a79e4-90bc-4e70-afac-8b20ec13504f/webhook-server/0.log" Jan 12 14:00:26 crc kubenswrapper[4580]: I0112 14:00:26.218942 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-2sg8z_1135f51b-1f4e-4866-bb7d-728be53f5be7/manager/0.log" Jan 12 14:00:26 crc kubenswrapper[4580]: I0112 14:00:26.233698 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-plnl2_0286f995-6c82-4417-8a67-91b5e261a211/manager/0.log" Jan 12 14:00:26 crc kubenswrapper[4580]: I0112 14:00:26.335608 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-fckr8_726a74db-a499-4c38-8258-b711bc0dc30b/manager/0.log" Jan 12 14:00:26 crc kubenswrapper[4580]: I0112 14:00:26.345317 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-qbcvq_6c632190-70af-4fb2-97f2-3ea2cddf0302/speaker/0.log" Jan 12 14:00:26 crc kubenswrapper[4580]: I0112 14:00:26.348403 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-6684f856f9-w2xhg_bf14de2d-3f35-4c32-905c-0a133a4fbafe/manager/0.log" Jan 12 14:00:26 crc kubenswrapper[4580]: I0112 14:00:26.353385 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-qbcvq_6c632190-70af-4fb2-97f2-3ea2cddf0302/kube-rbac-proxy/0.log" Jan 12 14:00:26 crc kubenswrapper[4580]: I0112 14:00:26.386266 4580 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-8fxxj_87188751-ba97-4f25-ba2c-70514594cb4a/manager/0.log" Jan 12 14:00:26 crc kubenswrapper[4580]: I0112 14:00:26.428854 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-ckfhs_2d0f98f6-67ec-4253-a344-8aa185679126/manager/0.log" Jan 12 14:00:26 crc kubenswrapper[4580]: I0112 14:00:26.500330 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-5977959f9c-sgg8q_eb01c7cd-f8d5-414f-a9f1-cf75a7a6ac1b/manager/0.log" Jan 12 14:00:26 crc kubenswrapper[4580]: I0112 14:00:26.512408 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-nbzhm_7209eb4d-53dc-4c30-9b80-8863acbea5a6/manager/0.log" Jan 12 14:00:26 crc kubenswrapper[4580]: I0112 14:00:26.537073 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-654686dcb9z5ths_ccb61890-3cf7-45aa-974c-693f0d14c14a/manager/0.log" Jan 12 14:00:27 crc kubenswrapper[4580]: I0112 14:00:27.184370 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-c9fsw_e8e0f177-af2a-4975-a047-6d66bcd9b474/cert-manager-controller/0.log" Jan 12 14:00:27 crc kubenswrapper[4580]: I0112 14:00:27.201710 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-56nml_5ea31dc0-a9ca-4c74-b2aa-7999ef2b94f5/cert-manager-cainjector/0.log" Jan 12 14:00:27 crc kubenswrapper[4580]: I0112 14:00:27.208534 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-dkts4_56ef0925-27e4-4a8f-9a56-3e31c7176270/cert-manager-webhook/0.log" Jan 12 14:00:27 crc kubenswrapper[4580]: I0112 14:00:27.603478 4580 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6659c7dc85-4p8jr_87237fc1-15cd-4dd9-bcfe-5a334d366896/manager/0.log" Jan 12 14:00:27 crc kubenswrapper[4580]: I0112 14:00:27.692933 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-5bf8b477cb-hwd8t_bca66d95-723d-4cd6-bc4c-1a0c564606f3/operator/0.log" Jan 12 14:00:27 crc kubenswrapper[4580]: I0112 14:00:27.743257 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-248h5_c0f0a657-34a2-4619-b992-64ab017e6ecb/registry-server/0.log" Jan 12 14:00:27 crc kubenswrapper[4580]: I0112 14:00:27.791970 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-cf664874d-vznwd_742c889f-d87d-4d61-82f8-2fa3ffc3d6b2/manager/0.log" Jan 12 14:00:27 crc kubenswrapper[4580]: I0112 14:00:27.829204 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78c6bccb56-mggmh_f50c1909-7ba3-4d92-9e4e-2cbd2602e340/manager/0.log" Jan 12 14:00:27 crc kubenswrapper[4580]: I0112 14:00:27.853857 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-p4m8m_520c9385-c952-45a9-b1ce-2ad913758239/operator/0.log" Jan 12 14:00:27 crc kubenswrapper[4580]: I0112 14:00:27.875761 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6469d85bcb-smn7v_6a4af572-980a-4c9b-8d01-df30e894dcda/manager/0.log" Jan 12 14:00:27 crc kubenswrapper[4580]: I0112 14:00:27.918369 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-74bd5457c5-95bcj_ed127163-4a57-4b95-9dd7-4c856bd3d126/manager/0.log" Jan 12 14:00:27 crc kubenswrapper[4580]: I0112 14:00:27.927967 4580 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-698b874cb5-4v5jb_00ccc719-ee01-4ff4-934b-6e6fbadaa57c/manager/0.log" Jan 12 14:00:27 crc kubenswrapper[4580]: I0112 14:00:27.939531 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-64cd966744-2z5v7_56a7e345-fce1-44a5-aab4-8d82293bd5ee/manager/0.log" Jan 12 14:00:27 crc kubenswrapper[4580]: I0112 14:00:27.978327 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-jbnkd_3cd49599-ac6f-4d9f-9d86-2f6ff90ddbf9/control-plane-machine-set-operator/0.log" Jan 12 14:00:27 crc kubenswrapper[4580]: I0112 14:00:27.991577 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-89mg9_bdbff407-68ae-456c-b67e-40d0e47fba7b/kube-rbac-proxy/0.log" Jan 12 14:00:28 crc kubenswrapper[4580]: I0112 14:00:28.003324 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-89mg9_bdbff407-68ae-456c-b67e-40d0e47fba7b/machine-api-operator/0.log" Jan 12 14:00:28 crc kubenswrapper[4580]: I0112 14:00:28.660213 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-6c697f55f8-69mz9_7ed21cbb-5825-4538-bfb6-74f895189d83/manager/0.log" Jan 12 14:00:28 crc kubenswrapper[4580]: I0112 14:00:28.692630 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-4b7c9_63a3c1f8-84b5-4648-9a74-bc1e980d5a57/manager/0.log" Jan 12 14:00:28 crc kubenswrapper[4580]: I0112 14:00:28.703827 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-plhvp_cbfff7ce-c184-4dee-94d5-c6ee41fc2b75/manager/0.log" Jan 12 14:00:28 crc kubenswrapper[4580]: 
I0112 14:00:28.717659 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e5432efd0c40cfd67b5b87e56150fca567dbd15dd757d120066b94ee44pwmns_d27f45e7-f85b-4b23-b849-8e1778cfe3df/extract/0.log" Jan 12 14:00:28 crc kubenswrapper[4580]: I0112 14:00:28.723525 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e5432efd0c40cfd67b5b87e56150fca567dbd15dd757d120066b94ee44pwmns_d27f45e7-f85b-4b23-b849-8e1778cfe3df/util/0.log" Jan 12 14:00:28 crc kubenswrapper[4580]: I0112 14:00:28.733598 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e5432efd0c40cfd67b5b87e56150fca567dbd15dd757d120066b94ee44pwmns_d27f45e7-f85b-4b23-b849-8e1778cfe3df/pull/0.log" Jan 12 14:00:28 crc kubenswrapper[4580]: I0112 14:00:28.819483 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-75b858dccc-nr2g4_b3716289-2aa2-4e39-b8db-7980564c976e/manager/0.log" Jan 12 14:00:28 crc kubenswrapper[4580]: I0112 14:00:28.828634 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-6cd7bcb4bf-nvbml_8cf46bb8-ed1f-491d-90e3-1ef5ebbdfb01/manager/0.log" Jan 12 14:00:28 crc kubenswrapper[4580]: I0112 14:00:28.852882 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-75cb9467dc-r22fp_218c7ab4-85b0-4609-87e6-35d51283e5e0/manager/0.log" Jan 12 14:00:29 crc kubenswrapper[4580]: I0112 14:00:29.067810 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-6ff7998486-sngpg_4ce8457b-77a4-4703-b3e8-2a929d02d38d/nmstate-console-plugin/0.log" Jan 12 14:00:29 crc kubenswrapper[4580]: I0112 14:00:29.082290 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-66q7w_714937bd-e28b-4368-8f23-c141e40ea81f/nmstate-handler/0.log" Jan 12 14:00:29 crc 
kubenswrapper[4580]: I0112 14:00:29.091437 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-2sg8z_1135f51b-1f4e-4866-bb7d-728be53f5be7/manager/0.log" Jan 12 14:00:29 crc kubenswrapper[4580]: I0112 14:00:29.096525 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f7f7578db-t97w5_a7c37982-a0fd-4f9d-950a-ec589bb9753c/nmstate-metrics/0.log" Jan 12 14:00:29 crc kubenswrapper[4580]: I0112 14:00:29.101539 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-plnl2_0286f995-6c82-4417-8a67-91b5e261a211/manager/0.log" Jan 12 14:00:29 crc kubenswrapper[4580]: I0112 14:00:29.105143 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f7f7578db-t97w5_a7c37982-a0fd-4f9d-950a-ec589bb9753c/kube-rbac-proxy/0.log" Jan 12 14:00:29 crc kubenswrapper[4580]: I0112 14:00:29.118571 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-6769fb99d-p62jb_49e54acb-8939-4c86-b9a4-42741a3356ac/nmstate-operator/0.log" Jan 12 14:00:29 crc kubenswrapper[4580]: I0112 14:00:29.130206 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-f8fb84555-b4qbk_00b7df68-abb5-4b70-b6ef-1495cb7a4725/nmstate-webhook/0.log" Jan 12 14:00:29 crc kubenswrapper[4580]: I0112 14:00:29.165907 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-fckr8_726a74db-a499-4c38-8258-b711bc0dc30b/manager/0.log" Jan 12 14:00:29 crc kubenswrapper[4580]: I0112 14:00:29.178862 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-6684f856f9-w2xhg_bf14de2d-3f35-4c32-905c-0a133a4fbafe/manager/0.log" Jan 12 14:00:29 crc kubenswrapper[4580]: I0112 14:00:29.206514 
4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-8fxxj_87188751-ba97-4f25-ba2c-70514594cb4a/manager/0.log" Jan 12 14:00:29 crc kubenswrapper[4580]: I0112 14:00:29.245915 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-ckfhs_2d0f98f6-67ec-4253-a344-8aa185679126/manager/0.log" Jan 12 14:00:29 crc kubenswrapper[4580]: I0112 14:00:29.317042 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-5977959f9c-sgg8q_eb01c7cd-f8d5-414f-a9f1-cf75a7a6ac1b/manager/0.log" Jan 12 14:00:29 crc kubenswrapper[4580]: I0112 14:00:29.326183 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-nbzhm_7209eb4d-53dc-4c30-9b80-8863acbea5a6/manager/0.log" Jan 12 14:00:29 crc kubenswrapper[4580]: I0112 14:00:29.336738 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-654686dcb9z5ths_ccb61890-3cf7-45aa-974c-693f0d14c14a/manager/0.log" Jan 12 14:00:30 crc kubenswrapper[4580]: I0112 14:00:30.498368 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6659c7dc85-4p8jr_87237fc1-15cd-4dd9-bcfe-5a334d366896/manager/0.log" Jan 12 14:00:30 crc kubenswrapper[4580]: I0112 14:00:30.638234 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-5bf8b477cb-hwd8t_bca66d95-723d-4cd6-bc4c-1a0c564606f3/operator/0.log" Jan 12 14:00:30 crc kubenswrapper[4580]: I0112 14:00:30.687080 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-248h5_c0f0a657-34a2-4619-b992-64ab017e6ecb/registry-server/0.log" Jan 12 14:00:30 crc kubenswrapper[4580]: I0112 
14:00:30.744771 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-cf664874d-vznwd_742c889f-d87d-4d61-82f8-2fa3ffc3d6b2/manager/0.log" Jan 12 14:00:30 crc kubenswrapper[4580]: I0112 14:00:30.768927 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78c6bccb56-mggmh_f50c1909-7ba3-4d92-9e4e-2cbd2602e340/manager/0.log" Jan 12 14:00:30 crc kubenswrapper[4580]: I0112 14:00:30.786562 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-p4m8m_520c9385-c952-45a9-b1ce-2ad913758239/operator/0.log" Jan 12 14:00:30 crc kubenswrapper[4580]: I0112 14:00:30.808717 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6469d85bcb-smn7v_6a4af572-980a-4c9b-8d01-df30e894dcda/manager/0.log" Jan 12 14:00:30 crc kubenswrapper[4580]: I0112 14:00:30.869627 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-74bd5457c5-95bcj_ed127163-4a57-4b95-9dd7-4c856bd3d126/manager/0.log" Jan 12 14:00:30 crc kubenswrapper[4580]: I0112 14:00:30.880545 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-698b874cb5-4v5jb_00ccc719-ee01-4ff4-934b-6e6fbadaa57c/manager/0.log" Jan 12 14:00:30 crc kubenswrapper[4580]: I0112 14:00:30.892213 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-64cd966744-2z5v7_56a7e345-fce1-44a5-aab4-8d82293bd5ee/manager/0.log" Jan 12 14:00:32 crc kubenswrapper[4580]: I0112 14:00:32.282009 4580 scope.go:117] "RemoveContainer" containerID="094e6fc847e202ee61872ce24e3a26d7ba32df37f59d98679a68486511a55fc9" Jan 12 14:00:32 crc kubenswrapper[4580]: E0112 14:00:32.282359 4580 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdz6l_openshift-machine-config-operator(aaecc77f-21ca-4f15-86e0-0dff03d2ab7b)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" Jan 12 14:00:32 crc kubenswrapper[4580]: I0112 14:00:32.453068 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-2p6r8_d2223aac-784e-4653-8939-fcbd18c70ba7/kube-multus-additional-cni-plugins/0.log" Jan 12 14:00:32 crc kubenswrapper[4580]: I0112 14:00:32.458946 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-2p6r8_d2223aac-784e-4653-8939-fcbd18c70ba7/egress-router-binary-copy/0.log" Jan 12 14:00:32 crc kubenswrapper[4580]: I0112 14:00:32.466021 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-2p6r8_d2223aac-784e-4653-8939-fcbd18c70ba7/cni-plugins/0.log" Jan 12 14:00:32 crc kubenswrapper[4580]: I0112 14:00:32.472009 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-2p6r8_d2223aac-784e-4653-8939-fcbd18c70ba7/bond-cni-plugin/0.log" Jan 12 14:00:32 crc kubenswrapper[4580]: I0112 14:00:32.480904 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-2p6r8_d2223aac-784e-4653-8939-fcbd18c70ba7/routeoverride-cni/0.log" Jan 12 14:00:32 crc kubenswrapper[4580]: I0112 14:00:32.486004 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-2p6r8_d2223aac-784e-4653-8939-fcbd18c70ba7/whereabouts-cni-bincopy/0.log" Jan 12 14:00:32 crc kubenswrapper[4580]: I0112 14:00:32.492539 4580 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-2p6r8_d2223aac-784e-4653-8939-fcbd18c70ba7/whereabouts-cni/0.log" Jan 12 14:00:32 crc kubenswrapper[4580]: I0112 14:00:32.520434 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-2zrh8_e7bdddf2-1c7b-4aa3-81f9-9df58a6e92b1/multus-admission-controller/0.log" Jan 12 14:00:32 crc kubenswrapper[4580]: I0112 14:00:32.525014 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-2zrh8_e7bdddf2-1c7b-4aa3-81f9-9df58a6e92b1/kube-rbac-proxy/0.log" Jan 12 14:00:32 crc kubenswrapper[4580]: I0112 14:00:32.593779 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nnz5s_c8f39bcc-5a25-4746-988b-2251fd1be8c9/kube-multus/2.log" Jan 12 14:00:32 crc kubenswrapper[4580]: I0112 14:00:32.655630 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nnz5s_c8f39bcc-5a25-4746-988b-2251fd1be8c9/kube-multus/3.log" Jan 12 14:00:32 crc kubenswrapper[4580]: I0112 14:00:32.681196 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-jw27h_5066d8fa-2cee-4764-a817-b819d3876638/network-metrics-daemon/0.log" Jan 12 14:00:32 crc kubenswrapper[4580]: I0112 14:00:32.686346 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-jw27h_5066d8fa-2cee-4764-a817-b819d3876638/kube-rbac-proxy/0.log" Jan 12 14:00:47 crc kubenswrapper[4580]: I0112 14:00:47.282487 4580 scope.go:117] "RemoveContainer" containerID="094e6fc847e202ee61872ce24e3a26d7ba32df37f59d98679a68486511a55fc9" Jan 12 14:00:47 crc kubenswrapper[4580]: E0112 14:00:47.283371 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-hdz6l_openshift-machine-config-operator(aaecc77f-21ca-4f15-86e0-0dff03d2ab7b)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" Jan 12 14:00:59 crc kubenswrapper[4580]: I0112 14:00:59.281762 4580 scope.go:117] "RemoveContainer" containerID="094e6fc847e202ee61872ce24e3a26d7ba32df37f59d98679a68486511a55fc9" Jan 12 14:00:59 crc kubenswrapper[4580]: E0112 14:00:59.282538 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdz6l_openshift-machine-config-operator(aaecc77f-21ca-4f15-86e0-0dff03d2ab7b)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" Jan 12 14:01:00 crc kubenswrapper[4580]: I0112 14:01:00.138850 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29470441-5v28q"] Jan 12 14:01:00 crc kubenswrapper[4580]: E0112 14:01:00.139253 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2de69ee6-b6d6-452b-aa26-92e06e8525c8" containerName="collect-profiles" Jan 12 14:01:00 crc kubenswrapper[4580]: I0112 14:01:00.139270 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="2de69ee6-b6d6-452b-aa26-92e06e8525c8" containerName="collect-profiles" Jan 12 14:01:00 crc kubenswrapper[4580]: I0112 14:01:00.139477 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="2de69ee6-b6d6-452b-aa26-92e06e8525c8" containerName="collect-profiles" Jan 12 14:01:00 crc kubenswrapper[4580]: I0112 14:01:00.140174 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29470441-5v28q" Jan 12 14:01:00 crc kubenswrapper[4580]: I0112 14:01:00.153979 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29470441-5v28q"] Jan 12 14:01:00 crc kubenswrapper[4580]: I0112 14:01:00.334411 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vw5mw\" (UniqueName: \"kubernetes.io/projected/a7af5b29-7fe2-471d-a27a-a3af652c56fa-kube-api-access-vw5mw\") pod \"keystone-cron-29470441-5v28q\" (UID: \"a7af5b29-7fe2-471d-a27a-a3af652c56fa\") " pod="openstack/keystone-cron-29470441-5v28q" Jan 12 14:01:00 crc kubenswrapper[4580]: I0112 14:01:00.334747 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7af5b29-7fe2-471d-a27a-a3af652c56fa-config-data\") pod \"keystone-cron-29470441-5v28q\" (UID: \"a7af5b29-7fe2-471d-a27a-a3af652c56fa\") " pod="openstack/keystone-cron-29470441-5v28q" Jan 12 14:01:00 crc kubenswrapper[4580]: I0112 14:01:00.334921 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7af5b29-7fe2-471d-a27a-a3af652c56fa-combined-ca-bundle\") pod \"keystone-cron-29470441-5v28q\" (UID: \"a7af5b29-7fe2-471d-a27a-a3af652c56fa\") " pod="openstack/keystone-cron-29470441-5v28q" Jan 12 14:01:00 crc kubenswrapper[4580]: I0112 14:01:00.335068 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a7af5b29-7fe2-471d-a27a-a3af652c56fa-fernet-keys\") pod \"keystone-cron-29470441-5v28q\" (UID: \"a7af5b29-7fe2-471d-a27a-a3af652c56fa\") " pod="openstack/keystone-cron-29470441-5v28q" Jan 12 14:01:00 crc kubenswrapper[4580]: I0112 14:01:00.436994 4580 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-vw5mw\" (UniqueName: \"kubernetes.io/projected/a7af5b29-7fe2-471d-a27a-a3af652c56fa-kube-api-access-vw5mw\") pod \"keystone-cron-29470441-5v28q\" (UID: \"a7af5b29-7fe2-471d-a27a-a3af652c56fa\") " pod="openstack/keystone-cron-29470441-5v28q" Jan 12 14:01:00 crc kubenswrapper[4580]: I0112 14:01:00.437462 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7af5b29-7fe2-471d-a27a-a3af652c56fa-config-data\") pod \"keystone-cron-29470441-5v28q\" (UID: \"a7af5b29-7fe2-471d-a27a-a3af652c56fa\") " pod="openstack/keystone-cron-29470441-5v28q" Jan 12 14:01:00 crc kubenswrapper[4580]: I0112 14:01:00.437568 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7af5b29-7fe2-471d-a27a-a3af652c56fa-combined-ca-bundle\") pod \"keystone-cron-29470441-5v28q\" (UID: \"a7af5b29-7fe2-471d-a27a-a3af652c56fa\") " pod="openstack/keystone-cron-29470441-5v28q" Jan 12 14:01:00 crc kubenswrapper[4580]: I0112 14:01:00.437650 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a7af5b29-7fe2-471d-a27a-a3af652c56fa-fernet-keys\") pod \"keystone-cron-29470441-5v28q\" (UID: \"a7af5b29-7fe2-471d-a27a-a3af652c56fa\") " pod="openstack/keystone-cron-29470441-5v28q" Jan 12 14:01:00 crc kubenswrapper[4580]: I0112 14:01:00.445597 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7af5b29-7fe2-471d-a27a-a3af652c56fa-config-data\") pod \"keystone-cron-29470441-5v28q\" (UID: \"a7af5b29-7fe2-471d-a27a-a3af652c56fa\") " pod="openstack/keystone-cron-29470441-5v28q" Jan 12 14:01:00 crc kubenswrapper[4580]: I0112 14:01:00.446161 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/a7af5b29-7fe2-471d-a27a-a3af652c56fa-fernet-keys\") pod \"keystone-cron-29470441-5v28q\" (UID: \"a7af5b29-7fe2-471d-a27a-a3af652c56fa\") " pod="openstack/keystone-cron-29470441-5v28q" Jan 12 14:01:00 crc kubenswrapper[4580]: I0112 14:01:00.450045 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7af5b29-7fe2-471d-a27a-a3af652c56fa-combined-ca-bundle\") pod \"keystone-cron-29470441-5v28q\" (UID: \"a7af5b29-7fe2-471d-a27a-a3af652c56fa\") " pod="openstack/keystone-cron-29470441-5v28q" Jan 12 14:01:00 crc kubenswrapper[4580]: I0112 14:01:00.453547 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vw5mw\" (UniqueName: \"kubernetes.io/projected/a7af5b29-7fe2-471d-a27a-a3af652c56fa-kube-api-access-vw5mw\") pod \"keystone-cron-29470441-5v28q\" (UID: \"a7af5b29-7fe2-471d-a27a-a3af652c56fa\") " pod="openstack/keystone-cron-29470441-5v28q" Jan 12 14:01:00 crc kubenswrapper[4580]: I0112 14:01:00.468800 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29470441-5v28q" Jan 12 14:01:00 crc kubenswrapper[4580]: I0112 14:01:00.890258 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29470441-5v28q"] Jan 12 14:01:01 crc kubenswrapper[4580]: I0112 14:01:01.716392 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29470441-5v28q" event={"ID":"a7af5b29-7fe2-471d-a27a-a3af652c56fa","Type":"ContainerStarted","Data":"68e5257d32f121b5a5737c74a8bd698ebb4e5b2cbe0fc58a64fec9ff8d307e00"} Jan 12 14:01:01 crc kubenswrapper[4580]: I0112 14:01:01.716934 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29470441-5v28q" event={"ID":"a7af5b29-7fe2-471d-a27a-a3af652c56fa","Type":"ContainerStarted","Data":"25fb5c1ef0225eaa9c480bf34d4599acf5b6aef035a596a66fcdcc98eeda49a8"} Jan 12 14:01:01 crc kubenswrapper[4580]: I0112 14:01:01.739030 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29470441-5v28q" podStartSLOduration=1.739013852 podStartE2EDuration="1.739013852s" podCreationTimestamp="2026-01-12 14:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-12 14:01:01.732489464 +0000 UTC m=+3260.776708155" watchObservedRunningTime="2026-01-12 14:01:01.739013852 +0000 UTC m=+3260.783232542" Jan 12 14:01:03 crc kubenswrapper[4580]: I0112 14:01:03.731254 4580 generic.go:334] "Generic (PLEG): container finished" podID="a7af5b29-7fe2-471d-a27a-a3af652c56fa" containerID="68e5257d32f121b5a5737c74a8bd698ebb4e5b2cbe0fc58a64fec9ff8d307e00" exitCode=0 Jan 12 14:01:03 crc kubenswrapper[4580]: I0112 14:01:03.731334 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29470441-5v28q" 
event={"ID":"a7af5b29-7fe2-471d-a27a-a3af652c56fa","Type":"ContainerDied","Data":"68e5257d32f121b5a5737c74a8bd698ebb4e5b2cbe0fc58a64fec9ff8d307e00"} Jan 12 14:01:05 crc kubenswrapper[4580]: I0112 14:01:05.044742 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29470441-5v28q" Jan 12 14:01:05 crc kubenswrapper[4580]: I0112 14:01:05.165936 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7af5b29-7fe2-471d-a27a-a3af652c56fa-combined-ca-bundle\") pod \"a7af5b29-7fe2-471d-a27a-a3af652c56fa\" (UID: \"a7af5b29-7fe2-471d-a27a-a3af652c56fa\") " Jan 12 14:01:05 crc kubenswrapper[4580]: I0112 14:01:05.166896 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vw5mw\" (UniqueName: \"kubernetes.io/projected/a7af5b29-7fe2-471d-a27a-a3af652c56fa-kube-api-access-vw5mw\") pod \"a7af5b29-7fe2-471d-a27a-a3af652c56fa\" (UID: \"a7af5b29-7fe2-471d-a27a-a3af652c56fa\") " Jan 12 14:01:05 crc kubenswrapper[4580]: I0112 14:01:05.167263 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a7af5b29-7fe2-471d-a27a-a3af652c56fa-fernet-keys\") pod \"a7af5b29-7fe2-471d-a27a-a3af652c56fa\" (UID: \"a7af5b29-7fe2-471d-a27a-a3af652c56fa\") " Jan 12 14:01:05 crc kubenswrapper[4580]: I0112 14:01:05.167438 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7af5b29-7fe2-471d-a27a-a3af652c56fa-config-data\") pod \"a7af5b29-7fe2-471d-a27a-a3af652c56fa\" (UID: \"a7af5b29-7fe2-471d-a27a-a3af652c56fa\") " Jan 12 14:01:05 crc kubenswrapper[4580]: I0112 14:01:05.182293 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7af5b29-7fe2-471d-a27a-a3af652c56fa-fernet-keys" (OuterVolumeSpecName: 
"fernet-keys") pod "a7af5b29-7fe2-471d-a27a-a3af652c56fa" (UID: "a7af5b29-7fe2-471d-a27a-a3af652c56fa"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 14:01:05 crc kubenswrapper[4580]: I0112 14:01:05.182328 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7af5b29-7fe2-471d-a27a-a3af652c56fa-kube-api-access-vw5mw" (OuterVolumeSpecName: "kube-api-access-vw5mw") pod "a7af5b29-7fe2-471d-a27a-a3af652c56fa" (UID: "a7af5b29-7fe2-471d-a27a-a3af652c56fa"). InnerVolumeSpecName "kube-api-access-vw5mw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 14:01:05 crc kubenswrapper[4580]: I0112 14:01:05.196248 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7af5b29-7fe2-471d-a27a-a3af652c56fa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a7af5b29-7fe2-471d-a27a-a3af652c56fa" (UID: "a7af5b29-7fe2-471d-a27a-a3af652c56fa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 14:01:05 crc kubenswrapper[4580]: I0112 14:01:05.215852 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7af5b29-7fe2-471d-a27a-a3af652c56fa-config-data" (OuterVolumeSpecName: "config-data") pod "a7af5b29-7fe2-471d-a27a-a3af652c56fa" (UID: "a7af5b29-7fe2-471d-a27a-a3af652c56fa"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 12 14:01:05 crc kubenswrapper[4580]: I0112 14:01:05.271324 4580 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7af5b29-7fe2-471d-a27a-a3af652c56fa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 12 14:01:05 crc kubenswrapper[4580]: I0112 14:01:05.271356 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vw5mw\" (UniqueName: \"kubernetes.io/projected/a7af5b29-7fe2-471d-a27a-a3af652c56fa-kube-api-access-vw5mw\") on node \"crc\" DevicePath \"\"" Jan 12 14:01:05 crc kubenswrapper[4580]: I0112 14:01:05.271369 4580 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a7af5b29-7fe2-471d-a27a-a3af652c56fa-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 12 14:01:05 crc kubenswrapper[4580]: I0112 14:01:05.271378 4580 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7af5b29-7fe2-471d-a27a-a3af652c56fa-config-data\") on node \"crc\" DevicePath \"\"" Jan 12 14:01:05 crc kubenswrapper[4580]: I0112 14:01:05.756515 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29470441-5v28q" event={"ID":"a7af5b29-7fe2-471d-a27a-a3af652c56fa","Type":"ContainerDied","Data":"25fb5c1ef0225eaa9c480bf34d4599acf5b6aef035a596a66fcdcc98eeda49a8"} Jan 12 14:01:05 crc kubenswrapper[4580]: I0112 14:01:05.756896 4580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25fb5c1ef0225eaa9c480bf34d4599acf5b6aef035a596a66fcdcc98eeda49a8" Jan 12 14:01:05 crc kubenswrapper[4580]: I0112 14:01:05.756824 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29470441-5v28q" Jan 12 14:01:11 crc kubenswrapper[4580]: I0112 14:01:11.294682 4580 scope.go:117] "RemoveContainer" containerID="094e6fc847e202ee61872ce24e3a26d7ba32df37f59d98679a68486511a55fc9" Jan 12 14:01:11 crc kubenswrapper[4580]: E0112 14:01:11.296462 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdz6l_openshift-machine-config-operator(aaecc77f-21ca-4f15-86e0-0dff03d2ab7b)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" Jan 12 14:01:13 crc kubenswrapper[4580]: I0112 14:01:13.955794 4580 scope.go:117] "RemoveContainer" containerID="fa067f1525f8660db7774e8c4efc0bd1ac1bba9803e93fad6026b14795cbfda8" Jan 12 14:01:21 crc kubenswrapper[4580]: I0112 14:01:21.904516 4580 generic.go:334] "Generic (PLEG): container finished" podID="cc5d2d66-2d38-480f-aa31-f9ef36f839c5" containerID="59ab882425a7683f9268b29e18b90d3289323638c87c394cbf5a603593f2a2b4" exitCode=0 Jan 12 14:01:21 crc kubenswrapper[4580]: I0112 14:01:21.904609 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ftvbc/must-gather-cpgcp" event={"ID":"cc5d2d66-2d38-480f-aa31-f9ef36f839c5","Type":"ContainerDied","Data":"59ab882425a7683f9268b29e18b90d3289323638c87c394cbf5a603593f2a2b4"} Jan 12 14:01:21 crc kubenswrapper[4580]: I0112 14:01:21.906223 4580 scope.go:117] "RemoveContainer" containerID="59ab882425a7683f9268b29e18b90d3289323638c87c394cbf5a603593f2a2b4" Jan 12 14:01:22 crc kubenswrapper[4580]: I0112 14:01:22.612412 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-ftvbc_must-gather-cpgcp_cc5d2d66-2d38-480f-aa31-f9ef36f839c5/gather/0.log" Jan 12 14:01:23 crc kubenswrapper[4580]: I0112 14:01:23.282266 4580 scope.go:117] "RemoveContainer" 
containerID="094e6fc847e202ee61872ce24e3a26d7ba32df37f59d98679a68486511a55fc9" Jan 12 14:01:23 crc kubenswrapper[4580]: E0112 14:01:23.282571 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdz6l_openshift-machine-config-operator(aaecc77f-21ca-4f15-86e0-0dff03d2ab7b)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" Jan 12 14:01:30 crc kubenswrapper[4580]: I0112 14:01:30.599637 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-ftvbc/must-gather-cpgcp"] Jan 12 14:01:30 crc kubenswrapper[4580]: I0112 14:01:30.600332 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-ftvbc/must-gather-cpgcp" podUID="cc5d2d66-2d38-480f-aa31-f9ef36f839c5" containerName="copy" containerID="cri-o://970bfbaef612a047f6e75c7b963e83b033be9fc1bbebfdbd20ce6136ff6fb9eb" gracePeriod=2 Jan 12 14:01:30 crc kubenswrapper[4580]: I0112 14:01:30.607151 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-ftvbc/must-gather-cpgcp"] Jan 12 14:01:31 crc kubenswrapper[4580]: I0112 14:01:31.029601 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-ftvbc_must-gather-cpgcp_cc5d2d66-2d38-480f-aa31-f9ef36f839c5/copy/0.log" Jan 12 14:01:31 crc kubenswrapper[4580]: I0112 14:01:31.031347 4580 generic.go:334] "Generic (PLEG): container finished" podID="cc5d2d66-2d38-480f-aa31-f9ef36f839c5" containerID="970bfbaef612a047f6e75c7b963e83b033be9fc1bbebfdbd20ce6136ff6fb9eb" exitCode=143 Jan 12 14:01:31 crc kubenswrapper[4580]: I0112 14:01:31.031397 4580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa55faff8a9f180a8051fc9f2dfd53cf69d1b2d642e16f1089ee8b466371593f" Jan 12 14:01:31 crc 
kubenswrapper[4580]: I0112 14:01:31.048529 4580 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-ftvbc_must-gather-cpgcp_cc5d2d66-2d38-480f-aa31-f9ef36f839c5/copy/0.log" Jan 12 14:01:31 crc kubenswrapper[4580]: I0112 14:01:31.052042 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ftvbc/must-gather-cpgcp" Jan 12 14:01:31 crc kubenswrapper[4580]: I0112 14:01:31.165582 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5pq5k\" (UniqueName: \"kubernetes.io/projected/cc5d2d66-2d38-480f-aa31-f9ef36f839c5-kube-api-access-5pq5k\") pod \"cc5d2d66-2d38-480f-aa31-f9ef36f839c5\" (UID: \"cc5d2d66-2d38-480f-aa31-f9ef36f839c5\") " Jan 12 14:01:31 crc kubenswrapper[4580]: I0112 14:01:31.165749 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cc5d2d66-2d38-480f-aa31-f9ef36f839c5-must-gather-output\") pod \"cc5d2d66-2d38-480f-aa31-f9ef36f839c5\" (UID: \"cc5d2d66-2d38-480f-aa31-f9ef36f839c5\") " Jan 12 14:01:31 crc kubenswrapper[4580]: I0112 14:01:31.172142 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc5d2d66-2d38-480f-aa31-f9ef36f839c5-kube-api-access-5pq5k" (OuterVolumeSpecName: "kube-api-access-5pq5k") pod "cc5d2d66-2d38-480f-aa31-f9ef36f839c5" (UID: "cc5d2d66-2d38-480f-aa31-f9ef36f839c5"). InnerVolumeSpecName "kube-api-access-5pq5k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 12 14:01:31 crc kubenswrapper[4580]: I0112 14:01:31.268570 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5pq5k\" (UniqueName: \"kubernetes.io/projected/cc5d2d66-2d38-480f-aa31-f9ef36f839c5-kube-api-access-5pq5k\") on node \"crc\" DevicePath \"\"" Jan 12 14:01:31 crc kubenswrapper[4580]: I0112 14:01:31.327209 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc5d2d66-2d38-480f-aa31-f9ef36f839c5-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "cc5d2d66-2d38-480f-aa31-f9ef36f839c5" (UID: "cc5d2d66-2d38-480f-aa31-f9ef36f839c5"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 12 14:01:31 crc kubenswrapper[4580]: I0112 14:01:31.371921 4580 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cc5d2d66-2d38-480f-aa31-f9ef36f839c5-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 12 14:01:32 crc kubenswrapper[4580]: I0112 14:01:32.040403 4580 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ftvbc/must-gather-cpgcp" Jan 12 14:01:33 crc kubenswrapper[4580]: I0112 14:01:33.300470 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc5d2d66-2d38-480f-aa31-f9ef36f839c5" path="/var/lib/kubelet/pods/cc5d2d66-2d38-480f-aa31-f9ef36f839c5/volumes" Jan 12 14:01:37 crc kubenswrapper[4580]: I0112 14:01:37.282605 4580 scope.go:117] "RemoveContainer" containerID="094e6fc847e202ee61872ce24e3a26d7ba32df37f59d98679a68486511a55fc9" Jan 12 14:01:37 crc kubenswrapper[4580]: E0112 14:01:37.283343 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdz6l_openshift-machine-config-operator(aaecc77f-21ca-4f15-86e0-0dff03d2ab7b)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" Jan 12 14:01:52 crc kubenswrapper[4580]: I0112 14:01:52.282438 4580 scope.go:117] "RemoveContainer" containerID="094e6fc847e202ee61872ce24e3a26d7ba32df37f59d98679a68486511a55fc9" Jan 12 14:01:52 crc kubenswrapper[4580]: E0112 14:01:52.283337 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdz6l_openshift-machine-config-operator(aaecc77f-21ca-4f15-86e0-0dff03d2ab7b)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" Jan 12 14:02:05 crc kubenswrapper[4580]: I0112 14:02:05.282794 4580 scope.go:117] "RemoveContainer" containerID="094e6fc847e202ee61872ce24e3a26d7ba32df37f59d98679a68486511a55fc9" Jan 12 14:02:05 crc kubenswrapper[4580]: E0112 14:02:05.283760 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdz6l_openshift-machine-config-operator(aaecc77f-21ca-4f15-86e0-0dff03d2ab7b)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" Jan 12 14:02:20 crc kubenswrapper[4580]: I0112 14:02:20.283622 4580 scope.go:117] "RemoveContainer" containerID="094e6fc847e202ee61872ce24e3a26d7ba32df37f59d98679a68486511a55fc9" Jan 12 14:02:20 crc kubenswrapper[4580]: E0112 14:02:20.284622 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdz6l_openshift-machine-config-operator(aaecc77f-21ca-4f15-86e0-0dff03d2ab7b)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" Jan 12 14:02:34 crc kubenswrapper[4580]: I0112 14:02:34.281660 4580 scope.go:117] "RemoveContainer" containerID="094e6fc847e202ee61872ce24e3a26d7ba32df37f59d98679a68486511a55fc9" Jan 12 14:02:34 crc kubenswrapper[4580]: E0112 14:02:34.282722 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdz6l_openshift-machine-config-operator(aaecc77f-21ca-4f15-86e0-0dff03d2ab7b)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b" Jan 12 14:02:42 crc kubenswrapper[4580]: I0112 14:02:42.833459 4580 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vp6k8"] Jan 12 14:02:42 crc kubenswrapper[4580]: E0112 14:02:42.835341 4580 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="cc5d2d66-2d38-480f-aa31-f9ef36f839c5" containerName="gather" Jan 12 14:02:42 crc kubenswrapper[4580]: I0112 14:02:42.835422 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc5d2d66-2d38-480f-aa31-f9ef36f839c5" containerName="gather" Jan 12 14:02:42 crc kubenswrapper[4580]: E0112 14:02:42.835504 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc5d2d66-2d38-480f-aa31-f9ef36f839c5" containerName="copy" Jan 12 14:02:42 crc kubenswrapper[4580]: I0112 14:02:42.835561 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc5d2d66-2d38-480f-aa31-f9ef36f839c5" containerName="copy" Jan 12 14:02:42 crc kubenswrapper[4580]: E0112 14:02:42.835641 4580 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7af5b29-7fe2-471d-a27a-a3af652c56fa" containerName="keystone-cron" Jan 12 14:02:42 crc kubenswrapper[4580]: I0112 14:02:42.835693 4580 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7af5b29-7fe2-471d-a27a-a3af652c56fa" containerName="keystone-cron" Jan 12 14:02:42 crc kubenswrapper[4580]: I0112 14:02:42.835919 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc5d2d66-2d38-480f-aa31-f9ef36f839c5" containerName="copy" Jan 12 14:02:42 crc kubenswrapper[4580]: I0112 14:02:42.835985 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc5d2d66-2d38-480f-aa31-f9ef36f839c5" containerName="gather" Jan 12 14:02:42 crc kubenswrapper[4580]: I0112 14:02:42.836039 4580 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7af5b29-7fe2-471d-a27a-a3af652c56fa" containerName="keystone-cron" Jan 12 14:02:42 crc kubenswrapper[4580]: I0112 14:02:42.842441 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vp6k8" Jan 12 14:02:42 crc kubenswrapper[4580]: I0112 14:02:42.843821 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vp6k8"] Jan 12 14:02:42 crc kubenswrapper[4580]: I0112 14:02:42.867356 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmblv\" (UniqueName: \"kubernetes.io/projected/ea71ca71-1b5e-4fa3-a9ad-64b30a97a971-kube-api-access-vmblv\") pod \"certified-operators-vp6k8\" (UID: \"ea71ca71-1b5e-4fa3-a9ad-64b30a97a971\") " pod="openshift-marketplace/certified-operators-vp6k8" Jan 12 14:02:42 crc kubenswrapper[4580]: I0112 14:02:42.867487 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea71ca71-1b5e-4fa3-a9ad-64b30a97a971-utilities\") pod \"certified-operators-vp6k8\" (UID: \"ea71ca71-1b5e-4fa3-a9ad-64b30a97a971\") " pod="openshift-marketplace/certified-operators-vp6k8" Jan 12 14:02:42 crc kubenswrapper[4580]: I0112 14:02:42.867647 4580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea71ca71-1b5e-4fa3-a9ad-64b30a97a971-catalog-content\") pod \"certified-operators-vp6k8\" (UID: \"ea71ca71-1b5e-4fa3-a9ad-64b30a97a971\") " pod="openshift-marketplace/certified-operators-vp6k8" Jan 12 14:02:42 crc kubenswrapper[4580]: I0112 14:02:42.969581 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea71ca71-1b5e-4fa3-a9ad-64b30a97a971-catalog-content\") pod \"certified-operators-vp6k8\" (UID: \"ea71ca71-1b5e-4fa3-a9ad-64b30a97a971\") " pod="openshift-marketplace/certified-operators-vp6k8" Jan 12 14:02:42 crc kubenswrapper[4580]: I0112 14:02:42.969694 4580 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-vmblv\" (UniqueName: \"kubernetes.io/projected/ea71ca71-1b5e-4fa3-a9ad-64b30a97a971-kube-api-access-vmblv\") pod \"certified-operators-vp6k8\" (UID: \"ea71ca71-1b5e-4fa3-a9ad-64b30a97a971\") " pod="openshift-marketplace/certified-operators-vp6k8" Jan 12 14:02:42 crc kubenswrapper[4580]: I0112 14:02:42.969750 4580 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea71ca71-1b5e-4fa3-a9ad-64b30a97a971-utilities\") pod \"certified-operators-vp6k8\" (UID: \"ea71ca71-1b5e-4fa3-a9ad-64b30a97a971\") " pod="openshift-marketplace/certified-operators-vp6k8" Jan 12 14:02:42 crc kubenswrapper[4580]: I0112 14:02:42.970224 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea71ca71-1b5e-4fa3-a9ad-64b30a97a971-utilities\") pod \"certified-operators-vp6k8\" (UID: \"ea71ca71-1b5e-4fa3-a9ad-64b30a97a971\") " pod="openshift-marketplace/certified-operators-vp6k8" Jan 12 14:02:42 crc kubenswrapper[4580]: I0112 14:02:42.970471 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea71ca71-1b5e-4fa3-a9ad-64b30a97a971-catalog-content\") pod \"certified-operators-vp6k8\" (UID: \"ea71ca71-1b5e-4fa3-a9ad-64b30a97a971\") " pod="openshift-marketplace/certified-operators-vp6k8" Jan 12 14:02:42 crc kubenswrapper[4580]: I0112 14:02:42.990524 4580 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmblv\" (UniqueName: \"kubernetes.io/projected/ea71ca71-1b5e-4fa3-a9ad-64b30a97a971-kube-api-access-vmblv\") pod \"certified-operators-vp6k8\" (UID: \"ea71ca71-1b5e-4fa3-a9ad-64b30a97a971\") " pod="openshift-marketplace/certified-operators-vp6k8" Jan 12 14:02:43 crc kubenswrapper[4580]: I0112 14:02:43.171828 4580 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vp6k8"
Jan 12 14:02:43 crc kubenswrapper[4580]: I0112 14:02:43.617472 4580 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vp6k8"]
Jan 12 14:02:43 crc kubenswrapper[4580]: I0112 14:02:43.711389 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vp6k8" event={"ID":"ea71ca71-1b5e-4fa3-a9ad-64b30a97a971","Type":"ContainerStarted","Data":"6a98d558c4f6eea7f8a29d0cb28a0afe955f068c4f3e018436432fb63b689be8"}
Jan 12 14:02:44 crc kubenswrapper[4580]: I0112 14:02:44.722772 4580 generic.go:334] "Generic (PLEG): container finished" podID="ea71ca71-1b5e-4fa3-a9ad-64b30a97a971" containerID="22f89229be61b8ac6dd5aec6998dce32c8fd5a6e43e37fb796f2d503c1fdd811" exitCode=0
Jan 12 14:02:44 crc kubenswrapper[4580]: I0112 14:02:44.722894 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vp6k8" event={"ID":"ea71ca71-1b5e-4fa3-a9ad-64b30a97a971","Type":"ContainerDied","Data":"22f89229be61b8ac6dd5aec6998dce32c8fd5a6e43e37fb796f2d503c1fdd811"}
Jan 12 14:02:45 crc kubenswrapper[4580]: I0112 14:02:45.282176 4580 scope.go:117] "RemoveContainer" containerID="094e6fc847e202ee61872ce24e3a26d7ba32df37f59d98679a68486511a55fc9"
Jan 12 14:02:45 crc kubenswrapper[4580]: E0112 14:02:45.282565 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdz6l_openshift-machine-config-operator(aaecc77f-21ca-4f15-86e0-0dff03d2ab7b)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b"
Jan 12 14:02:46 crc kubenswrapper[4580]: I0112 14:02:46.750977 4580 generic.go:334] "Generic (PLEG): container finished" podID="ea71ca71-1b5e-4fa3-a9ad-64b30a97a971" containerID="872beaaa7aebc951eef72cebd0aa04226863aa51ea8fe68cdfc8c9608f7fb49e" exitCode=0
Jan 12 14:02:46 crc kubenswrapper[4580]: I0112 14:02:46.751638 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vp6k8" event={"ID":"ea71ca71-1b5e-4fa3-a9ad-64b30a97a971","Type":"ContainerDied","Data":"872beaaa7aebc951eef72cebd0aa04226863aa51ea8fe68cdfc8c9608f7fb49e"}
Jan 12 14:02:47 crc kubenswrapper[4580]: I0112 14:02:47.780470 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vp6k8" event={"ID":"ea71ca71-1b5e-4fa3-a9ad-64b30a97a971","Type":"ContainerStarted","Data":"3aa0189cecf2b1038e088106215496e31f3c9532a95bf15572b6b5348182dad4"}
Jan 12 14:02:47 crc kubenswrapper[4580]: I0112 14:02:47.804710 4580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vp6k8" podStartSLOduration=3.195227384 podStartE2EDuration="5.804691893s" podCreationTimestamp="2026-01-12 14:02:42 +0000 UTC" firstStartedPulling="2026-01-12 14:02:44.725679423 +0000 UTC m=+3363.769898113" lastFinishedPulling="2026-01-12 14:02:47.335143932 +0000 UTC m=+3366.379362622" observedRunningTime="2026-01-12 14:02:47.797532481 +0000 UTC m=+3366.841751171" watchObservedRunningTime="2026-01-12 14:02:47.804691893 +0000 UTC m=+3366.848910583"
Jan 12 14:02:53 crc kubenswrapper[4580]: I0112 14:02:53.172279 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vp6k8"
Jan 12 14:02:53 crc kubenswrapper[4580]: I0112 14:02:53.172902 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vp6k8"
Jan 12 14:02:53 crc kubenswrapper[4580]: I0112 14:02:53.212262 4580 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vp6k8"
Jan 12 14:02:53 crc kubenswrapper[4580]: I0112 14:02:53.873353 4580 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vp6k8"
Jan 12 14:02:53 crc kubenswrapper[4580]: I0112 14:02:53.915670 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vp6k8"]
Jan 12 14:02:55 crc kubenswrapper[4580]: I0112 14:02:55.847312 4580 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vp6k8" podUID="ea71ca71-1b5e-4fa3-a9ad-64b30a97a971" containerName="registry-server" containerID="cri-o://3aa0189cecf2b1038e088106215496e31f3c9532a95bf15572b6b5348182dad4" gracePeriod=2
Jan 12 14:02:56 crc kubenswrapper[4580]: I0112 14:02:56.267887 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vp6k8"
Jan 12 14:02:56 crc kubenswrapper[4580]: I0112 14:02:56.282600 4580 scope.go:117] "RemoveContainer" containerID="094e6fc847e202ee61872ce24e3a26d7ba32df37f59d98679a68486511a55fc9"
Jan 12 14:02:56 crc kubenswrapper[4580]: E0112 14:02:56.283077 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdz6l_openshift-machine-config-operator(aaecc77f-21ca-4f15-86e0-0dff03d2ab7b)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b"
Jan 12 14:02:56 crc kubenswrapper[4580]: I0112 14:02:56.345365 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmblv\" (UniqueName: \"kubernetes.io/projected/ea71ca71-1b5e-4fa3-a9ad-64b30a97a971-kube-api-access-vmblv\") pod \"ea71ca71-1b5e-4fa3-a9ad-64b30a97a971\" (UID: \"ea71ca71-1b5e-4fa3-a9ad-64b30a97a971\") "
Jan 12 14:02:56 crc kubenswrapper[4580]: I0112 14:02:56.345518 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea71ca71-1b5e-4fa3-a9ad-64b30a97a971-utilities\") pod \"ea71ca71-1b5e-4fa3-a9ad-64b30a97a971\" (UID: \"ea71ca71-1b5e-4fa3-a9ad-64b30a97a971\") "
Jan 12 14:02:56 crc kubenswrapper[4580]: I0112 14:02:56.345573 4580 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea71ca71-1b5e-4fa3-a9ad-64b30a97a971-catalog-content\") pod \"ea71ca71-1b5e-4fa3-a9ad-64b30a97a971\" (UID: \"ea71ca71-1b5e-4fa3-a9ad-64b30a97a971\") "
Jan 12 14:02:56 crc kubenswrapper[4580]: I0112 14:02:56.346266 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea71ca71-1b5e-4fa3-a9ad-64b30a97a971-utilities" (OuterVolumeSpecName: "utilities") pod "ea71ca71-1b5e-4fa3-a9ad-64b30a97a971" (UID: "ea71ca71-1b5e-4fa3-a9ad-64b30a97a971"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 12 14:02:56 crc kubenswrapper[4580]: I0112 14:02:56.358736 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea71ca71-1b5e-4fa3-a9ad-64b30a97a971-kube-api-access-vmblv" (OuterVolumeSpecName: "kube-api-access-vmblv") pod "ea71ca71-1b5e-4fa3-a9ad-64b30a97a971" (UID: "ea71ca71-1b5e-4fa3-a9ad-64b30a97a971"). InnerVolumeSpecName "kube-api-access-vmblv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 12 14:02:56 crc kubenswrapper[4580]: I0112 14:02:56.384282 4580 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea71ca71-1b5e-4fa3-a9ad-64b30a97a971-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ea71ca71-1b5e-4fa3-a9ad-64b30a97a971" (UID: "ea71ca71-1b5e-4fa3-a9ad-64b30a97a971"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 12 14:02:56 crc kubenswrapper[4580]: I0112 14:02:56.448333 4580 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmblv\" (UniqueName: \"kubernetes.io/projected/ea71ca71-1b5e-4fa3-a9ad-64b30a97a971-kube-api-access-vmblv\") on node \"crc\" DevicePath \"\""
Jan 12 14:02:56 crc kubenswrapper[4580]: I0112 14:02:56.448366 4580 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea71ca71-1b5e-4fa3-a9ad-64b30a97a971-utilities\") on node \"crc\" DevicePath \"\""
Jan 12 14:02:56 crc kubenswrapper[4580]: I0112 14:02:56.448382 4580 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea71ca71-1b5e-4fa3-a9ad-64b30a97a971-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 12 14:02:56 crc kubenswrapper[4580]: I0112 14:02:56.857504 4580 generic.go:334] "Generic (PLEG): container finished" podID="ea71ca71-1b5e-4fa3-a9ad-64b30a97a971" containerID="3aa0189cecf2b1038e088106215496e31f3c9532a95bf15572b6b5348182dad4" exitCode=0
Jan 12 14:02:56 crc kubenswrapper[4580]: I0112 14:02:56.857556 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vp6k8" event={"ID":"ea71ca71-1b5e-4fa3-a9ad-64b30a97a971","Type":"ContainerDied","Data":"3aa0189cecf2b1038e088106215496e31f3c9532a95bf15572b6b5348182dad4"}
Jan 12 14:02:56 crc kubenswrapper[4580]: I0112 14:02:56.857585 4580 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vp6k8"
Jan 12 14:02:56 crc kubenswrapper[4580]: I0112 14:02:56.857596 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vp6k8" event={"ID":"ea71ca71-1b5e-4fa3-a9ad-64b30a97a971","Type":"ContainerDied","Data":"6a98d558c4f6eea7f8a29d0cb28a0afe955f068c4f3e018436432fb63b689be8"}
Jan 12 14:02:56 crc kubenswrapper[4580]: I0112 14:02:56.857617 4580 scope.go:117] "RemoveContainer" containerID="3aa0189cecf2b1038e088106215496e31f3c9532a95bf15572b6b5348182dad4"
Jan 12 14:02:56 crc kubenswrapper[4580]: I0112 14:02:56.882848 4580 scope.go:117] "RemoveContainer" containerID="872beaaa7aebc951eef72cebd0aa04226863aa51ea8fe68cdfc8c9608f7fb49e"
Jan 12 14:02:56 crc kubenswrapper[4580]: I0112 14:02:56.902325 4580 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vp6k8"]
Jan 12 14:02:56 crc kubenswrapper[4580]: I0112 14:02:56.908413 4580 scope.go:117] "RemoveContainer" containerID="22f89229be61b8ac6dd5aec6998dce32c8fd5a6e43e37fb796f2d503c1fdd811"
Jan 12 14:02:56 crc kubenswrapper[4580]: I0112 14:02:56.910082 4580 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vp6k8"]
Jan 12 14:02:56 crc kubenswrapper[4580]: I0112 14:02:56.936745 4580 scope.go:117] "RemoveContainer" containerID="3aa0189cecf2b1038e088106215496e31f3c9532a95bf15572b6b5348182dad4"
Jan 12 14:02:56 crc kubenswrapper[4580]: E0112 14:02:56.937277 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3aa0189cecf2b1038e088106215496e31f3c9532a95bf15572b6b5348182dad4\": container with ID starting with 3aa0189cecf2b1038e088106215496e31f3c9532a95bf15572b6b5348182dad4 not found: ID does not exist" containerID="3aa0189cecf2b1038e088106215496e31f3c9532a95bf15572b6b5348182dad4"
Jan 12 14:02:56 crc kubenswrapper[4580]: I0112 14:02:56.937308 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3aa0189cecf2b1038e088106215496e31f3c9532a95bf15572b6b5348182dad4"} err="failed to get container status \"3aa0189cecf2b1038e088106215496e31f3c9532a95bf15572b6b5348182dad4\": rpc error: code = NotFound desc = could not find container \"3aa0189cecf2b1038e088106215496e31f3c9532a95bf15572b6b5348182dad4\": container with ID starting with 3aa0189cecf2b1038e088106215496e31f3c9532a95bf15572b6b5348182dad4 not found: ID does not exist"
Jan 12 14:02:56 crc kubenswrapper[4580]: I0112 14:02:56.937333 4580 scope.go:117] "RemoveContainer" containerID="872beaaa7aebc951eef72cebd0aa04226863aa51ea8fe68cdfc8c9608f7fb49e"
Jan 12 14:02:56 crc kubenswrapper[4580]: E0112 14:02:56.937537 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"872beaaa7aebc951eef72cebd0aa04226863aa51ea8fe68cdfc8c9608f7fb49e\": container with ID starting with 872beaaa7aebc951eef72cebd0aa04226863aa51ea8fe68cdfc8c9608f7fb49e not found: ID does not exist" containerID="872beaaa7aebc951eef72cebd0aa04226863aa51ea8fe68cdfc8c9608f7fb49e"
Jan 12 14:02:56 crc kubenswrapper[4580]: I0112 14:02:56.937560 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"872beaaa7aebc951eef72cebd0aa04226863aa51ea8fe68cdfc8c9608f7fb49e"} err="failed to get container status \"872beaaa7aebc951eef72cebd0aa04226863aa51ea8fe68cdfc8c9608f7fb49e\": rpc error: code = NotFound desc = could not find container \"872beaaa7aebc951eef72cebd0aa04226863aa51ea8fe68cdfc8c9608f7fb49e\": container with ID starting with 872beaaa7aebc951eef72cebd0aa04226863aa51ea8fe68cdfc8c9608f7fb49e not found: ID does not exist"
Jan 12 14:02:56 crc kubenswrapper[4580]: I0112 14:02:56.937575 4580 scope.go:117] "RemoveContainer" containerID="22f89229be61b8ac6dd5aec6998dce32c8fd5a6e43e37fb796f2d503c1fdd811"
Jan 12 14:02:56 crc kubenswrapper[4580]: E0112 14:02:56.937851 4580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22f89229be61b8ac6dd5aec6998dce32c8fd5a6e43e37fb796f2d503c1fdd811\": container with ID starting with 22f89229be61b8ac6dd5aec6998dce32c8fd5a6e43e37fb796f2d503c1fdd811 not found: ID does not exist" containerID="22f89229be61b8ac6dd5aec6998dce32c8fd5a6e43e37fb796f2d503c1fdd811"
Jan 12 14:02:56 crc kubenswrapper[4580]: I0112 14:02:56.937870 4580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22f89229be61b8ac6dd5aec6998dce32c8fd5a6e43e37fb796f2d503c1fdd811"} err="failed to get container status \"22f89229be61b8ac6dd5aec6998dce32c8fd5a6e43e37fb796f2d503c1fdd811\": rpc error: code = NotFound desc = could not find container \"22f89229be61b8ac6dd5aec6998dce32c8fd5a6e43e37fb796f2d503c1fdd811\": container with ID starting with 22f89229be61b8ac6dd5aec6998dce32c8fd5a6e43e37fb796f2d503c1fdd811 not found: ID does not exist"
Jan 12 14:02:57 crc kubenswrapper[4580]: I0112 14:02:57.291631 4580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea71ca71-1b5e-4fa3-a9ad-64b30a97a971" path="/var/lib/kubelet/pods/ea71ca71-1b5e-4fa3-a9ad-64b30a97a971/volumes"
Jan 12 14:03:08 crc kubenswrapper[4580]: I0112 14:03:08.282169 4580 scope.go:117] "RemoveContainer" containerID="094e6fc847e202ee61872ce24e3a26d7ba32df37f59d98679a68486511a55fc9"
Jan 12 14:03:08 crc kubenswrapper[4580]: E0112 14:03:08.283135 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdz6l_openshift-machine-config-operator(aaecc77f-21ca-4f15-86e0-0dff03d2ab7b)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b"
Jan 12 14:03:21 crc kubenswrapper[4580]: I0112 14:03:21.289074 4580 scope.go:117] "RemoveContainer" containerID="094e6fc847e202ee61872ce24e3a26d7ba32df37f59d98679a68486511a55fc9"
Jan 12 14:03:21 crc kubenswrapper[4580]: E0112 14:03:21.290635 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdz6l_openshift-machine-config-operator(aaecc77f-21ca-4f15-86e0-0dff03d2ab7b)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b"
Jan 12 14:03:33 crc kubenswrapper[4580]: I0112 14:03:33.282591 4580 scope.go:117] "RemoveContainer" containerID="094e6fc847e202ee61872ce24e3a26d7ba32df37f59d98679a68486511a55fc9"
Jan 12 14:03:33 crc kubenswrapper[4580]: E0112 14:03:33.283327 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdz6l_openshift-machine-config-operator(aaecc77f-21ca-4f15-86e0-0dff03d2ab7b)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b"
Jan 12 14:03:45 crc kubenswrapper[4580]: I0112 14:03:45.281898 4580 scope.go:117] "RemoveContainer" containerID="094e6fc847e202ee61872ce24e3a26d7ba32df37f59d98679a68486511a55fc9"
Jan 12 14:03:45 crc kubenswrapper[4580]: E0112 14:03:45.282828 4580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdz6l_openshift-machine-config-operator(aaecc77f-21ca-4f15-86e0-0dff03d2ab7b)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" podUID="aaecc77f-21ca-4f15-86e0-0dff03d2ab7b"
Jan 12 14:04:00 crc kubenswrapper[4580]: I0112
14:04:00.282906 4580 scope.go:117] "RemoveContainer" containerID="094e6fc847e202ee61872ce24e3a26d7ba32df37f59d98679a68486511a55fc9"
Jan 12 14:04:00 crc kubenswrapper[4580]: I0112 14:04:00.501058 4580 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdz6l" event={"ID":"aaecc77f-21ca-4f15-86e0-0dff03d2ab7b","Type":"ContainerStarted","Data":"867d58b7977174db188b929a94ae7bdb34c0e2c4b2a76c42451147e0c4dc811b"}